Aug 31, 2024

Dancing galaxies make a monster at the cosmic dawn

Astronomers have spotted a pair of galaxies in the act of merging 12.8 billion years ago. The characteristics of these galaxies indicate that the merger will form a monster galaxy, one of the brightest types of objects in the Universe. These results are important for understanding how galaxies and black holes evolved in the early Universe.

Quasars are bright objects powered by matter falling into a supermassive black hole at the center of a galaxy in the early Universe.

The most accepted theory is that when two gas-rich galaxies merge to form a single larger galaxy, the gravitational interaction between them causes gas to fall towards the supermassive black hole in one or both galaxies, triggering quasar activity.

To test this theory, an international team of researchers led by Takuma Izumi used the ALMA (Atacama Large Millimeter/submillimeter Array) radio telescope to study the earliest known pair of close quasars.

This pair was discovered by Yoshiki Matsuoka, at Ehime University in Japan, in images taken by the Subaru Telescope.

Located in the direction of the constellation Virgo, this pair of quasars existed during the first 900 million years of the Universe.

The pair is dim, indicating that the quasars are still in the early stages of their evolution.

The ALMA observations mapped the host galaxies of the quasars and showed that the galaxies are linked by a "bridge" of gas and dust.

This indicates that the two galaxies are in fact merging.

Read more at Science Daily

How a salt giant radically reshaped Mediterranean marine biodiversity

A new study paves the way to understanding biotic recovery after an ecological crisis in the Mediterranean Sea about 5.5 million years ago. An international team led by Konstantina Agiadi from the University of Vienna has now been able to quantify how marine biota was impacted by the salinization of the Mediterranean: Only 11 percent of the endemic species survived the crisis, and the biodiversity did not recover for at least another 1.7 million years. The study was just published in the journal Science.

Lithospheric movements throughout Earth history have repeatedly led to the isolation of regional seas from the world ocean and to the massive accumulations of salt. Salt giants of thousands of cubic kilometers have been found by geologists in Europe, Australia, Siberia, the Middle East, and elsewhere. These salt accumulations present valuable natural resources and have been exploited from antiquity until today in mines around the world (e.g. at the Hallstatt mine in Austria or the Khewra Salt Mine in Pakistan).

The Mediterranean salt giant is a kilometer-thick layer of salt beneath the Mediterranean Sea, which was first discovered in the early 1970s. It formed about 5.5 million years ago because of the disconnection from the Atlantic during the Messinian Salinity Crisis. In a study published in the journal Science, an international team of researchers -- comprising 29 scientists from 25 institutes across Europe -- led by Konstantina Agiadi from the University of Vienna was now able to quantify the loss of biodiversity in the Mediterranean Sea due to the Messinian crisis and the biotic recovery afterwards.

Huge impact on marine biodiversity


After several decades of painstaking research on fossils dated to between 12 and 3.6 million years ago, found on land in the peri-Mediterranean countries and in deep-sea sediment cores, the team found that almost 67% of the marine species in the Mediterranean Sea after the crisis were different from those before the crisis. Only 86 of 779 endemic species (living exclusively in the Mediterranean before the crisis) survived the enormous change in living conditions after the separation from the Atlantic. The change in the configuration of the gateways, which led to the formation of the salt giant itself, resulted in abrupt salinity and temperature fluctuations, but also changed the migration pathways of marine organisms and the flow of larvae and plankton, and disrupted central processes of the ecosystem. Due to these changes, a large proportion of the Mediterranean inhabitants of that time, such as tropical reef-building corals, died out. After the reconnection to the Atlantic and the invasion of new species like the great white shark and oceanic dolphins, Mediterranean marine biodiversity presented a novel pattern, with the number of species decreasing from west to east, as it does today.

Recovery took longer than expected

Because peripheral seas like the Mediterranean are important biodiversity hotspots, it was very likely that the formation of salt giants throughout geologic history had a great impact, but it hadn't been quantified up to now. "Our study now provides the first statistical analysis of such a major ecological crisis," explains Konstantina Agiadi from the Department of Geology. Furthermore, it also quantifies for the first time the timescales of recovery after a marine environmental crisis, which are actually much longer than expected: "The biodiversity in terms of number of species only recovered after more than 1.7 million years," says the geoscientist. The methods used in the study also provide a model connecting plate tectonics, the birth and death of the oceans, salt, and marine life that could be applied to other regions of the world.

Read more at Science Daily

This tiny backyard bug does the fastest backflips on Earth

Move over, Sonic. There's a new spin-jumping champion in town -- the globular springtail (Dicyrtomina minuta). This diminutive hexapod backflips into the air, spinning as it soars to over 60 times its body height in the blink of an eye, and a new study features the first in-depth look at its jumping prowess.

Globular springtails are tiny, usually only a couple millimeters in body length. They don't fly, bite or sting. But they can jump. In fact, jumping is their go-to (and only) plan for avoiding predators. And they excel at it -- to the naked eye it seems as though they vanish entirely when they take off.

"When globular springtails jump, they don't just leap up and down, they flip through the air -- it's the closest you can get to a Sonic the Hedgehog jump in real life," says Adrian Smith, research assistant professor of biology at North Carolina State University and head of the evolutionary biology and behavior research lab at the North Carolina Museum of Natural Sciences. "So naturally I wanted to see how they do it."

Finding the globular springtails was easy enough -- they're all around us. The ones in this study are usually out from December through March. Smith "recruited" his research subjects by sifting through leaf litter from his own backyard. But the next part proved to be the most challenging.

"Globular springtails jump so fast that you can't see it in real time," Smith says. "If you try to film the jump with a regular camera, the springtail will appear in one frame, then vanish. When you look at the picture closely, you can see faint vapor trail curlicues left behind where it flipped through the one frame."

Smith solved that problem by using cameras that shoot 40,000 frames per second. He prompted the springtails to jump by shining a light on them or lightly prodding them with an artist's paintbrush. Then he looked at how they took off, how fast and far they went, and how they landed.

Globular springtails don't use their legs to jump. Instead, they have an appendage called a furca that folds up underneath their abdomen and has a tiny, forked structure at its tip. When the springtails jump, the furca flips down and the forked tip pushes against the ground, launching them into a series of insanely fast backflips.

What do we mean by insanely fast?

"It only takes a globular springtail one thousandth of a second to backflip off the ground and they can reach a peak rate of 368 rotations per second," Smith says. "They accelerate their bodies into a jump at about the same rate as a flea, but on top of that they spin. No other animal on earth does a backflip faster than a globular springtail."

The springtails were also able to launch themselves over 60 millimeters into the air -- more than 60 times their own height. And in most cases, they went backward.
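To put those quoted figures side by side, the short calculation below simply converts them into other units. It is an illustrative back-of-envelope sketch, not part of the study's analysis; the 60-fold height ratio is used only to infer an approximate body size.

```python
# Back-of-envelope conversions of the figures quoted above (illustrative only,
# not part of the study's analysis).
peak_rotation_hz = 368        # peak backflip rate, rotations per second
takeoff_time_s = 1e-3         # roughly one thousandth of a second to leave the ground
jump_height_mm = 60           # jump height, more than 60 times body height

print(f"Peak spin rate: {peak_rotation_hz * 360:,.0f} degrees per second")
print(f"Rotations completed in 1 ms at the peak rate: ~{peak_rotation_hz * takeoff_time_s:.2f}")
print(f"Implied body height: ~{jump_height_mm / 60:.0f} mm")
```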

"They can lean into a jump and go slightly sideways, but when launching from a flat surface, they mostly travel up and backward, never forward," says Jacob Harrison, a postdoctoral researcher at the Georgia Institute of Technology and paper co-author. "Their inability to jump forward was an indication to us that jumping is primarily a means to escape danger, rather than a form of general locomotion."

Landings came in two styles: uncontrolled and anchored. Globular springtails do have a sticky forked tube they can evert -- or push out of their bodies -- to grapple a surface or halt their momentum, but Smith observed that bouncing and tumbling to a stop was just as common as anchored landings.

Read more at Science Daily

Aug 30, 2024

Highest-resolution observations yet from the surface of Earth

The Event Horizon Telescope (EHT) Collaboration has conducted test observations, using the Atacama Large Millimeter/submillimeter Array (ALMA) and other facilities, that achieved the highest resolution ever obtained from the surface of Earth. They managed this feat by detecting light from distant galaxies at a frequency of around 345 GHz, equivalent to a wavelength of 0.87 mm. The Collaboration estimates that in future they will be able to make black hole images that are 50% more detailed than was possible before, bringing the region immediately outside the boundary of nearby supermassive black holes into sharper focus. They will also be able to image more black holes than they have done so far. The new detections, part of a pilot experiment, were published today in The Astronomical Journal.

The EHT Collaboration released images of M87*, the supermassive black hole at the centre of the M87 galaxy, in 2019, and of Sgr A*, the black hole at the heart of our Milky Way galaxy, in 2022. These images were obtained by linking together multiple radio observatories across the planet, using a technique called very long baseline interferometry (VLBI), to form a single 'Earth-sized' virtual telescope.

To get higher-resolution images, astronomers typically rely on bigger telescopes -- or a larger separation between observatories working as part of an interferometer. But since the EHT was already the size of Earth, increasing the resolution of their ground-based observations called for a different approach. Another way to increase the resolution of a telescope is to observe light of a shorter wavelength -- and that's what the EHT Collaboration has now done.
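The scaling behind that statement is simple: an interferometer's finest resolvable detail goes roughly as the observing wavelength divided by its longest baseline. The sketch below is a back-of-envelope check assuming an Earth-diameter baseline; it is not the collaboration's analysis, and the exact published figures depend on the real array geometry.

```python
# Back-of-envelope resolution estimate for an Earth-sized interferometer
# (illustrative only; actual EHT figures depend on the real array geometry).
import math

RAD_TO_MICROARCSEC = math.degrees(1) * 3600 * 1e6  # radians -> microarcseconds

def resolution_microarcsec(wavelength_m, baseline_m):
    """Approximate angular resolution ~ wavelength / longest baseline."""
    return (wavelength_m / baseline_m) * RAD_TO_MICROARCSEC

EARTH_DIAMETER_M = 12_742_000.0  # roughly the longest ground-based baseline

for wavelength_mm in (1.3, 0.87):
    theta = resolution_microarcsec(wavelength_mm * 1e-3, EARTH_DIAMETER_M)
    print(f"{wavelength_mm} mm -> ~{theta:.0f} microarcseconds")

# ~21 microarcseconds at 1.3 mm versus ~14 at 0.87 mm: the ratio 1.3/0.87 ~ 1.5
# corresponds to the roughly 50% sharpening described in the text.
```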

"With the EHT, we saw the first images of black holes using the 1.3-mm wavelength observations, but the bright ring we saw, formed by light bending in the black hole's gravity, still looked blurry because we were at the absolute limits of how sharp we could make the images," said the study's co-lead Alexander Raymond, previously a postdoctoral scholar at the Center for Astrophysics | Harvard & Smithsonian (CfA), and now at the Jet Propulsion Laboratory, both in the United States. "At 0.87 mm, our images will be sharper and more detailed, which in turn will likely reveal new properties, both those that were previously predicted and maybe some that weren't."

To show that they could make detections at 0.87 mm, the Collaboration conducted test observations of distant, bright galaxies at this wavelength. Rather than using the full EHT array, they employed two smaller subarrays, both of which included ALMA and the Atacama Pathfinder EXperiment (APEX) in the Atacama Desert in Chile. The European Southern Observatory (ESO) is a partner in ALMA and co-hosts and co-operates APEX. Other facilities used include the IRAM 30-meter telescope in Spain and the NOrthern Extended Millimeter Array (NOEMA) in France, as well as the Greenland Telescope and the Submillimeter Array in Hawai'i.

In this pilot experiment, the Collaboration achieved observations with detail as fine as 19 microarcseconds, meaning they observed at the highest-ever resolution from the surface of Earth. They have not been able to obtain images yet, though: while they made robust detections of light from several distant galaxies, not enough antennas were used to be able to accurately reconstruct an image from the data.

This technical test has opened up a new window to study black holes. With the full array, the EHT could see details as small as 13 microarcseconds, equivalent to seeing a bottle cap on the Moon from Earth. This means that, at 0.87 mm, they will be able to get images with a resolution about 50% higher than that of previously released M87* and SgrA* 1.3-mm images. In addition, there's potential to observe more distant, smaller and fainter black holes than the two the Collaboration has imaged thus far.

EHT Founding Director Sheperd "Shep" Doeleman, an astrophysicist at the CfA and study co-lead, says: "Looking at changes in the surrounding gas at different wavelengths will help us solve the mystery of how black holes attract and accrete matter, and how they can launch powerful jets that stream over galactic distances."

This is the first time that the VLBI technique has been successfully used at the 0.87 mm wavelength. While the ability to observe the night sky at 0.87 mm existed before the new detections, using the VLBI technique at this wavelength has always presented challenges that took time and technological advances to overcome. For example, water vapour in the atmosphere absorbs waves at 0.87 mm much more than it does at 1.3 mm, making it more difficult for radio telescopes to receive signals from black holes at the shorter wavelength. Combined with increasingly pronounced atmospheric turbulence and noise buildup at shorter wavelengths, and an inability to control global weather conditions during atmospherically sensitive observations, progress to shorter wavelengths for VLBI -- especially those that cross the barrier into the submillimetre regime -- has been slow. But with these new detections, that's all changed.

Read more at Science Daily

Number of fish species at risk of extinction fivefold higher than previous estimates, according to a new prediction

Researchers predict that 12.7% of marine teleost fish species are at risk of extinction, up fivefold from the International Union for Conservation of Nature's prior estimate of 2.5%. Nicolas Loiseau and Nicolas Mouquet from the MARBEC Unit (the Marine Biodiversity, Exploitation and Conservation Unit) in Montpellier, France, and colleagues report these findings in a study published August 29th in the open-access journal PLOS Biology. Their report includes nearly 5,000 species that did not receive an IUCN conservation status due to insufficient data.

The IUCN's Red List of Threatened Species tracks more than 150,000 species to guide global conservation efforts on behalf of the most threatened.

However, 38% of marine fish species (or 4,992 species at the time of this research) are considered Data-Deficient and do not receive an official conservation status or the associated protections.

To better direct conservation efforts toward the species that need them, Loiseau and colleagues combined a machine learning model with an artificial neural network to predict the extinction risks of Data-Deficient species.

The models were trained on occurrence data, biological traits, taxonomy and human uses from 13,195 species.
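The general idea, scoring unassessed species by learning from species that already have an IUCN status, can be sketched with a generic classifier. The toy example below uses made-up features standing in for the traits described above; it is not the authors' actual model, pipeline or data.

```python
# Illustrative sketch only (not the authors' pipeline): a classifier trained on
# species with an IUCN status is used to score Data-Deficient species.
# Feature names are hypothetical stand-ins for the traits described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy feature matrix: [log range size, log body size, growth rate, min depth]
X_assessed = rng.normal(size=(1000, 4))             # species with an IUCN status
y_assessed = (X_assessed[:, 0] < -0.5).astype(int)  # 1 = Threatened (toy rule)
X_data_deficient = rng.normal(size=(200, 4))        # species lacking a status

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_assessed, y_assessed)

# Probability of belonging to a Threatened category for the unassessed species
p_threatened = model.predict_proba(X_data_deficient)[:, 1]
print(f"{(p_threatened > 0.5).sum()} of 200 toy species predicted Threatened")
```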

They categorized 78.5% of the 4,992 species as Non-Threatened or Threatened (which includes Critically Endangered, Endangered and Vulnerable IUCN categories). Predicted Threatened species increased fivefold (from 334 to 1,671) and predicted Non-Threatened species increased by a third (from 7,869 to 10,451).

Predicted Threatened species tended to have a small geographic range, large body size and low growth rate.

The extinction risk was also correlated with shallow habitats.

The South China Sea, the Philippine and Celebes Seas and the west coasts of Australia and North America emerged as hotspots for predicted Threatened species.

The researchers recommend increased research and conservation efforts in these areas.

The researchers observed "a marked change in conservation priority ranking after species IUCN predictions," recommending that the Pacific Islands and Southern Hemisphere's polar and subpolar regions be prioritized to account for emerging at-risk species.

Many species that remained Data-Deficient occur in the Coral Triangle, indicating that additional research is needed there.

The researchers note that models cannot replace direct evaluations of at-risk species, but AI offers a unique opportunity to provide a rapid, extensive and cost-effective evaluation of species' extinction risk.

Read more at Science Daily

Researchers map 50,000 of DNA's mysterious 'knots' in the human genome

Innovative study of DNA's hidden structures may open up new approaches for treatment and diagnosis of diseases, including cancer.

DNA is well-known for its double helix shape. But the human genome also contains more than 50,000 unusual 'knot'-like DNA structures called i-motifs, researchers at the Garvan Institute of Medical Research have discovered.

Published today in The EMBO Journal is the first comprehensive map of these unique DNA structures, shedding light on their potential roles in gene regulation and disease.

In a landmark 2018 study, Garvan scientists were the first to directly visualise i-motifs inside living human cells using a new antibody tool they developed to recognise and attach to i-motifs. The current research builds on those findings by deploying this antibody to identify i-motif locations across the entire genome.

"In this study, we mapped more than 50,000 i-motif sites in the human genome that occur in all three of the cell types we examined," says senior author Professor Daniel Christ, Head of the Antibody Therapeutics Lab and Director of the Centre for Targeted Therapy at Garvan. "That's a remarkably high number for a DNA structure whose existence in cells was once considered controversial. Our findings confirm that i-motifs are not just laboratory curiosities but widespread -- and likely to play key roles in genomic function."

Curious DNA i-motifs could play a dynamic role in gene activity

I-motifs are DNA structures that differ from the iconic double helix shape. They form when stretches of cytosine letters on the same DNA strand pair with each other, creating a four-stranded, twisted structure protruding from the double helix.
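As a rough computational counterpart to that description, candidate i-motif-forming sequences are often screened with a simple pattern: four runs of three or more cytosines separated by short loops, the C-rich mirror of the standard quadruplex search pattern. The sketch below shows only that generic heuristic; it is not the antibody-based mapping approach used in the Garvan study.

```python
# Simplistic motif-search heuristic (not the study's antibody-based mapping):
# four cytosine runs separated by short loops on the same strand.
import re

I_MOTIF_PATTERN = re.compile(r"(?:C{3,}[ACGT]{1,7}){3}C{3,}", re.IGNORECASE)

def find_putative_i_motifs(sequence):
    """Return (start, end, matched sequence) for candidate C-rich tracts."""
    return [(m.start(), m.end(), m.group()) for m in I_MOTIF_PATTERN.finditer(sequence)]

# Toy promoter-like fragment with a C-rich tract
example = "ATGCCCTACCCCATCCCCTTCCCCGGA"
for start, end, hit in find_putative_i_motifs(example):
    print(f"candidate i-motif at {start}-{end}: {hit}")
```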

The researchers found that i-motifs are not randomly scattered but concentrated in key functional areas of the genome, including regions that control gene activity.

"We discovered that i-motifs are associated with genes that are highly active during specific times in the cell cycle. This suggests they play a dynamic role in regulating gene activity," says Cristian David Peña Martinez, a research officer in the Antibody Therapeutics Lab and first author of the study.

"We also found that i-motifs form in the promoter region of oncogenes, for instance the MYC oncogene, which encodes one of cancer's most notorious 'undruggable' targets. This presents an exciting opportunity to target disease-linked genes through the i-motif structure," he says.

I-motifs hold promise for new type of therapies and diagnostics


"The widespread presence of i-motifs near these 'holy grail' sequences involved in hard-to-treat cancers opens up new possibilities for new diagnostic and therapeutic approaches. It might be possible to design drugs that target i-motifs to influence gene expression, which could expand current treatment options," says Associate Professor Sarah Kummerfeld, Chief Scientific Officer at Garvan and co-author of the study.

Professor Christ adds that mapping i-motifs was only possible thanks to Garvan's world-leading expertise in antibody development and genomics. "This study is an example of how fundamental research and technological innovation can come together to make paradigm-shifting discoveries," he says.

Read more at Science Daily

Gene therapy gets a turbo boost

For decades, scientists have dreamt of a future where genetic diseases, such as the blood clotting disorder hemophilia, could be a thing of the past. Gene therapy, the idea of fixing faulty genes with healthy ones, has held immense promise. But a major hurdle has been finding a safe and efficient way to deliver those genes.

Now, researchers at the University of Hawai'i's John A. Burns School of Medicine (JABSOM) have made a significant breakthrough in gene editing technology that could revolutionize how we treat genetic diseases. Their new method offers a faster, safer, and more efficient way to deliver healthy genes into the body, potentially leading to treatments for hundreds of conditions. This research was recently published in Nucleic Acids Research.

Here's how it works.

Current methods can fix errors in genes, but they can also cause unintended damage by creating breaks in the DNA. Additionally, they struggle to insert large chunks of genetic material such as whole genes.

The new technique, developed by Dr. Jesse Owens along with his team members Dr. Brian Hew, Dr. Ryuei Sato and Sabranth Gupta at JABSOM's Institute for Biogenesis Research and Cell and Molecular Biology Department, addresses these limitations. They used laboratory evolution to generate a new super-active integrase capable of inserting therapeutic genes into the genome at record-breaking efficiencies.

"It's like having a "paste" function for the human genome," said Dr. Owens. "It uses specially engineered 'integrases' to carefully insert healthy genes into the exact location needed, without causing breaks in the DNA. This method is much more efficient, with success rates of up to 96% in some cases."

"This could lead to faster and more affordable treatments for a wide range of diseases, potentially impacting hundreds of conditions with a single faulty gene," said Dr. Owens.

Faster Development of Treatments and a Broader Range of Applications

The implications of this research extend beyond gene therapy. The ability to efficiently insert large pieces of DNA has applications in other areas of medicine.

When cell lines are made to produce therapeutic proteins, the gene encoding the protein is usually inserted into the genome at random, and it rarely lands in a location that is good for production. This is like searching for a needle in a haystack. Additionally, finding a cell with the gene inserted correctly and producing the desired protein can take many months.

Instead of searching for a needle in a haystack, Dr. Owens' technique makes a stack of needles. It delivers the gene directly to the desired location, significantly speeding up the development process.

"JABSOM takes pride in nurturing talented researchers like Jesse Owens, whose work has the power to create a global impact," said Sam Shomaker, dean of the University of Hawai'i John A. Burns School of Medicine. "This research, conducted in our lab in the middle of the Pacific, has the potential to significantly improve the way we treat genetic diseases."

Dr. Owens' team is exploring how this technique could accelerate the development and manufacture of biologics and advanced therapies such as antibodies. Currently, finding the right cell line for efficient production can be a time-consuming process. However, Dr. Owens' new genome engineering tool can reduce the cell line development timeline and accelerate the manufacture of life-saving therapeutics.

Read more at Science Daily

Aug 29, 2024

Dark matter could have helped make supermassive black holes in the early universe

Supermassive black holes typically take billions of years to form. But the James Webb Space Telescope is finding them not that long after the Big Bang -- before they should have had time to form.

It takes a long time for supermassive black holes, like the one at the center of our Milky Way galaxy, to form. Typically, the birth of a black hole requires a giant star with the mass of at least 50 of our suns to burn out -- a process that can take a billion years -- and its core to collapse in on itself.

Even so, at only about 10 solar masses, the resulting black hole is a far cry from the 4-million-solar-mass black hole, Sagittarius A*, found in our Milky Way galaxy, or the billion-solar-mass supermassive black holes found in other galaxies. Such gigantic black holes can form from smaller black holes by accretion of gas and stars, and by mergers with other black holes, processes that take billions of years.

Why, then, is the James Webb Space Telescope discovering supermassive black holes near the beginning of time itself, eons before they should have been able to form? UCLA astrophysicists have an answer as mysterious as the black holes themselves: Dark matter kept hydrogen from cooling long enough for gravity to condense it into clouds big and dense enough to turn into black holes instead of stars. The finding is published in the journal Physical Review Letters.

"How surprising it has been to find a supermassive black hole with a billion solar mass when the universe itself is only half a billion years old," said senior author Alexander Kusenko, a professor of physics and astronomy at UCLA. "It's like finding a modern car among dinosaur bones and wondering who built that car in the prehistoric times."

Some astrophysicists have posited that a large cloud of gas could collapse to make a supermassive black hole directly, bypassing the long history of stellar burning, accretion and mergers. But there's a catch: Gravity will, indeed, pull a large cloud of gas together, but not into one large cloud. Instead, it gathers sections of the gas into little halos that float near each other but don't form a black hole.

The reason is that the gas cloud cools too quickly. As long as the gas is hot, its pressure can counter gravity. However, if the gas cools, pressure decreases, and gravity can prevail in many small regions, which collapse into dense objects before gravity has a chance to pull the entire cloud into a single black hole.

"How quickly the gas cools has a lot to do with the amount of molecular hydrogen," said first author and doctoral student Yifan Lu. "Hydrogen atoms bonded together in a molecule dissipate energy when they encounter a loose hydrogen atom. The hydrogen molecules become cooling agents as they absorb thermal energy and radiate it away. Hydrogen clouds in the early universe had too much molecular hydrogen, and the gas cooled quickly and formed small halos instead of large clouds."

Lu and postdoctoral researcher Zachary Picker wrote code to calculate all possible processes of this scenario and discovered that additional radiation can heat the gas and dissociate the hydrogen molecules, altering how the gas cools.

"If you add radiation in a certain energy range, it destroys molecular hydrogen and creates conditions that prevent fragmentation of large clouds," Lu said.

But where does the radiation come from?

Only a very tiny portion of matter in the universe is the kind that makes up our bodies, our planet, the stars and everything else we can observe. The vast majority of matter, detected by its gravitational effects on stellar objects and by the bending of light rays from distant sources, is made of some new particles, which scientists have not yet identified.

The forms and properties of dark matter are therefore a mystery that remains to be solved. While we don't know what dark matter is, particle theorists have long speculated that it could contain unstable particles which can decay into photons, the particles of light. Including such dark matter in the simulations provided the radiation needed for the gas to remain in a large cloud while it is collapsing into a black hole.

Dark matter could be made of particles that slowly decay, or it could be made of more than one particle species: some stable and some that decay at early times. In either case, the product of decay could be radiation in the form of photons, which break up molecular hydrogen and prevent hydrogen clouds from cooling too quickly. Even very mild decay of dark matter yielded enough radiation to prevent cooling, forming large clouds and, eventually, supermassive black holes.

Read more at Science Daily

Engineers develop all-in-one solution to catch and destroy 'forever chemicals'

Chemical engineers at the University of British Columbia have developed a new treatment that traps and treats PFAS substances -- widely known as "forever chemicals" -- in a single, integrated system.

Per- and polyfluoroalkyl substances (PFAS) are widely used in manufacturing consumer goods like waterproof clothing due to their resistance to heat, water and stains. However, they are also pollutants, often ending up in surface and groundwater worldwide, where they have been linked to cancer, liver damage and other health issues.

"PFAS are notoriously difficult to break down, whether they're in the environment or in the human body," explained lead researcher Dr. Johan Foster, an associate professor of chemical and biological engineering in the faculty of applied science. "Our system will make it possible to remove and destroy these substances in the water supply before they can harm our health."

Catch and destroy

The UBC system combines an activated carbon filter with a special, patented catalyst that traps harmful chemicals and breaks them down into harmless components on the filter material. Scientists refer to this trapping of chemical components as adsorption.

"The whole process is fairly quick, depending on how much water you're treating," said Dr. Foster. "We can put huge volumes of water through this catalyst, and it will adsorb the PFAS and destroy it in a quick two-step process. Many existing solutions can only adsorb while others are designed to destroy the chemicals. Our catalyst system can do both, making it a long-term solution to the PFAS problem instead of just kicking the can down the road."

No light? No problem


Like other water treatments, the UBC system requires ultraviolet light to work, but it does not need as much UV light as other methods.

During testing, the UBC catalyst consistently removed more than 85 per cent of PFOA (perfluorooctanoic acid, a type of forever chemical) even under low light conditions.

"Our catalyst is not limited by ideal conditions. Its effectiveness under varying UV light intensities ensures its applicability in diverse settings, including regions with limited sunlight exposure," said Dr. Raphaell Moreira, a professor at Universität Bremen who conducted the research while working at UBC.

For example, a northern municipality that gets little sun could still benefit from this type of PFAS solution.

"While the initial experiments focused on PFAS compounds, the catalyst's versatility suggests its potential for removing other types of persistent contaminants, offering a promising solution to the pressing issues of water pollution," explained Dr. Moreira.

From municipal water to industry cleanups

The team believes the catalyst could be a low-cost, effective solution for municipal water systems as well as specialized industrial projects like waste stream cleanup.

They have set up a company, ReAct Materials, to explore commercial options for their technology.

"Our catalyst can eliminate up to 90 per cent of forever chemicals in water in as little as three hours -- significantly faster than comparable solutions on the market. And because it can be produced from forest or farm waste, it's more economical and sustainable compared to the more complex and costly methods currently in use," said Dr. Foster.

Read more at Science Daily

Neuroscientists explore the intersection of music and memory

The soundtrack of this story begins with a vaguely recognizable and pleasant groove. But if I stop writing and just listen for a second, the music reveals itself completely. In Freddie Hubbard's comfortable, lilting trumpet solo over Herbie Hancock's melodic, repetitive piano vamping, I recognize "Cantaloupe Island." Then, with my fingers again poised at the keyboard, Freddie and Herbie fade into the background, followed by other instrumental music: captivating -- but not distracting -- sonic nutrition, feeding my concentration and productivity.

Somewhere, I think, Yiren Ren is studying, focused on her research that demonstrates how music impacts learning and memory. Possibly, she's listening to Norah Jones, or another musician she's comfortable with. Because that's how it works: The music we know and might love, music that feels predictable or even safe -- that music can help us study and learn. Meanwhile, Ren has also discovered, other kinds of music can influence our emotions and reshape old memories.

Ren, a sixth-year Ph.D. student in Georgia Tech's School of Psychology, explores these concepts as the lead author of two new research papers in the journals PLOS One and Cognitive, Affective, & Behavioral Neuroscience (CABN).

"These studies are connected because they both explore innovative applications of music in memory modulation, offering insights for both every day and clinical use," says Ren.

But the collective research explores music's impacts in very different ways, explains Ren's faculty advisor and co-author of the study, Thackery Brown.

"One paper looks at how music changes the quality of your memory when you're first forming it -- it's about learning," says Brown, a cognitive neuroscientist who runs the MAP (Memory, Affect, and Planning) Lab at Tech. "But the other study focuses on memories we already have and asks if we can change the emotions attached to them using music."

Making Moods With Music


When we watch a movie with a robust score -- music created to induce emotions -- what we're hearing guides us exactly where the composer wants us to go. In their CABN study, Ren, Brown, and their collaborators from the University of Colorado (including former Georgia Tech Assistant Professor Grace Leslie) report that this kind of "mood music" can also be powerful enough to change how we remember our past.

Their study included 44 Georgia Tech students who listened to film soundtracks while recalling a difficult memory. Ren is quick to point out that this was not a clinical trial, so these participants were not identified as people suffering from mood disorders: "We wanted to start off with a random group of people and see if music has the power to modulate the emotional level of their memories."

Turns out, it does. The participants listened to movie soundtracks and incorporated new emotions into their memories that matched the mood of the music. And the effect was lasting. A day later, when the participants recalled these same memories -- but without musical accompaniment -- their emotional tone still matched the tone of the music played the day before.

The researchers could watch all this happening with fMRI (functional magnetic resonance imaging). They could see the altered brain activity in the study participants, the increased connectivity between the amygdala, where emotions are processed, and other areas of the brain associated with memory and integrating information.

"This sheds light on the malleability of memory in response to music, and the powerful role music can play in altering our existing memories," says Ren.

Ren is herself a multi-instrumentalist who originally planned on being a professional musician. As an undergraduate at Boston University, she pursued a dual major in film production and sound design, and psychology.

She found a way to combine her interests in music and neuroscience and is interested in how music therapy can be designed to help people with mood disorders like post-traumatic stress disorder (PTSD) or depression, "particularly in cases where someone might overexaggerate the negative components of a memory," Ren says.

There is no time machine that will allow us to go back and insert happy music into the mix while a bad event is happening and a memory is being formed, "but we can retrieve old memories while listening to affective music," says Brown. "And perhaps we can help people shift their feelings and reshape the emotional tone attached to certain memories."

Embracing the Familiar


The second study asks a couple of old questions: Should we listen to music while we work or study? And if so, are there more beneficial types of music than others? The answer to both questions might lie, at least partially, within the expansive parameters of personal taste. But even so, there are limits.

Think back to my description of "Cantaloupe Island" at the beginning of this story and how a familiar old jazz standard helped keep this writer's brain and fingers moving. In the same way, Norah Jones helps Ren when she's working on new research around music and memory. But if, for some reason, I wanted to test my concentration, I'd play a different kind of jazz, maybe 1950s bebop with its frenetic pace and off-center tone, or possibly a chorus of screeching cats. Same effect. It would demand my attention, and no work would get done.

For this study, Ren combined her gifts as a musician and composer with her research interests in examining whether music can improve -- or impair -- our ability to learn or remember new information. "We wanted to probe music's potential as a mnemonic device that helps us remember information more easily," she says. (An example of a mnemonic device is "Every Good Boy Does Fine," which stands for E-G-B-D-F and helps new piano players learn the order of notes on a keyboard.)

This study's 48 participants were asked to learn sequences of abstract shapes while listening to different types of music. Ren played a piece of music in a traditional or familiar pattern of tone, rhythm, and melody. She then played the exact same set of notes, but out of order, giving the piece an atonal structure.

When they listened to familiar, predictable music, participants learned and remembered the sequences of shapes more quickly as their brains created a structured framework, or scaffold, for the new information. Meanwhile, music that was familiar but irregular (think of this writer and the bebop example) made it harder for participants to learn.

"Depending its familiarity and structure, music can help or hinder our memory," says Ren, who wants to deepen her focus on the neural mechanisms through which music influences human behavior.

She plans to finish her Ph.D. studies this December and is seeking postdoctoral research positions that will allow her to continue the work she's started at Georgia Tech. Building on that, Ren wants to develop music-based therapies for conditions like depression or PTSD, while also exploring new rehabilitation strategies for aging populations and individuals with dementia.

Read more at Science Daily

Bacterial cells transmit memories to offspring

Bacterial cells can "remember" brief, temporary changes to their bodies and immediate surroundings, a new Northwestern University and University of Texas-Southwestern study has found.

And, although these changes are not encoded in the cell's genetics, the cell still passes memories of them to its offspring -- for multiple generations.

Not only does this discovery challenge long-held assumptions of how the simplest organisms transmit and inherit physical traits, it also could be leveraged for new medical applications. For example, researchers could circumvent antibiotic resistance by subtly tweaking a pathogenic bacterium to render its offspring more sensitive to treatment for generations.

The study will be published Wednesday (Aug. 28) in the journal Science Advances.

"A central assumption in bacterial biology is that heritable physical characteristics are determined primarily by DNA," said Northwestern's Adilson Motter, the study's senior author. "But, from the perspective of complex systems, we know that information also can be stored at the level of the network of regulatory relationships among genes. We wanted to explore whether there are characteristics transmitted from parents to offspring that are not encoded in DNA, but rather in the regulatory network itself. We found that temporary changes to gene regulation imprint lasting changes within the network that are passed on to the offspring. In other words, the echoes of changes affecting their parents persist in the regulatory network while the DNA remains unchanged."

Motter is the Charles E. and Emma H. Morrison Professor of Physics at Northwestern's Weinberg College of Arts and Sciences and director of the Center for Network Dynamics. The study's co-first authors are postdoctoral fellow Thomas Wytock and graduate student Yi Zhao, who are both members of Motter's laboratory. The study also involves a collaboration with Kimberly Reynolds, a systems biologist at the University of Texas Southwestern Medical Center.

Learning from a model organism


Since researchers first identified the molecular underpinnings of the genetic code in the 1950s, they have assumed traits are primarily -- if not exclusively -- transmitted through DNA. However, since the completion of the Human Genome Project in 2001, researchers have been revisiting this assumption.

Wytock cites the World War II Dutch famine as a famous example pointing to the possibility of heritable, non-genetic traits in humans. A recent study showed that the children of men who were exposed to the famine in utero exhibited an increased tendency to become overweight as adults. But isolating the ultimate causes for this type of non-genetic inheritance in humans has proved challenging.

"In the case of complex organisms, the challenge lies in disentangling confounding factors such as survivor bias," Motter said. "But perhaps we can isolate the causes for the simplest single-cell organisms, since we can control their environment and interrogate their genetics. If we observe something in this case, we can attribute the origin of non-genetic inheritance to a limited number of possibilities -- in particular, changes in gene regulation."

The regulatory network is analogous to a communication network that genes use to influence each other. The research team hypothesized that this network alone could hold the key to transmitting traits to offspring. To explore this hypothesis, Motter and his team turned to Escherichia coli (E. coli), a common bacterium and well-studied model organism.

"In the case of E. coli, the entire organism is a single cell," Wytock said. "It has many fewer genes than a human cell, some 4,000 genes as opposed to 20,000. It also lacks the intracellular structures known to underlie the persistence of DNA organization in yeast and the multiplicity of cell types in higher organisms.Because E. coli is a well-studied model organism, we know the organization of the gene regulatory network in some detail."

Reversible stress, irreversible change

The research team used a mathematical model of the regulatory network to simulate the temporary deactivation (and subsequent reactivation) of individual genes in E. coli. They discovered these transient perturbations can generate lasting changes, which are projected to be inherited for multiple generations. The team currently is working to validate their simulations in laboratory experiments using a variation of CRISPR that deactivates genes temporarily rather than permanently.

But if the changes are encoded in the regulatory network rather than the DNA, the research team questioned how a cell can transmit them across generations. They propose that the reversible perturbation sparks an irreversible chain reaction within the regulatory network. As one gene deactivates, it affects the gene next to it in the network. By the time the first gene is reactivated, the cascade is already in full swing because the genes can form self-sustaining circuits that become impervious to outside influences once activated.
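The behavior described here, a temporary knockdown that flips a self-sustaining circuit into a new state which then persists on its own, is the classic signature of bistability. The toy model below illustrates that idea with a single self-activating gene; it is a minimal sketch with made-up parameter values, not the team's actual E. coli regulatory-network model.

```python
# Toy bistable gene circuit (a minimal sketch, not the study's E. coli model):
# one self-activating gene with a high ("on") and a low ("off") stable state.
# A temporary knockdown pushes expression below the tipping point, and the
# low state persists long after the perturbation is lifted.
from scipy.integrate import solve_ivp

def dx_dt(t, x, knockdown_window):
    basal, vmax, K, n, decay = 0.05, 1.0, 0.5, 4, 1.0
    production = basal + vmax * x[0]**n / (K**n + x[0]**n)  # Hill-type self-activation
    if knockdown_window[0] <= t <= knockdown_window[1]:
        production *= 0.05  # transient repression, e.g. a temporary CRISPR-style knockdown
    return [production - decay * x[0]]

start_on = [1.0]  # begin in the high-expression state
# small max_step so the solver resolves the brief knockdown window
unperturbed = solve_ivp(dx_dt, (0, 40), start_on, args=((-1.0, -1.0),), max_step=0.1)
perturbed = solve_ivp(dx_dt, (0, 40), start_on, args=((5.0, 10.0),), max_step=0.1)

print(f"Final expression without perturbation: {unperturbed.y[0, -1]:.2f}")
print(f"Final expression after transient knockdown (t = 5 to 10): {perturbed.y[0, -1]:.2f}")
# The knockdown ends at t = 10, yet expression stays low for the rest of the
# run: the circuit, not the DNA sequence, carries the memory of the event.
```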

"It's a network phenomenon," said Motter, who is an expert in the dynamic behaviors of complex systems. "Genes interact with each other. If you perturb one gene, it affects others."

Although his team is deactivating genes to test the hypothesis, Motter is clear that different types of perturbations could cause a similar effect. "We also could have changed the cell's environment," he said. "It could be the temperature, the availability of nutrients or the pH."

The study also suggests that other organisms have the necessary elements to exhibit non-genetic heritability. "In biology, it's dangerous to assume anything is universal," Motter contends. "But, intuitively, I do expect the effect to be common because E. coli's regulatory network is similar to, or simpler than, those found in other organisms."

Read more at Science Daily

Aug 28, 2024

Six new rogue worlds: Star birth clues

The James Webb Space Telescope has spotted six likely rogue worlds -- objects with planetlike masses but untethered from any star's gravity -- including the lightest ever identified with a dusty disk around it.

The elusive objects offer new evidence that the same cosmic processes that give birth to stars may also play a common role in making objects only slightly bigger than Jupiter.

"We are probing the very limits of the star forming process," said lead author Adam Langeveld, an astrophysicist at Johns Hopkins University. "If you have an object that looks like a young Jupiter, is it possible that it could have become a star under the right conditions? This is important context for understanding both star and planet formation."

The findings come from Webb's deepest survey of the young nebula NGC1333, a star-forming cluster about a thousand light-years away in the Perseus constellation. A new image from the survey released today by the European Space Agency shows NGC1333 glowing with dramatic displays of interstellar dust and clouds. A paper detailing the survey's findings has been accepted for publication in The Astronomical Journal.

Webb's data suggest the discovered worlds are gas giants 5-10 times more massive than Jupiter. That means they are among the lowest-mass objects ever found to have grown from a process that would generally produce stars and brown dwarfs, objects straddling the boundary between stars and planets that never ignite hydrogen fusion and fade over time.

"We used Webb's unprecedented sensitivity at infrared wavelengths to search for the faintest members of a young star cluster, seeking to address a fundamental question in astronomy: How light an object can form like a star?" said Johns Hopkins Provost Ray Jayawardhana, an astrophysicist and senior author of the study. "It turns out the smallest free-floating objects that form like stars overlap in mass with giant exoplanets circling nearby stars."

The telescope's observations revealed no objects below five Jupiter masses, even though the survey was sensitive enough to detect such bodies. That's a strong indication that any stellar objects lighter than this threshold are more likely to form the way planets do, the authors concluded.

"Our observations confirm that nature produces planetary mass objects in at least two different ways -- from the contraction of a cloud of gas and dust, the way stars form, and in disks of gas and dust around young stars, as Jupiter in our own solar system did," Jayawardhana said.

The most intriguing of the starless objects is also the lightest, having an estimated mass of five Jupiters (about 1,600 Earths). The presence of a dusty disk means the object almost certainly formed like a star, as space dust generally spins around a central object in the early stages of star formation, said Langeveld, a postdoctoral researcher in Jayawardhana's group.

Disks are also a prerequisite for the formation of planets, suggesting the observations may also have important implications for potential "mini" planets.

"Those tiny objects with masses comparable to giant planets may themselves be able to form their own planets," said co-author Aleks Scholz, an astrophysicist at the University of St Andrews. "This might be a nursery of a miniature planetary system, on a scale much smaller than our solar system."

Using the NIRISS instrument on Webb, the astronomers measured the infrared light profile (or spectrum) of every object in the observed portion of the star cluster and reanalyzed 19 known brown dwarfs. They also discovered a new brown dwarf with a planetary-mass companion, a rare finding that challenges theories of how binary systems form.

"It's likely that such a pair formed the way binary star systems do, from a cloud fragmenting as it contracted," Jayawardhana said. "The diversity of systems that nature has produced is remarkable and pushes us to refine our models of star and planet formation."

Rogue worlds may originate from collapsing molecular clouds that lack the mass for the nuclear fusion that powers stars. They can also form when gas and dust in disks around stars coalesce into planetlike orbs that are eventually ejected from their star systems, probably because of gravitational interactions with other bodies.

These free-floating objects blur classifications of celestial bodies because their masses overlap with gas giants and brown dwarfs. Even though such objects are considered rare in the Milky Way galaxy, the new Webb data show they account for about 10% of celestial bodies in the targeted star cluster.

In the coming months, the team will study more of the faint objects' atmospheres and compare them to heavier brown dwarfs and gas giant planets. They have also been awarded time on the Webb telescope to study similar objects with dusty disks to explore the possibility of forming mini planetary systems resembling Jupiter's and Saturn's numerous moons.

Read more at Science Daily

What microscopic fossilized shells tell us about ancient climate change

At the end of the Paleocene and beginning of the Eocene epochs, between 59 and 51 million years ago, Earth experienced dramatic warming periods, both gradual periods stretching millions of years and sudden warming events known as hyperthermals.

Driving this planetary heating were massive emissions of carbon dioxide (CO2) and other greenhouse gases, but other factors like tectonic activity may have also been at play.

New research led by University of Utah geoscientists pairs sea surface temperatures with levels of atmospheric CO2 during this period, showing the two were closely linked. The findings also provide case studies to test carbon cycle feedback mechanisms and sensitivities critical for predicting anthropogenic climate change as we continue pouring greenhouse gases into the atmosphere on an unprecedented scale in the planet's history.

"The main reason we are interested in these global carbon release events is because they can provide analogs for future change," said lead author Dustin Harper, a postdoctoral researcher in the Department of Geology & Geophysics. "We really don't have a perfect analog event with the exact same background conditions and rate of carbon release."

But the study published Monday in the Proceedings of the National Academy of Sciences, or PNAS, suggests emissions during two ancient "thermal maxima" are similar enough to today's anthropogenic climate change to help scientists forecast its consequences.

The research team analyzed microscopic fossils -- recovered in drilling cores taken from an undersea plateau in the Pacific -- to characterize surface ocean chemistry at the time the shelled creatures were alive. Using a sophisticated statistical model, they reconstructed sea surface temperatures and atmospheric CO2 levels over a 6-million-year period that covered two hyperthermals, the Paleocene-Eocene Thermal Maximum, or PETM, 56 million years ago and Eocene Thermal Maximum 2, ETM-2, 54 million years ago.

The findings indicate that as atmospheric levels of CO2 rose, so too did global temperatures.

"We have multiple ways that our planet, that our atmosphere is being influenced by CO2 additions, but in each case, regardless of the source of CO2, we're seeing similar impacts on the climate system," said co-author Gabriel Bowen, a U professor of geology & geophysics.

"We're interested in how sensitive the climate system was to these changes in CO2. And what we see in this study is that there's some variation, maybe a little lower sensitivity, a lower warming associated with a given amount of CO2 change when we look at these very long-term shifts. But that overall, we see a common range of climate sensitivities."

Today, human activities associated with fossil fuels are releasing carbon 4 to 10 times more rapidly than occurred during these ancient hyperthermal events. However, the total amount of carbon released during the ancient events is similar to the range projected for human emissions, potentially giving researchers a glimpse of what could be in store for us and future generations.

First, scientists must determine what happened to the climate and oceans during these episodes of planetary heating more than 50 million years ago.

"These events might represent a mid- to worst-case scenario kind of case study," Harper said. "We can investigate them to answer what's the environmental change that happens due to this carbon release?"

Earth was very warm during the PETM. No ice sheets covered the poles, and ocean temperatures were in the mid-90s degrees Fahrenheit (around 35 degrees Celsius).

To determine oceanic CO2 levels, the researchers turned to the fossilized remains of foraminifera, shelled single-celled organisms akin to plankton. The research team based the study on cores previously extracted by the International Ocean Discovery Program at two locations in the Pacific.

The foram shells accumulate small amounts of boron, the isotopes of which are a proxy reflecting CO2 concentrations in the ocean at the time the shells formed, according to Harper.

"We measured the boron chemistry of the shells, and we're able to translate those values using modern observations to past seawater conditions. We can get at seawater CO2 and translate that into atmospheric CO2," Harper said. "The goal of the target study interval was to establish some new CO2 and temperature records for the PETM and ETM-2, which represent two of the best analogs in terms of modern change, and also provide a longer-term background assessment of the climate system to better contextualize those events."

The cores Harper studied were extracted from Shatsky Rise in the subtropical North Pacific, which is an ideal location for recovering ocean-bottom sediments that reflect conditions in the ancient past.

Carbonate shells dissolve if they settle into the deep ocean, so scientists must look to underwater plateaus like Shatsky Rise, where the water depths are relatively shallow. The foraminifera shells recorded sea surface conditions while their inhabitants were alive, millions of years ago.

Read more at Science Daily

Study shows reduced inflammation in residents after adding trees to their neighborhoods

The University of Louisville's groundbreaking Green Heart Louisville Project has found that people living in neighborhoods where the number of trees and shrubs was more than doubled showed lower levels of a blood marker of inflammation than those living outside the planted areas. General inflammation is an important risk indicator for heart disease and other chronic diseases.

The Christina Lee Brown Envirome Institute launched the first-of-its-kind project in 2018 in partnership with The Nature Conservancy, Washington University in St. Louis, Hyphae Design Laboratory and others to study whether and how living among more densely greened surroundings contributes to better heart health. The design of the study closely mirrors clinical trials which test whether medical treatments are effective. The team applied the treatment -- the addition of large trees and shrubs -- to some participants' neighborhoods but not to others. They then compared residents' health data to see how the addition of the trees affected their health.

"The Green Heart Louisville Project is an excellent example of how our university's innovative and collaborative researchers are working to improve lives in our community and far beyond," UofL President Kim Schatzel said. "Trees are beautiful, but these results show that the trees around us are also beneficial to individual and community health. Through this and many other projects, the Envirome Institute is improving health at the community level, not just for individuals, but for everyone living in a neighborhood."

To understand the state of the community's health at the start of the study, researchers took blood, urine, hair and nail samples and documented health data from 745 people living in a four-square-mile area of south Louisville. The researchers also took detailed measurements of tree coverage and levels of air pollution in the area.

Following this baseline data collection, the Envirome Institute worked with The Nature Conservancy and a host of local partners and contractors to plant more than 8,000 large trees and shrubs in designated neighborhoods within the project area. Those living in the greened area were considered the treated population and the results obtained from this population were compared with residents of adjacent neighborhoods, where the project team did not plant any trees.

After the plantings, the research team reassessed residents' health. They found that those living in the greened area had 13-20% lower levels of a biomarker of general inflammation, a measure called high-sensitivity C-reactive protein (hsCRP), than those living in the areas that did not receive any new trees or shrubs. Higher levels of hsCRP are strongly associated with a risk of cardiovascular disease and are an even stronger indicator of heart attack than cholesterol levels. Higher hsCRP levels also indicate a higher risk of diabetes and certain cancers.

A reduction of hsCRP by this percentage corresponds to nearly a 10-15% reduction in the risk of heart attacks, cancer or dying from any disease.

"These results from the Green Heart Louisville Project indicate that trees contribute more to our lives than beauty and shade. They can improve the health of the people living around them," said Aruni Bhatnagar, director of the Envirome Institute and UofL professor of medicine. "Although several previous studies have found an association between living in areas of high surrounding greenness and health, this is the first study to show that a deliberate increase in greenness in the neighborhood can improve health. With these results and additional studies that we hope to report soon, we are closer to understanding the impact of local tree cover on residents' health. This finding will bolster the push to increase urban greenspaces."

As more is known about the health impacts of increased tree cover, increased greening in cities may emerge as a key method to improve public health.

"Most of us intuitively understand that nature is good for our health. But scientific research testing, verifying and evaluating this connection is rare," said Katharine Hayhoe, chief scientist of The Nature Conservancy. "These recent findings from the Green Heart Project build the scientific case for the powerful connections between the health of our planet and the health of all of us."

Earlier in August, the Green Heart Louisville Project was awarded an additional $4.6 million in funding from the National Institute of Environmental Health Sciences to support continued research over the next five years.

Read more at Science Daily

Discovery of how blood clots harm brain and body in COVID-19 points to new therapy

In a study that reshapes what we know about COVID-19 and its most perplexing symptoms, scientists have discovered that the blood coagulation protein fibrin causes the unusual clotting and inflammation that have become hallmarks of the disease, while also suppressing the body's ability to clear the virus.

Importantly, the team also identified a new antibody therapy to combat all of these deleterious effects.

Published in Nature, the study by Gladstone Institutes and collaborators overturns the prevailing theory that blood clotting is merely a consequence of inflammation in COVID-19. Through experiments in the lab and with mice, the researchers show that blood clotting is instead a primary effect, driving other problems -- including toxic inflammation, impaired viral clearance, and neurological symptoms prevalent in those with COVID-19 and long COVID.

The trigger is fibrin, a protein in the blood that normally enables healthy blood coagulation, but has previously been shown to have toxic inflammatory effects. In the new study, scientists found that fibrin becomes even more toxic in COVID-19 as it binds to both the virus and immune cells, creating unusual clots that lead to inflammation, fibrosis, and loss of neurons.

"Knowing that fibrin is the instigator of inflammation and neurological symptoms, we can build a new path forward for treating the disease at the root," says Katerina Akassoglou, PhD, a senior investigator at Gladstone and the director of the Center for Neurovascular Brain Immunology at Gladstone and UC San Francisco. "In our experiments in mice, neutralizing blood toxicity with fibrin antibody therapy can protect the brain and body after COVID infection."

From the earliest months of the pandemic, irregular blood clotting and stroke emerged as puzzling effects of COVID-19, even among patients who were otherwise asymptomatic. Later, as long COVID became a major public health issue, the stakes grew even higher to understand the cause of this disease's other symptoms, including its neurological effects. More than 400 million people worldwide have had long COVID since the start of the pandemic, with an estimated economic cost of about $1 trillion each year.

Flipping the Conversation

Many scientists and medical professionals have hypothesized that inflammation from the immune system's rapid reaction to the COVID-causing virus is what leads to blood clotting and stroke. But even at the dawn of the pandemic in 2020, that explanation didn't sound right to Akassoglou and her scientific collaborators.

"We know of many other viruses that unleash a similar cytokine storm in response to infection, but without causing blood clotting activity like we see with COVID," says Warner Greene, MD, PhD, senior investigator and director emeritus at Gladstone, who co-led the study with Akassoglou.

"We began to wonder if blood clots played a principal role in COVID -- if this virus evolved in a way to hijack clotting for its own benefit," Akassoglou adds.

Indeed, through multiple experiments in mice, the researchers found that the virus spike protein directly binds to fibrin, causing structurally abnormal blood clots with enhanced inflammatory activity. The team leveraged genetic tools to create a specific mutation that blocks only the inflammatory properties of fibrin without affecting the protein's beneficial blood-clotting abilities.

When mice were genetically altered to carry the mutant fibrin or had no fibrin in their bloodstream, the scientists found that inflammation, oxidative stress, fibrosis, and clotting in the lungs didn't occur or were much reduced after COVID-19 infection.

In addition to discovering that fibrin sets off inflammation, the team made another important discovery: fibrin also suppresses the body's "natural killer," or NK, cells, which normally work to clear the virus from the body. Remarkably, when the scientists depleted fibrin in the mice, NK cells were able to clear the virus.

These findings support the conclusion that fibrin is necessary for the virus to harm the body.

Mechanism Not Triggered by Vaccines

The fibrin mechanism described in the paper is not related to the extremely rare thrombotic complication with low platelets that has been linked to adenoviral DNA COVID-19 vaccines, which are no longer available in the U.S.

By contrast, in a study of 99 million COVID-vaccinated individuals led by The Global COVID Vaccine Safety Project, vaccines that leverage mRNA technology to produce spike proteins in the body exhibited no excessive clotting or blood-based disorders that met the threshold for safety concerns. Instead, mRNA vaccines protect from clotting complications otherwise induced by infection.

Protecting the Brain

Akassoglou's lab has long investigated how fibrin that leaks into the brain triggers neurologic diseases, such as Alzheimer's disease and multiple sclerosis, essentially by hijacking the brain's immune system and setting off a cascade of harmful, often irreversible, effects.

The team now showed that in COVID-infected mice, fibrin is responsible for the harmful activation of microglia, the brain's immune cells involved in neurodegeneration. After infection, the scientists found fibrin together with toxic microglia, and when they inhibited fibrin, the activation of these toxic cells in the brains of mice was significantly reduced.

"Fibrin that leaks into the brain may be the culprit for COVID-19 and long COVID patients with neurologic symptoms, including brain fog and difficulty concentrating," Akassoglou says. "Inhibiting fibrin protects neurons from harmful inflammation after COVID-19 infection."

The team tested its approach on different strains of the virus that causes COVID-19, including those that can infect the brain and those that do not. Neutralizing fibrin was beneficial in both types of infection, pointing to the harmful role of fibrin in brain and body in COVID-19 and highlighting the broad implications of this study.

A New Potential Therapy

This study demonstrates that fibrin is damaging in at least two ways: by activating a chronic form of inflammation and by suppressing a beneficial NK cell response capable of clearing virally infected cells.

"We realized if we could neutralize both of these negative effects, we could potentially resolve the severe symptoms we're seeing in patients with COVID-19 and possibly long COVID," Greene says.

Akassoglou's lab previously developed a drug, a therapeutic monoclonal antibody, that acts only on fibrin's inflammatory properties without adverse effects on blood coagulation and protects mice from multiple sclerosis and Alzheimer's disease.

In the new study, the team showed that the antibody blocked the interaction of fibrin with immune cells and the virus. By administering the immunotherapy to infected mice, the team was able to prevent and treat severe inflammation, reduce fibrosis and viral proteins in the lungs, and improve survival rates. In the brain, the fibrin antibody therapy reduced harmful inflammation and increased survival of neurons in mice after infection.

A humanized version of Akassoglou's first-in-class fibrin-targeting immunotherapy is already in Phase 1 safety and tolerability clinical trials in healthy people by Therini Bio. The drug cannot be used on patients until it completes this Phase 1 safety evaluation, and then would need to be tested in more advanced trials for COVID-19 and long COVID.

Looking ahead to such trials, Akassoglou says patients could be selected based on levels of fibrin products in their blood -- a measure believed to be a predictive biomarker of cognitive impairment in long COVID.

"The fibrin immunotherapy can be tested as part of a multipronged approach, along with prevention and vaccination, to reduce adverse health outcomes from long COVID," Greene adds.

The Power of Team Science

The study's findings intersect the scientific areas of immunology, hematology, virology, neuroscience, and drug discovery -- and required many labs across institutions to work together to execute the experiments needed to solve the blood-clotting mystery. Akassoglou founded the Center for Neurovascular Brain Immunology at Gladstone and UCSF in 2021 specifically for the purpose of conducting multidisciplinary, collaborative studies that address complex problems.

"I don't think any single lab could have accomplished this on their own," says Melanie Ott, MD, PhD, director of the Gladstone Institute of Virology and co-author of the study, noting important contributions from teams at Stanford, UC San Francisco, UC San Diego, and UCLA. "This tour-de-force study highlights the importance of collaboration in tackling these big questions."

Not only did this study address a big question, but it did so in a way that paves a clear clinical path for helping patients who have few options today, says Lennart Mucke, MD, director of the Gladstone Institute of Neurological Disease.

Read more at Science Daily

Aug 27, 2024

Hidden magmatism discovered at the Chang'e-6 lunar landing site

Lunar igneous activity, including intrusive and extrusive magmatism, and its products contain significant information about the lunar interior and its thermal state. Their distribution is asymmetrical between the nearside and farside, reflecting the global lunar dichotomy. In addition to previously returned lunar samples, all from the nearside (Apollo, Luna, and Chang'e-5), samples from the South Pole-Aitken (SPA) basin on the farside have long been thought to hold the key to rebalancing our asymmetric understanding of the Moon and resolving the lunar dichotomy conundrum.

Earlier this year, the Chang'e-6 mission of the Chinese Lunar Exploration Program successfully launched on May 3, landed on the lunar surface on June 2, and returned to Earth on June 25 carrying a total of 1,935.3 g of lunar soil. It is the world's first lunar farside sample-return mission, and it landed in the southern part of the Apollo basin within the SPA basin on the farside. These precious samples could open a window onto the long-standing question of the lunar dichotomy and may even reshape humanity's knowledge of our closest neighbour. However, compared with the well-known mare volcanism surrounding the Chang'e-6 landing site, the intrusive magmatic activity there has a much more obscure presence and origin, complicating future analyses of the returned samples.

In a recent research paper published in The Astrophysical Journal Letters, Dr Yuqi QIAN, Professor Joseph MICHALSKI and Professor Guochun ZHAO from the Department of Earth Sciences at The University of Hong Kong (HKU) and their domestic and international collaborators have comprehensively studied the intrusive magmatism of the Chang'e-6 landing site and its surroundings based on remote sensing data. The study revealed the intrusions' extensive distribution and obscure nature, with significant implications for the petrogenesis of lunar plutonic rocks and for the Chang'e-6 mission, and it will facilitate further study of the lunar farside.

The study found that intrusive magmatism is widespread in the SPA basin. Intrusions occur in various forms, including sills beneath floor-modified craters, linear and ring dikes revealed by gravity data, and Mg-suite intrusions with characteristic spectral absorptions. These observations are consistent with the intermediately thick crust of the SPA basin, where intrusion is favored. Having landed in the SPA basin, Chang'e-6 likely collected plutonic rocks, excavated and transported to the sampling site by adjacent impact craters, that could be examined in the ongoing sample studies. The team also discovered two heavily degraded floor-fractured craters, motivating searches for more such features on the Moon. All of this indicates that intrusive magmatism is abundant in the Chang'e-6 sampling region.

This study has traced potential plutonic materials in the Chang'e-6 samples and found that Mg-suite materials highly likely exist, primarily from the western peak ring of the Apollo basin delivered by Chaffee S crater. These Mg-rich materials contain crucial information on the origin of mysterious KREEP-poor Mg-suite rocks. Samples from both the intrusive and extrusive magmatism from the never sampled farside, especially the mysterious Mg-suite, will shed further light on solving the lunar dichotomy conundrum and a series of fundamental scientific questions relating to secondary crust building and early evolution of the Moon.

Professor Xianhua LI, an academician of the Chinese Academy of Science (CAS), and a leader of China's lunar sample studies from the Institute of Geology and Geophysics (CAS), said: 'The results of this research set a significant geological framework to study plutonic rocks in the Chang'e-6 samples, especially Mg-suite rocks.' Professor Li emphasised: 'Their petrogenesis and timing are unclear, and this research would dramatically help to understand their origin mechanism.'

'This research is an excellent example of HKU's deep involvement in China's Lunar Exploration Program,' said Professor Guochun ZHAO, an academician of the Chinese Academy of Science and Chair Professor of Earth Sciences (HKU). 'Lunar and space exploration programs are an important component of China's goal to become a scientific and technological power, and HKU's proactive involvement in these programs will bring additional resources for Hong Kong to become an international centre for science and innovation,' he continued.

Read more at Science Daily

Coastal cities must adapt faster to climate change

Coastal cities play a key role in the global economy and have important functions for society at large. At the same time, they are severely affected by the impact of climate change. That is why their role in global climate adaptation is crucial. To find out how coastal cities are adapting, an international team led by Professor Matthias Garschagen, a geographer at Ludwig-Maximilians-Universität München (LMU), has now analyzed the current state of adaptation.

Based on studies of 199 cities across 54 countries, the researchers investigated whether and how cities take certain risk factors into account in their adaptation efforts. Climate factors like rising sea levels, storms, flooding and heat were among the key parameters considered. Other aspects were also taken into account in the analysis, such as the exposure and vulnerability of the population, the infrastructure and the ecosystems in the respective region.

Climate measures are mostly inadequate

Most of the measures taken to adapt to climate change relate primarily to sea level rise, flooding and, to a lesser extent, storm surges, cyclones and erosion. Technical and institutional measures such as large-scale levees or urban planning innovations are more common in wealthier regions like North America and Europe. In less prosperous regions such as in many parts of Africa and Asia, behavior-related measures are the dominant type, with affected households and companies being largely left to their own devices.

Overall, the LMU researchers found that most adaptation measures are inadequate in their depth, scope and speed -- regardless of the region or its prosperity. The researchers also found little evidence of a sustainable reduction in risk as a result of the measures taken.

"Our findings reveal that there is plenty of work still to be done on all levels," explains Prof. Matthias Garschagen. "There has been little truly far-reaching change involving a fundamental rethink of risk management. Cities often attempt to optimize their disaster management on the basis of past experience without fundamentally questioning whether these approaches are still going to be viable in the future," says Garschagen.

Global research on climate change needs to be done in all regions of the world

The research also found that it is rare for adaptation planning to be based on quantifiable factors. Although cities do take future natural risks such as flooding and heat into account, they rarely consider socioeconomic factors such as future trends in societal vulnerability or spatial growth and exposure. "But those trends are important," says Garschagen, "because the Lagos or Jakarta of today is not the same as it's going to be in 20 years' time. There are certainly big research gaps, and we need better scenarios and better modeling methods. Another important question is about when it makes more sense to abandon coastal protection measures and consider resettling the population instead."

Read more at Science Daily

Public trust in drinking water safety is low globally

A new study finds more than half of adults surveyed worldwide expect to be seriously harmed by their water within the next two years. Led by global health experts at Northwestern University and the University of North Carolina at Chapel Hill, the study sought to understand public perceptions of drinking water safety.

Because perceptions shape attitudes and behaviors, distrust in water quality has a negative impact on people's health, nutrition, psychological and economic well-being -- even when the water meets safety standards.

"If we think our water is unsafe, we will avoid using it," said Sera Young, professor of anthropology and global health at Northwestern and senior author of the new study.

"When we mistrust our tap water, we buy packaged water, which is wildly expensive and hard on the environment; drink soda or other sugar-sweetened beverages, which is hard on the teeth and the waistline; and consume highly processed prepared foods or go to restaurants to avoid cooking at home, which is less healthy and more expensive," Young said. "Individuals exposed to unsafe water also experience greater psychological stress and are at greater risk of depression."

Young is a Morton O. Schapiro Faculty Fellow at the Institute for Policy Research, a faculty fellow at the Paula M. Trienens Institute for Sustainability and Energy, and co-lead of the Making Water Insecurity Visible Working Group at the Buffett Institute for Global Affairs.

Using nationally representative data from 148,585 adults in 141 countries from the 2019 Lloyd's Register Foundation World Risk Poll, the authors found a high prevalence of anticipated harm from water supply, with the highest in Zambia, the lowest in Singapore and an overall mean of 52.3%.

They also identified key characteristics of those who thought they would be harmed by their drinking water. Women, city dwellers, individuals with more education, and those struggling on their current income were more likely to anticipate being harmed by their drinking water.

The researchers found that, surprisingly, higher corruption perception index scores were the strongest predictor of anticipated harm from drinking water, more so than factors like infrastructure and Gross Domestic Product.
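
For readers curious how such predictors are typically identified from survey data, the sketch below fits a simple logistic regression of anticipated harm on respondent characteristics. The column names and the synthetic data are hypothetical stand-ins, not the World Risk Poll dataset or the authors' exact model; a real analysis would also apply survey weights and country-clustered standard errors.

```python
# Hedged sketch: ranking predictors of anticipated harm from drinking water
# with a logistic regression on synthetic, stand-in data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
    "educ_years": rng.normal(10, 3, n),
    "income_struggle": rng.integers(0, 2, n),
    "corruption_index": rng.normal(50, 20, n),  # country-level score, illustrative
})
# Simulate the outcome so that the corruption measure dominates,
# mirroring the pattern described in the article.
logit_p = (
    -2.0 + 0.2 * df.female + 0.3 * df.urban + 0.02 * df.educ_years
    + 0.4 * df.income_struggle + 0.04 * df.corruption_index
)
df["anticipates_harm"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "anticipates_harm ~ female + urban + educ_years + income_struggle + corruption_index",
    data=df,
).fit(disp=False)
print(model.summary().tables[1])  # coefficients; larger |z| ~ stronger predictor
```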

Further, even within countries with consistent access to basic drinking water services, doubts about the safety of water were widespread. This includes the U.S. where 39% of those polled anticipated serious harm from drinking water in the short term.

"Our research highlights that it is imperative both to deliver safe drinking water and to make sure that people have confidence in their water source," said Joshua Miller, a doctoral student at the UNC Gillings School of Global Public Health and the study's first author.

The researchers note that it is difficult for consumers to judge the hazards and safety of their water supply because many contaminants are invisible, odorless and tasteless. Without adequate information, many are left to evaluate the safety of their water based on prior experiences, media reports, and personal values and beliefs.

"It's also possible that people correctly judge the safety of their water," Young said. "The good people of Flint didn't trust their water and they were spot on."

The co-authors suggest actions officials can take to improve public trust around drinking water, including efforts to make testing more readily available, translate test results, replace lead pipes and provide at-home water filters when contaminants are detected, as well as provide improved access to safe drinking water.

"This is the kind of work that can catalyze greater attention and political will to prioritize these services in national development plans and strategies, and get us closer to achieving universal access to safe drinking water," said Aaron Salzberg, director of the Water Institute at the UNC Gillings School of Global Public Health.

Read more at Science Daily

Matching dinosaur footprints found on opposite sides of the Atlantic Ocean

An international team of researchers led by SMU paleontologist Louis L. Jacobs has found matching sets of Early Cretaceous dinosaur footprints on what are now two different continents.

More than 260 footprints were discovered in Brazil and in Cameroon, showing where land-dwelling dinosaurs were last able to freely cross between South America and Africa millions of years ago before the two continents split apart.

"We determined that in terms of age, these footprints were similar," Jacobs said. "In their geological and plate tectonic contexts, they were also similar. In terms of their shapes, they are almost identical."

The footprints, impressed into mud and silt along ancient rivers and lakes, were found more than 3,700 miles, or 6,000 kilometers, away from each other. Dinosaurs made the tracks 120 million years ago on a single supercontinent known as Gondwana -- which broke off from the larger landmass of Pangea, Jacobs said.

"One of the youngest and narrowest geological connections between Africa and South America was the elbow of northeastern Brazil nestled against what is now the coast of Cameroon along the Gulf of Guinea," Jacobs explained. "The two continents were continuous along that narrow stretch, so that animals on either side of that connection could potentially move across it."

Most of the footprints were made by three-toed theropod dinosaurs. A few were also likely made by sauropods or ornithischians, said Diana P. Vineyard, a research associate at SMU and co-author of the study.

Other co-authors of the study were Lawrence J. Flynn in the Department of Human Evolutionary Biology at Harvard University, Christopher R. Scotese in the Department of Earth and Planetary Sciences at Northwestern University and Ismar de Souza Carvalho at the Universidade Federal do Rio de Janeiro and Centro de Geociências.

The study was published by the New Mexico Museum of Natural History & Science in a tribute to the late paleontologist Martin Lockley, who spent much of his career studying dinosaur tracks and footprints.

Dinosaur footprints tell the whole story

Africa and South America started to split around 140 million years ago, causing gashes in Earth's crust called rifts to open up along pre-existing weaknesses. As the tectonic plates beneath South America and Africa moved apart, magma from the Earth's mantle rose to the surface, creating new oceanic crust as the continents moved away from each other. And eventually, the South Atlantic Ocean filled the void between these two newly-shaped continents.

Signs of some of those major events were evident between both locations where the dinosaur footprints were found -- at the Borborema region in the northeast part of Brazil and the Koum Basin in northern Cameroon. Half-graben basins -- geologic structures formed during rifting as the Earth's crust pulls apart and faults form -- are found in both areas and contain ancient river and lake sediments. Along with dinosaur tracks, these sediments contain fossil pollen that indicate an age of 120 million years.

Read more at Science Daily

Aug 25, 2024

NASA's DART impact permanently changed the shape and orbit of asteroid moon

When NASA's Double Asteroid Redirection Test (DART) spacecraft collided with an asteroid moon called Dimorphos in 2022, the moon was significantly deformed -- creating a large crater and reshaping it so dramatically that the moon derailed from its original evolutionary progression -- according to a new study. The study's researchers believe that Dimorphos may start to "tumble" chaotically in its attempts to move back into gravitational equilibrium with its parent asteroid named Didymos.

"For the most part, our original pre-impact predictions about how DART would change the way Didymos and its moon move in space were correct," said Derek Richardson, a professor of astronomy at the University of Maryland and a DART investigation working group lead. "But there are some unexpected findings that help provide a better picture of how asteroids and other small bodies form and evolve over time."

The paper published in Planetary Science Journal on August 23, 2024 by a team led by Richardson detailed notable post-impact observations and described possible implications for future asteroid research.

One of the biggest surprises was how much the impact with DART changed the shape of Dimorphos. According to Richardson, the asteroid moon was originally oblate (shaped like a hamburger) but became more prolate (stretched out like a football) after the DART spacecraft collided with it.

"We were expecting Dimorphos to be prolate pre-impact simply because that's generally how we believed the central body of a moon would gradually accumulate material that's been shed off a primary body like Didymos. It would naturally tend to form an elongated body that would always point its long axis toward the main body," Richardson explained. "But this result contradicts that idea and indicates that something more complex is at work here. Furthermore, the impact-induced change in Dimorphos' shape likely changed how it interacts with Didymos."

Richardson noted that although DART only hit the moon, the moon and the main body are connected through gravity. The debris scattered by the spacecraft on impact also played a role in the disturbed equilibrium between the moon and its asteroid, shortening Dimorphos' orbit around Didymos. Interestingly, Didymos' shape remained the same -- a finding that indicates that the larger asteroid's body is firm and rigid enough to maintain its form even after losing mass to create its moon.
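
To give a feel for what a shortened orbit implies, the sketch below applies Kepler's third law (T squared proportional to a cubed for a fixed system mass) to estimate how much the moon's separation from Didymos shrinks for a given reduction in orbital period. The numbers are illustrative approximations of the Didymos system, not values reported in this study.

```python
# Back-of-the-envelope Kepler's-third-law sketch: a shorter orbital period
# implies a smaller orbit around the primary. Inputs are illustrative.
def new_semimajor_axis(a_old_m: float, t_old_s: float, t_new_s: float) -> float:
    """Semi-major axis after a period change, holding the system mass fixed."""
    return a_old_m * (t_new_s / t_old_s) ** (2.0 / 3.0)

a_old = 1190.0                 # metres, assumed pre-impact separation
t_old = 11.92 * 3600.0         # seconds, assumed ~12-hour pre-impact orbit
t_new = t_old - 33.0 * 60.0    # assume the period shrank by ~33 minutes

a_new = new_semimajor_axis(a_old, t_old, t_new)
print(f"Orbit shrinks from ~{a_old:.0f} m to ~{a_new:.0f} m "
      f"({a_old - a_new:.0f} m closer to Didymos)")
```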

According to Richardson, Dimorphos' changes have important implications for future exploration efforts, including the European Space Agency's follow-up mission to the Didymos system slated for October 2024.

"Originally, Dimorphos was probably in a very relaxed state and had one side pointing toward the main body, Didymos, just like how Earth's moon always has one face pointing toward our planet," Richardson explained. "Now, it's knocked out of alignment, which means it may wobble back and forth in its orientation. Dimorphos might also be 'tumbling,' meaning that we may have caused it to rotate chaotically and unpredictably."

The team is now waiting to find out when the ejected debris will clear from the system, whether Dimorphos is still tumbling in space and when it will eventually regain its previous stability.

"One of our biggest questions now is if Dimorphos is stable enough for spacecraft to land and install more research equipment on it," he said. "It could take a hundred years to see noticeable changes in the system, but it's only been a few years since the impact. Learning about how long it takes Dimorphos to regain its stability tells us important things about its internal structure, which in turn informs future attempts to deflect hazardous asteroids."

Richardson and his team hope that Hera will provide more information about DART's impact. By late 2026, Hera will arrive at the binary asteroid system containing Dimorphos and Didymos to assess the internal properties of both asteroids for the first time, providing a more detailed analysis of the DART mission and its implications for the future.

Read more at Science Daily

Fisheries research overestimates fish stocks

The state of fish stocks in the world's ocean is worse than previously thought. While overfishing has long been blamed on fisheries policies that set catch limits higher than scientific recommendations, a new study by four Australian research institutions reveals that even these scientific recommendations were often too optimistic. The result? Far more global fish stocks are overfished or have collapsed than we thought. Dr Rainer Froese from the GEOMAR Helmholtz Centre for Ocean Research Kiel and Dr Daniel Pauly from the University of British Columbia have provided their insights on the study. In their Perspective Paper, published today in the journal Science alongside the new study, the two fisheries experts call for simpler yet more accurate models and, when in doubt, a more conservative approach to stock assessments.

Many fish stocks around the world are either threatened by overfishing or have already collapsed. One of the main reasons for this devastating trend is that policymakers have often ignored the catch limits calculated by scientists, which were intended to be strict thresholds to protect stocks. But it has now become clear that even these scientific recommendations were often too high.

In the European Union (EU), for example, fisheries are primarily managed through allowable catch limits, known as quotas, which are set by the European Council of Agriculture Ministers on the basis of scientific advice and recommendations from the European Commission. A new study by Australian scientists (Edgar et al.) shows that even the scientific advice has been recommending catch limits that were too high.

The journal Science, where the study is published today, asked two of the world's most cited fisheries experts, Dr Rainer Froese from the GEOMAR Helmholtz Centre for Ocean Research Kiel and Dr Daniel Pauly from the University of British Columbia, to interpret the findings. In their Perspective Paper, they advocate for simpler, yet more realistic models based on ecological principles, and call for more conservative stock assessments and management when uncertainties arise.

For the study, Edgar et al. analysed data from 230 fish stocks worldwide and found that stock assessments have often been overly optimistic. They overestimated the abundance of fish and how quickly stocks could recover. Particularly affected are stocks that have already shrunk due to overfishing. The overestimates led to so-called phantom recoveries, where stocks were classified as recovered while, in reality, they continued to decline. "This resulted in insufficient reductions in catch limits when they were most urgently needed," explains Dr Rainer Froese. "Unfortunately, this is not just a problem of the past. Known overestimates of stock sizes in recent years are still not used to correct this error in current stock assessments."

The research by Edgar et al. also shows that almost a third of stocks classified by the Food and Agriculture Organization (FAO) as "maximally sustainably fished" have instead crossed the threshold into the "overfished" category. Moreover, the number of collapsed stocks (those with less than ten per cent of their original biomass) within the overfished category is likely to be 85 per cent higher than previously estimated.

But what causes these distortions in stock assessments? Standard stock assessments use models that can include more than 40 different parameters, such as fish life history, catch details, and fishing effort. This large number of parameters makes the assessments unnecessarily complex, write Froese and Pauly. The results can only be reproduced by a few experts with access to the original models, data and settings. Moreover, many of the required input parameters are unknown or difficult to estimate, leading modelers to use less reliable values that have worked in the past. Froese notes: "Such practices can skew the results towards the modelers' expectations."
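
The kind of simpler, ecologically grounded model that Froese and Pauly advocate can be as lean as a surplus-production model. The sketch below implements a basic Schaefer formulation as an illustration; it is not their specific method, and the parameter values are invented for the example.

```python
# Minimal sketch of a Schaefer surplus-production model: only two biological
# parameters (intrinsic growth rate r and carrying capacity K) are needed to
# project biomass and derive a sustainable-yield benchmark.

def project_biomass(b0, catches, r=0.4, k=1000.0):
    """Project stock biomass (tonnes) forward given a list of annual catches."""
    biomass = [b0]
    for c in catches:
        b = biomass[-1]
        surplus = r * b * (1.0 - b / k)      # logistic surplus production
        biomass.append(max(b + surplus - c, 0.0))
    return biomass

r, k = 0.4, 1000.0
msy = r * k / 4.0          # maximum sustainable yield for this model
b_msy = k / 2.0            # biomass that produces MSY

# Example: constant catches above MSY steadily erode the simulated stock.
trajectory = project_biomass(b0=800.0, catches=[120.0] * 15, r=r, k=k)
print(f"MSY ~ {msy:.0f} t/yr, B_MSY ~ {b_msy:.0f} t")
print("Biomass after 15 years of 120 t/yr catches:", round(trajectory[-1], 1), "t")
```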

Read more at Science Daily

Mosquitoes sense infrared from body heat to help track humans down

While a mosquito bite is often no more than a temporary bother, in many parts of the world it can be scary. One mosquito species, Aedes aegypti, spreads the viruses that cause over 100,000,000 cases of dengue, yellow fever, Zika and other diseases every year. Another, Anopheles gambiae, spreads the parasite that causes malaria. The World Health Organization estimates that malaria alone causes more than 400,000 deaths every year. Indeed, their capacity to transmit disease has earned mosquitoes the title of deadliest animal.

Male mosquitoes are harmless, but females need blood for egg development. It's no surprise that there's over 100 years of rigorous research on how they find their hosts. Over that time, scientists have discovered there is no one single cue that these insects rely on. Instead, they integrate information from many different senses across various distances.

A team led by researchers at UC Santa Barbara has added another sense to the mosquito's documented repertoire: infrared detection. Infrared radiation from a source roughly the temperature of human skin doubled the insects' overall host-seeking behavior when combined with CO2 and human odor. The mosquitoes overwhelmingly navigated toward this infrared source while host seeking. The researchers also discovered where this infrared detector is located and how it works on a morphological and biochemical level. The results are detailed in the journal Nature.

"The mosquito we study, Aedes aegypti, is exceptionally skilled at finding human hosts," said co-lead author Nicolas DeBeaubien, a former graduate student and postdoctoral researcher at UCSB in Professor Craig Montell's laboratory. "This work sheds new light on how they achieve this."

Guided by thermal infrared

It is well established that mosquitoes like Aedes aegypti use multiple cues to home in on hosts from a distance. "These include CO2 from our exhaled breath, odors, vision, [convection] heat from our skin, and humidity from our bodies," explained co-lead author Avinash Chandel, a current postdoc at UCSB in Montell's group. "However, each of these cues have limitations." The insects have poor vision, and a strong wind or rapid movement of the human host can throw off their tracking of the chemical senses. So the authors wondered if mosquitoes could detect a more reliable directional cue, like infrared radiation.

Within about 10 cm, these insects can detect the heat rising from our skin. And they can directly sense the temperature of our skin once they land. These two senses correspond to two of the three kinds of heat transfer: convection, heat carried away by a medium like air, and conduction, heat via direct touch. But energy from heat can also travel longer distances when converted into electromagnetic waves, generally in the infrared (IR) range of the spectrum. The IR can then heat whatever it hits. Animals like pit vipers can sense thermal IR from warm prey, and the team wondered whether mosquitoes, like Aedes aegypti, could as well.

The researchers put female mosquitoes in a cage and measured their host-seeking activity in two zones. Each zone was exposed to human odors and CO2 at the same concentration that we exhale. However, only one zone was also exposed to IR from a source at skin temperature. A barrier separating the source from the chamber prevented heat exchange through conduction and convection. They then counted how many mosquitoes began probing as if they were searching for a vein.

Adding thermal IR from a 34° Celsius source (about skin temperature) doubled the insects' host-seeking activity. This makes infrared radiation a newly documented sense that mosquitoes use to locate us. And the team discovered it remains effective up to about 70 cm (2.5 feet).

"What struck me most about this work was just how strong of a cue IR ended up being," DeBeaubien said. "Once we got all the parameters just right, the results were undeniably clear."

Previous studies didn't observe any effect of thermal infrared on mosquito behavior, but senior author Craig Montell suspects this comes down to methodology. An assiduous scientist might try to isolate the effect of thermal IR on insects by only presenting an infrared signal without any other cues. "But any single cue alone doesn't stimulate host-seeking activity. It's only in the context of other cues, such as elevated CO2 and human odor that IR makes a difference," said Montell, the Duggan and Distinguished Professor of Molecular, Cellular, and Developmental Biology. In fact, his team found the same thing in tests with only IR: infrared alone has no impact.

A trick for sensing infrared

It isn't possible for mosquitoes to detect thermal infrared radiation the same way they would detect visible light. The energy of IR is far too low to activate the rhodopsin proteins that detect visible light in animal eyes. Electromagnetic radiation with a wavelength longer than about 700 nanometers won't activate rhodopsin, and IR generated from body heat is around 9,300 nm. In fact, no known protein is activated by radiation with such long wavelengths, Montell said. But there is another way to detect IR.

Consider heat emitted by the sun. The heat is converted into IR, which streams through empty space. When the IR reaches Earth, it hits atoms in the atmosphere, transferring energy and warming the planet. "You have heat converted into electromagnetic waves, which is being converted back into heat," Montell said. He noted that the IR coming from the sun has a different wavelength from the IR generated by our body heat, since the wavelength depends on the temperature of the source.
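
That dependence of wavelength on source temperature is Wien's displacement law, and a quick calculation reproduces the roughly 9,300 nm figure quoted above. The constants below are standard physics values, independent of the study.

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3  # m*K

def peak_wavelength_nm(temp_celsius: float) -> float:
    """Peak emission wavelength (nm) of a blackbody at the given temperature."""
    t_kelvin = temp_celsius + 273.15
    return WIEN_B / t_kelvin * 1e9

print(f"Skin at 34 C      -> peak IR near {peak_wavelength_nm(34):,.0f} nm")   # ~9,400 nm
print(f"Sun at ~5,500 C   -> peak near {peak_wavelength_nm(5500):,.0f} nm")    # visible light
```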

The authors thought that perhaps our body heat, which generates IR, might then hit certain neurons in the mosquito, activating them by heating them up. That would enable the mosquitoes to detect the radiation indirectly.

Scientists have known that the tips of a mosquito's antennae have heat-sensing neurons. And the team discovered that removing these tips eliminated the mosquitoes' ability to detect IR.

Indeed, another lab found the temperature-sensitive protein, TRPA1, in the end of the antenna. And the UCSB team observed that animals without a functional trpA1 gene, which codes for the protein, couldn't detect IR.

The tip of each antenna has peg-in-pit structures that are well adapted to sensing radiation. The pit shields the peg from conductive and convective heat, enabling the highly directional IR radiation to enter and warm up the structure. The mosquito then uses TRPA1 -- essentially a temperature sensor -- to detect infrared radiation.

Diving into the biochemistry


The activity of the heat-activated TRPA1 channel alone might not fully explain the range over which mosquitoes were able to detect IR. A sensor relying exclusively on this protein would probably not be useful at the 70 cm range the team had observed. At this distance, there likely isn't enough IR collected by the peg-in-pit structure to heat it sufficiently to activate TRPA1.

Fortunately, Montell's group thought there might be more sensitive temperature receptors based on their previous work on fruit flies in 2011. They had found a few proteins in the rhodopsin family that were quite sensitive to small increases in temperature. Although rhodopsins were originally thought of exclusively as light detectors, Montell's group found that certain rhodopsins can be triggered by a variety of stimuli. They discovered that proteins in this group are quite versatile, involved not just in vision, but also in taste and temperature sensing. Upon further investigation, the researchers discovered that two of the 10 rhodopsins found in mosquitoes are expressed in the same antennal neurons as TRPA1.

Knocking out TRPA1 eliminated the mosquito's sensitivity to IR. But insects with faults in either of the rhodopsins, Op1 or Op2, were unaffected. Even knocking out both the rhodopsins together didn't entirely eliminate the animal's sensitivity to IR, although it significantly weakened the sense.

Their results indicated that more intense thermal IR -- like what a mosquito would experience at closer range (for example, around 1 foot) -- directly activates TRPA1. Meanwhile, Op1 and Op2 can get activated at lower levels of thermal IR, and then indirectly trigger TRPA1. Since our skin temperature is constant, extending the sensitivity of TRPA1 effectively extends the range of the mosquito's IR sensor to around 2.5 ft.

A tactical advantage


Half the world's population is at risk for mosquito-borne diseases, and about a billion people get infected every year, Chandel said. What's more, climate change and worldwide travel have extended the ranges of Aedes aegypti beyond tropical and subtropical countries. These mosquitoes are now present in places in the US where they were never found just a few years ago, including California.

The team's discovery could provide a way to improve methods for suppressing mosquito populations. For instance, incorporating thermal IR from sources around skin temperature could make mosquito traps more effective. The findings also help explain why loose-fitting clothing is particularly good at preventing bites. Not only does it block the mosquito from reaching our skin, it also allows the IR to dissipate between our skin and the clothing so the mosquitoes cannot detect it.

"Despite their diminutive size, mosquitoes are responsible for more human deaths than any other animal," DeBeaubien said. "Our research enhances the understanding of how mosquitoes target humans and offers new possibilities for controlling the transmission of mosquito-borne diseases."

Read more at Science Daily