Aug 5, 2023

New exoplanet discovery builds better understanding of planet formation

An international team of scientists has discovered an unusual Jupiter-sized planet orbiting the low-mass star TOI-4860, located in the constellation Corvus.

The newly discovered gas giant, named TOI-4860 b, is an unusual planet for two reasons: stars of such low mass are not expected to host planets like Jupiter, and the planet appears to be particularly enriched by heavy elements.

The study, led by University of Birmingham astronomers, is published today (Friday 4th August) as a letter in the Monthly Notices of the Royal Astronomical Society.

The planet was initially identified by NASA's Transiting Exoplanet Survey Satellite as a dip in brightness as it transited in front of its host star, but that data alone was insufficient to confirm that it was a planet.

The team used the SPECULOOS South Observatory, located in the Atacama Desert in Chile, to measure the planetary signal at several wavelengths and validate its planetary nature. The astronomers also observed the planet just before and after it disappeared behind its host star and saw no change in light, meaning the planet was not emitting any of its own. Finally, the team collaborated with a Japanese group using the Subaru Telescope in Hawai'i to measure the mass of the planet and fully confirm it.

Following this star and confirming its planet was the initiative of a group of PhD students within the SPECULOOS project.

George Dransfield, one of those PhD students, who recently submitted her thesis at the University of Birmingham, explains: "Under the canonical planet formation model, the less mass a star has, the less massive the disc of material around that star.

"Since planets are created from that disc, high-mass planets like Jupiter were widely expected not to form. However, we were curious about this and wanted to check planetary candidates to see if it was possible. TOI-4860 is our first confirmation, and also the lowest-mass star hosting such a high-mass planet."

Amaury Triaud, Professor of Exoplanetology at the University of Birmingham, who led the study, said: "I am ever thankful to the bright PhD students of our team for proposing to observe systems like TOI-4860. Their work has really paid off since planets like TOI-4860 are vital to deepening our understanding of planet formation.

"A hint of what might have happened is hidden in the planetary properties, which appear particularly enriched in heavy elements. We have detected something similar in the host star too, so it is likely that an abundance of heavy elements catalysed the planet formation process."

The new gas giant takes about 1.52 days to complete a full orbit around its host star, but because its host is a cool, low-mass star, the planet itself can be referred to as a 'Warm Jupiter'. This is a subclass of planet that holds particular interest for astronomers looking to build on their initial observations and learn more about how these kinds of planets are formed.
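A 1.52-day orbit implies the planet sits extremely close to its star, which is why it is warmed despite the star being cool. A rough sketch using Kepler's third law illustrates this; note the stellar mass below is an assumed illustrative value for a low-mass star, not a figure from the article.

```python
import math

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

M_star = 0.3 * M_SUN # assumed mass for a low-mass star (illustrative, not from the article)
P = 1.52 * 86400     # orbital period of TOI-4860 b, in seconds

# Semi-major axis of the orbit
a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)
a_au = a / 1.496e11  # convert metres to astronomical units

print(f"semi-major axis ≈ {a_au:.3f} AU")
```

Under these assumptions the orbit comes out at a few hundredths of an astronomical unit, dozens of times closer to its star than Mercury is to the Sun.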

Mathilde Timmermans, another student of the SPECULOOS project, working at the University of Liege in Belgium concludes: "Thanks to its very short orbital period, and to the properties of its host star, the discovery of TOI-4860 b provides a brilliant opportunity to study the atmospheric properties of a warm Jupiter and learn more about how gas giants are formed."

Read more at Science Daily

Oldest known species of swimming jellyfish identified

Royal Ontario Museum (ROM) announces the oldest swimming jellyfish in the fossil record, the newly named Burgessomedusa phasmiformis. The findings are published in the journal Proceedings of the Royal Society B.

Jellyfish belong to medusozoans, or animals producing medusae, and include today's box jellies, hydroids, stalked jellyfish and true jellyfish. Medusozoans are part of one of the oldest groups of animals to have existed, called Cnidaria, a group which also includes corals and sea anemones. Burgessomedusa unambiguously shows that large, swimming jellyfish with a typical saucer or bell-shaped body had already evolved more than 500 million years ago.

Burgessomedusa fossils are exceptionally well preserved at the Burgess Shale, considering jellyfish are roughly 95% water. ROM holds close to two hundred specimens, from which remarkable details of internal anatomy and tentacles can be observed, with some specimens reaching more than 20 centimetres in length. These details enable classifying Burgessomedusa as a medusozoan. By comparison with modern jellyfish, Burgessomedusa would also have been capable of free swimming, and the presence of tentacles would have enabled it to capture sizeable prey.

"Although jellyfish and their relatives are thought to be one of the earliest animal groups to have evolved, they have been remarkably hard to pin down in the Cambrian fossil record. This discovery leaves no doubt they were swimming about at that time," said co-author Joe Moysiuk, a Ph.D. candidate in Ecology & Evolutionary Biology at the University of Toronto, who is based at ROM.

This study, identifying Burgessomedusa, is based on fossil specimens discovered at the Burgess Shale and mostly found in the late 1980s and 1990s under former ROM Curator of Invertebrate Palaeontology Desmond Collins. They show that the Cambrian food chain was far more complex than previously thought, and that predation was not limited to large swimming arthropods like Anomalocaris (see field image showing Burgessomedusa and Anomalocaris preserved on the same rock surface).

"Finding such incredibly delicate animals preserved in rock layers on top of these mountains is such a wondrous discovery. Burgessomedusa adds to the complexity of Cambrian food webs, and like Anomalocaris, which lived in the same environment, these jellyfish were efficient swimming predators," said co-author Dr. Jean-Bernard Caron, ROM's Richard Ivey Curator of Invertebrate Palaeontology. "This adds yet another remarkable lineage of animals that the Burgess Shale has preserved chronicling the evolution of life on Earth."

Cnidarians have complex life cycles with one or two body forms: a vase-shaped body called a polyp and, in medusozoans, a bell- or saucer-shaped body called a medusa or jellyfish, which can be free-swimming or not. While fossilized polyps are known in ca. 560-million-year-old rocks, the origin of the free-swimming medusa or jellyfish is not well understood. Fossils of any type of jellyfish are extremely rare. As a consequence, their evolutionary history is based on microscopic fossilized larval stages and the results of molecular studies from living species (modelling of divergence times of DNA sequences). Though some fossils of comb jellies have also been found at the Burgess Shale and in other Cambrian deposits, and may superficially resemble medusozoan jellyfish from the phylum Cnidaria, comb jellies actually belong to a quite separate phylum of animals called Ctenophora. Previous reports of Cambrian swimming jellyfish are reinterpreted as ctenophores.

Read more at Science Daily

Scientists uncover a startling -- and exploitable -- coordination of gene expression in tumors

A Ludwig Cancer Research study has identified a pair of genes whose expression by a type of immune cell within tumors is predictive of outcomes for cancer patients and is linked to a vast network of gene expression programs, engaged by multiple cell types in the tumor microenvironment, that control human cancers.

Researchers led by Ludwig Lausanne's Mikaël Pittet report in the current issue of Science that patients with higher expression of the gene CXCL9 in their tumor-associated macrophages had far better clinical outcomes than those with higher expression of a gene named SPP1 by the immune cells. Macrophages expressing the former gene, they show, are invariably poised to attack cancer cells, while those expressing SPP1 are in a state supportive of tumor growth. Most intriguing, however, is the discovery that when the ratio of CXCL9 to SPP1 (the "CS ratio") is high in the tumor microenvironment (TME), gene expression programs in other TME cells indicate a similarly anti-tumor slant; a low CS ratio, on the other hand, invariably accompanies pro-tumor gene expression signatures across the TME.

"We were very surprised to find that just this one parameter -- the ratio of two genes primarily expressed by macrophages -- could tell us so much else about the tumor," said Pittet. "This is true for multiple types of solid tumors. It means that, despite their enormous complexity, the microenvironments of tumors are governed by a clear set of rules. We have described one of them in this study."

With further validation in prospective clinical studies, Pittet noted, the CS ratio could be an easily measured molecular marker of likely patient prognosis and a useful tool for the management of therapy. Beyond that, the networks of linked gene expression signatures across cell types identified by the study expose several potential molecular targets for the development of drugs that might tip the TME into a state more susceptible to treatments like immunotherapy.

Noncancerous cells of the TME play a critical role in the growth and viability of tumors. These include fibroblasts, which churn out the molecular filler of tissues, endothelial cells that build blood vessels, epithelial cells that line body cavities and a menagerie of immune cell species that variously help or hinder tumor growth. The possibility of targeting these cells to treat cancers is tantalizing because, unlike malignant cells, they do not mutate rapidly and are thus unlikely to evolve resistance to therapies.

Pittet and his colleagues were interested in how much the TME varies between tumors. To find out, they conducted an unbiased analysis of 52 primary and metastatic tumors from 51 patients with head and neck cancers, examining how global gene expression captured in individual cells but statistically analyzed across tumors as a whole corresponded to patient outcomes.

This approach identified CXCL9 and SPP1 -- whose expression is mutually exclusive in individual macrophages -- as being tightly linked to prognosis, and this turned out to be true for other solid cancers as well. The expression of the two genes, Pittet and colleagues show, is also more categorically associated with the anti-tumor or pro-tumor "polarity" of macrophages than currently used markers.

Notably, whether a tumor's ratio of CXCL9 to SPP1 expression was high or low (termed CShi or CSlow) was broadly consistent with the state of other types of TME cells in head and neck tumors and with several phenomena associated with pro- and anti-tumor effects. CShi tumors, for example, tended to be infiltrated with B and T lymphocytes and dendritic cells, which all drive anti-tumor immunity. Further, other cell types in these tumors engaged signaling molecules and pathways that fuel inflammation or otherwise instigate immune responses.
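The CShi/CSlow labelling described above boils down to comparing two expression values. A minimal sketch, with entirely hypothetical expression values and a simple threshold (the study's actual statistical pipeline is more involved):

```python
# Illustrative sketch only: classify a tumor as CShi or CSlow from
# macrophage expression of CXCL9 and SPP1. Values below are hypothetical.

def cs_label(cxcl9: float, spp1: float) -> str:
    """Return 'CShi' when CXCL9 expression exceeds SPP1, else 'CSlow'."""
    ratio = cxcl9 / spp1
    return "CShi" if ratio > 1.0 else "CSlow"

# Two made-up tumor samples: (CXCL9 level, SPP1 level)
samples = {"tumor_A": (8.2, 1.7), "tumor_B": (0.9, 5.4)}
for name, (cxcl9, spp1) in samples.items():
    print(name, cs_label(cxcl9, spp1))
```

Under this toy rule, tumor_A would be labelled CShi (anti-tumor slant) and tumor_B CSlow (pro-tumor slant).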

CSlow tumors, meanwhile, bore gene expression signatures associated with cancer growth and progression, such as adaptations to oxygen starvation, the formation of new blood vessels and the induction of cellular transformations that propel cancer metastasis.

"Just by looking at the ratio of these two genes in macrophages, you can deduce the molecular activity of tumor cells, endothelial cells, fibroblasts -- you name it," said Pittet. "This startling coherence means that tumors are not a chaotic place, that all these cell states within the TME are coordinated. This information has the potential to be very useful for the development of precision medicine strategies for cancer therapy."

Pittet and his colleagues will next examine whether the gene expression networks identified in their study can be used to prospectively predict patient outcomes or gauge likely responses to various therapies. They will also be looking in more detail at other coordinated axes of gene expression in the TME, how they interact with the CS ratio and how each influences the other.

"The big question is, what are the best ways to interfere therapeutically with this network, with the goal being benefit to the patient?" said Pittet.

Read more at Science Daily

Aug 4, 2023

Gas streamers feed triple baby stars

New observations and simulations of three spiral arms of gas feeding material to three protostars forming in a trinary system have clarified the formation of multi-star systems.

Most stars with a mass similar to the Sun form in multi-star systems together with other stars, so an understanding of multi-star system formation is important to an overall theory of star formation. However, the complexity of such systems and a lack of high-resolution, high-sensitivity data have left astronomers uncertain about the formation scenario. In particular, recent observations of protostars have often reported structures called "streamers" of gas flowing toward the protostars, but it has been unclear how these streamers form.

An international team led by Jeong-Eun Lee, a professor at Seoul National University, used the Atacama Large Millimeter/submillimeter Array (ALMA) to observe the trinary protostar system IRAS 04239+2436 located 460 light-years away in the constellation Taurus.

The team found that emissions from sulfur monoxide (SO) molecules trace three spiral arms around the three protostars forming in the system.

Comparison with simulations led by Tomoaki Matsumoto, a professor at Hosei University, using the supercomputers "ATERUI" and "ATERUI II" at the Center for Computational Astrophysics of the National Astronomical Observatory of Japan (NAOJ), indicates that the three spiral arms are streamers feeding material to the three protostars. The combination of observations and simulations revealed, for the first time, how the streamers are created and contribute to the growth of the protostars at the center.

Read more at Science Daily

Exploring the origins of life

Catalytic molecules can form metabolically active clusters by creating and following concentration gradients -- this is the result of a new study by scientists from the Max Planck Institute for Dynamics and Self-Organization (MPI-DS). Their model predicts the self-organization of molecules involved in metabolic pathways, adding a possible new mechanism to the theory of the origin of life. The results can help to better understand how molecules participating in complex biological networks can form dynamic functional structures, and provide a platform for experiments on the origins of life.

One possible scenario for the origin of life is the spontaneous organization of interacting molecules into cell-like droplets. These molecular species would form the first self-replicating metabolic cycles, which are ubiquitous in biology and common throughout all organisms. According to this paradigm, the first biomolecules would need to cluster together through slow and overall inefficient processes. Such slow cluster formation seems incompatible with how quickly life has appeared. Scientists from the department of Living Matter Physics from MPI-DS have now proposed an alternative model that explains such cluster formation and thus the fast onset of the chemical reactions required to form life.

"For this, we considered different molecules in a simple metabolic cycle, where each species produces a chemical used by the next one," says Vincent Ouazan-Reboul, the first author of the study. "The only elements in the model are the catalytic activity of the molecules, their ability to follow concentration gradients of the chemicals they produce and consume, as well as the information on the order of molecules in the cycle," he continues. Consequently, the model showed the formation of catalytic clusters including various molecular species. Furthermore, the growth of clusters happens exponentially fast. Molecules can hence assemble very quickly and in large numbers into dynamic structures.

"In addition, the number of molecular species which participate in the metabolic cycle plays a key role in the structure of the formed clusters," Ramin Golestanian, director at MPI-DS, summarizes: "Our model leads to a plethora of complex scenarios for self-organization and makes specific predictions about functional advantages that arise for an odd or even number of participating species. It is remarkable that non-reciprocal interactions, as required for our newly proposed scenario, are generically present in all metabolic cycles."

In another study, the authors found that self-attraction is not required for clustering in a small metabolic network. Instead, network effects can cause even self-repelling catalysts to aggregate. With this, the researchers demonstrate new conditions in which complex interactions can create self-organized structures.

Read more at Science Daily

Hartshorn salt and 'baking' solves a serious environmental problem

Polyester is the second most used textile in the world and an environmental menace, especially because most of it never gets recycled. Blended polyester/cotton fabric has been difficult for the industry to separate and therefore recycle. Now, a group of young chemists from the University of Copenhagen has invented a green and surprisingly simple solution using a single household ingredient.

From clothes to sofas to curtains, polyester dominates our everyday lives, with a staggering 60 million tons of this popular fabric produced annually. However, polyester production takes a toll on the climate and the environment, as only a mere 15% of it is recycled, while the rest ends up in landfills or is incinerated, generating further carbon emissions.

Recycling polyester poses a significant challenge, particularly in separating the plastic and cotton fibers that the blend fabric is made of without losing either of them in the process. Conventional recycling methods often prioritize preserving the plastic component, resulting in a loss of cotton fibers. Moreover, these methods are costly, complex, and generate metal waste due to the use of metal catalysts, which can be cytotoxic and contaminate the process.

In a remarkable breakthrough, a group of young chemists has unveiled a surprisingly simple solution to this pressing problem, potentially revolutionizing the sustainability of the textile industry.

"The textile industry urgently requires a better solution to handle blended fabrics like polyester/cotton. Currently, there are very few practical methods capable of recycling both cotton and plastic -- it's typically an either-or scenario. However, with our newly discovered technique, we can depolymerize polyester into its monomers while simultaneously recovering cotton on a scale of hundreds of grams, using an incredibly straightforward and environmentally friendly approach. This traceless catalytic methodology could be the game-changer," explains postdoc Yang Yang of the Jiwoong Lee group at the University of Copenhagen's Department of Chemistry, who serves as the lead author of the scientific research article.

Hartshorn salt and 24 hours in the 'oven'

The new method requires no special equipment -- just heat, a non-toxic solvent, and an ordinary household ingredient.

"For example, we can take a polyester dress, cut it up into small pieces and place it in a container. Then, add a bit of mild solvent, and thereafter hartshorn salt, which many people know as a leavening agent in baked goods. We then heat it all up to 160 degrees Celsius and leave it for 24 hours. The result is a liquid in which the plastic and cotton fibers settle into distinct layers. It's a simple and cost-effective process," explains Shriaya Sharma, a doctoral student of the Jiwoong Lee group at the Department of Chemistry and study co-author.

In the process, the hartshorn salt, also called ammonium bicarbonate, is broken down into ammonia, CO2 and water. The combination of ammonia and CO2 acts as a catalyst, triggering a selective depolymerization reaction that breaks down the polyester while preserving the cotton fibers. Although ammonia is toxic in isolation, when combined with CO2, it becomes both environmentally friendly and safe for use. Due to the mild nature of the chemicals involved, the cotton fibers remain intact and in excellent condition.

Previously, the same research group demonstrated that CO2 could serve as a catalyst for breaking down nylon, among other things, without leaving any trace. This discovery inspired them to explore the use of hartshorn salt. Nevertheless, the researchers were pleasantly surprised when their simple recipe yielded successful results.

"At first, we were excited to see it work so well on the PET bottles alone. Then, when we discovered that it worked on polyester fabric as well, we were just ecstatic. It was indescribable. That it was so simple to perform was nearly too good to be true," says Carlo Di Bernardo, doctoral student and study co-author.

While the method has only been tested at the laboratory level thus far, the researchers point to its scalability and are now in contact with companies to test the method on an industrial scale.

"We're hoping to commercialize this technology that harbors such great potential. Keeping this knowledge behind the walls of the university would be a huge waste," concludes Yang Yang.

Read more at Science Daily

Humans unable to detect over a quarter of deepfake speech samples

The study, published today in PLOS ONE, is the first to assess human ability to detect artificially generated speech in a language other than English.

Deepfakes are synthetic media intended to resemble a real person's voice or appearance. They fall under the category of generative artificial intelligence (AI), a type of machine learning (ML) that trains an algorithm to learn the patterns and characteristics of a dataset, such as video or audio of a real person, so that it can reproduce original sound or imagery.

While early deepfake speech algorithms may have required thousands of samples of a person's voice to be able to generate original audio, the latest pre-trained algorithms can recreate a person's voice using just a three-second clip of them speaking. Open-source algorithms are freely available and while some expertise would be beneficial, it would be feasible for an individual to train them within a few days.

Tech firm Apple recently announced software for iPhone and iPad that allows a user to create a copy of their voice using 15 minutes of recordings.

Researchers at UCL used a text-to-speech (TTS) algorithm trained on two publicly available datasets, one in English and one in Mandarin, to generate 50 deepfake speech samples in each language. These samples were different from the ones used to train the algorithm to avoid the possibility of it reproducing the original input.

These artificially generated samples, along with genuine samples, were played for 529 participants to see whether they could distinguish real from fake speech. Participants were able to identify fake speech only 73% of the time, and this improved only slightly after they received training to recognise aspects of deepfake speech.
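The headline figure ("over a quarter" of samples undetected) follows directly from the 73% detection rate, since any fake sample not identified goes undetected. A quick check of the arithmetic:

```python
# Figures from the article: participants identified fake speech 73% of the time.
accuracy = 0.73
miss_rate = 1 - accuracy  # fraction of deepfake samples that went undetected

# 27% of fakes slipped past listeners, i.e. more than a quarter
print(f"undetected: {miss_rate:.0%}")
```

This is the sense in which humans are "unable to detect over a quarter" of deepfake speech samples, even before considering newer, more convincing generators.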

Kimberly Mai (UCL Computer Science), first author of the study, said: "Our findings confirm that humans are unable to reliably detect deepfake speech, whether or not they have received training to help them spot artificial content. It's also worth noting that the samples that we used in this study were created with algorithms that are relatively old, which raises the question whether humans would be less able to detect deepfake speech created using the most sophisticated technology available now and in the future."

The next step for the researchers is to develop better automated speech detectors as part of ongoing efforts to create detection capabilities to counter the threat of artificially generated audio and imagery.

Though there are benefits from generative AI audio technology, such as greater accessibility for those whose speech may be limited or who may lose their voice due to illness, there are growing fears that such technology could be used by criminals and nation states to cause significant harm to individuals and societies.

Documented cases of deepfake speech being used by criminals include one 2019 incident where the CEO of a British energy company was convinced to transfer hundreds of thousands of pounds to a false supplier by a deepfake recording of his boss's voice.

Read more at Science Daily

Aug 3, 2023

Gravitational arcs in 'El Gordo' galaxy cluster

A new image of the galaxy cluster known as "El Gordo" is revealing distant and dusty objects never seen before, and providing a bounty of fresh science. The infrared image, taken by NASA's James Webb Space Telescope, displays a variety of unusual, distorted background galaxies that were only hinted at in previous Hubble Space Telescope images.

El Gordo is a cluster of hundreds of galaxies that existed when the universe was 6.2 billion years old, making it a "cosmic teenager." It's the most massive cluster known to exist at that time. ("El Gordo" is Spanish for the "Fat One.")

The team targeted El Gordo because it acts as a natural, cosmic magnifying glass through a phenomenon known as gravitational lensing. Its powerful gravity bends and distorts the light of objects lying behind it, much like an eyeglass lens.

"Lensing by El Gordo boosts the brightness and magnifies the sizes of distant galaxies. This lensing effect provides a unique window into the distant universe," said Brenda Frye of the University of Arizona. Frye is co-lead of the PEARLS-Clusters branch of the Prime Extragalactic Areas for Reionization and Lensing Science (PEARLS) team and lead author of one of four papers analyzing the El Gordo observations.

The Fishhook

Within the image of El Gordo, one of the most striking features is a bright arc represented in red at upper right. Nicknamed "El Anzuelo" (The Fishhook) by one of Frye's students, the light from this galaxy took 10.6 billion years to reach Earth. Its distinctive red color is due to a combination of reddening from dust within the galaxy itself and cosmological redshift due to its extreme distance.

By correcting for the distortions created by lensing, the team was able to determine that the background galaxy is disk-shaped but only 26,000 light-years in diameter -- about one-fourth the size of the Milky Way. They also were able to study the galaxy's star formation history, finding that star formation was already rapidly declining in the galaxy's center, a process known as quenching.

"We were able to carefully dissect the shroud of dust that envelops the galaxy center where stars are actively forming," said Patrick Kamieneski of Arizona State University, lead author on a second paper. "Now, with Webb, we can peer through this thick curtain of dust with ease, allowing us to see firsthand the assembly of galaxies from the inside out."

The Thin One

Another prominent feature in the Webb image is a long, pencil-thin line at left of center. Known as "La Flaca" (the Thin One), it is another lensed background galaxy whose light also took nearly 11 billion years to reach Earth.

Not far from La Flaca is another lensed galaxy. When the researchers examined that galaxy closely, they found a single red giant star that they nicknamed Quyllur, which is the Quechua term for star.

Previously, Hubble has found other lensed stars (such as Earendel), but they were all blue supergiants. Quyllur is the first individual red giant star observed beyond 1 billion light-years from Earth. Such stars at high redshift are only detectable using the infrared filters and sensitivity of Webb.

"It's almost impossible to see lensed red giant stars unless you go into the infrared. This is the first one we've found with Webb, but we expect there will be many more to come," said Jose Diego of the Instituto de Física de Cantabria in Spain, lead author of a third paper on El Gordo.

Galaxy Group and Smudges

Other objects within the Webb image, while less prominent, are equally interesting scientifically. For example, Frye and her team (which includes nine students from high school to graduate students) identified five multiply lensed galaxies which appear to be a baby galaxy cluster forming about 12.1 billion years ago. There are another dozen candidate galaxies which may also be part of this distant cluster.

"While additional data are required to confirm that there are 17 members of this cluster, we may be witnessing a new galaxy cluster forming right before our eyes, just over a billion years after the big bang," said Frye.

A final paper examines very faint, smudge-like galaxies known as ultra-diffuse galaxies. As their name suggests, these objects, which are scattered throughout the El Gordo cluster, have their stars widely spread out across space. The team identified some of the most distant ultra-diffuse galaxies ever observed, whose light traveled 7.2 billion years to reach us.

"We examined whether the properties of these galaxies are any different than the ultra-diffuse galaxies we see in the local universe, and we do actually see some differences. In particular, they are bluer, younger, more extended, and more evenly distributed throughout the cluster. This suggests that living in the cluster environment for the past 6 billion years has had a significant effect on these galaxies," explained Timothy Carleton of Arizona State University, lead author on the fourth paper.

Read more at Science Daily

Winter storms over Labrador Sea influence Gulf Stream system

The Gulf Stream, which brings warm water from the Gulf of Mexico to Europe and keeps the climate mild, is only part of a larger system of oceanic currents called the Atlantic Meridional Overturning Circulation, or AMOC for short. It runs through the Atlantic like a giant climate machine: as warm water from the tropics is transported northwards at the surface, the current reverses in the North Atlantic -- the water cools, becomes heavier and flows south at depth.

Where exactly these sinking processes take place is the subject of current research, and recent measurement programmes have located them to the east of Greenland. A team of scientists from the GEOMAR Helmholtz Centre for Ocean Research in Kiel, Germany, has now conducted a modelling study focusing on the Labrador Sea southwest of Greenland. In their study, now published in the journal Nature Communications, the researchers used complex computer simulations to show that fluctuations in the Labrador Sea can have a significant influence on the strength of sinking processes east of Greenland. An important link is a little-noticed system of deep currents that ensures rapid spreading of Labrador Sea water into the deep-sea basin between Greenland and Iceland.

"We oceanographers have long had our eyes on the Labrador Sea between Canada and Greenland," says Professor Dr Claus Böning, who led the study. "Winter storms with icy air cool the ocean temperatures to such an extent that the surface water becomes heavier than the water below. The result is deep winter mixing of the water column, whereby the volume and density of the resulting water mass can vary greatly from year to year."

In the model simulations of the past 60 years, the years 1990 to 1994, when the Labrador Sea cooled particularly strongly, stood out. "The unusually large volume of very dense Labrador Sea water that formed following extremely harsh winters led to significantly increased sinking between Greenland and Iceland in the following years," explains Claus Böning. As a result, the model simulations calculated an increase in Atlantic overturning transport of more than 20%, peaking in the late 1990s. The measurements of the circulation in the North Atlantic, which have only been carried out continuously since 2004, would then fall exactly within the decay phase of the simulated transport maximum.

"According to our model results, the observed weakening of the Atlantic circulation during this period can therefore be interpreted, at least in part, as an aftereffect of the extreme Labrador Sea winters of the 1990s," summarises Professor Dr Arne Biastoch, head of the Ocean Dynamics Research Unit at GEOMAR and co-author of the study. However, he clarifies: "Although we cannot yet say whether a longer-term weakening of the overturning is already occurring, all climate models predict a weakening as a result of human-induced climate change as 'very likely' for the future."

Read more at Science Daily

Butterfly-inspired films create vibrant colors while passively cooling objects

On a hot summer day, white clothing feels cooler than other colors because it reflects -- rather than absorbs -- sunlight. Other colors, such as blue or black, heat up as they absorb light. To circumvent this heating effect in colored cooling films, researchers drew inspiration from nanostructures in butterfly wings.

The new films, which don't absorb any light, could be used on the outside of buildings, vehicles and equipment to reduce the energy needed for cooling while preserving vivid color properties.

"In buildings, large amounts of energy are used for cooling and ventilation, and running the air conditioner in electric cars can reduce the driving range by more than half," said research team leader Wanlin Wang from Shenzhen University in China. "Our cooling films could help advance energy sustainability and carbon neutrality."

In Optica, Optica Publishing Group's journal for high-impact research, the researchers show that the films they developed lower the temperature of colorful objects to about 2 °C below the ambient temperature. They also found that when left outside all day, the blue version of the films was approximately 26 °C cooler than traditional blue car paint. This represents an energy savings of approximately 1377 MJ/m2 per year.
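For a sense of scale, the quoted annual figure can be converted into an equivalent continuous cooling power. A quick back-of-the-envelope sketch, assuming the reported 1377 MJ/m2 is spread evenly over a full year:

```python
# Convert the reported annual energy savings (1377 MJ per square meter)
# into an equivalent average cooling power in W/m^2.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s

annual_savings_j = 1377e6  # 1377 MJ/m^2, as reported in the study

average_power_w = annual_savings_j / SECONDS_PER_YEAR
print(f"Average saved cooling power: {average_power_w:.1f} W/m^2")  # ~43.6 W/m^2
```

Roughly 44 watts per square meter of surface, around the clock -- comparable to typical passive radiative cooling powers reported in the literature.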

"With our new films, excellent cooling performance can be achieved, no matter the desired color, saturation or brightness," said Wang. "They could even be used on textiles to create clothes of any color that are comfortable in hot temperatures."

Inspired by nature

A car with blue paint appears blue because it absorbs yellow light and reflects blue light. The large amount of light that is absorbed heats the car. Morpho butterflies, however, produce their highly saturated blue color based on the nanostructure of their wings. The design of the cooling nanofilm mimics these structures to produce vibrant colors that don't absorb light like traditional paint.

To create their Morpho-inspired nanofilms, the researchers placed a disordered material (rough frosted glass) under a multilayer material made of titanium dioxide and aluminum oxide. They then placed this structure on a silver layer that reflects all light, thus preventing the absorption of solar radiation and the heating associated with that absorption.

The film's color is determined by how components within its multilayered structure reflect light. To create blue, for example, the multilayer material is designed to reflect yellow light in a very narrow range of angles while the disordered structure diffuses the blue light across a broad area.
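In multilayer optics of this kind, layer thicknesses are typically chosen so that reflections from every interface interfere constructively at the target wavelength. A minimal quarter-wave-stack sketch; the refractive indices and design wavelength below are illustrative assumptions, not values from the study:

```python
# Quarter-wave stack rule of thumb: each layer's optical thickness (n * d)
# equals a quarter of the design wavelength, so reflections from all
# interfaces add in phase. All values here are illustrative assumptions.
n_tio2 = 2.4    # assumed refractive index of titanium dioxide
n_al2o3 = 1.65  # assumed refractive index of aluminum oxide
wavelength_nm = 580.0  # yellow light: the band a "blue" film reflects narrowly

d_tio2 = wavelength_nm / (4 * n_tio2)
d_al2o3 = wavelength_nm / (4 * n_al2o3)
print(f"TiO2 layer: {d_tio2:.1f} nm, Al2O3 layer: {d_al2o3:.1f} nm")
```

The sketch shows why such stacks use alternating high- and low-index materials: the larger the index contrast, the stronger and narrower the reflection band.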

Although this type of passive photonic thermal management has been accomplished before, it has only been used with white or clear objects because it is difficult to maintain a wide viewing angle and high color saturation.

Passive cooling of colorful objects


"Thanks to the layered structure we developed, we were able to extend the passive cooling method from colorless objects to colorful ones while preserving color performance," said Wang. "In other words, our blue film looks blue across a large range of viewing angles and doesn't heat up because it reflects all the light. In addition, high saturation and brightness can be achieved by optimizing the structure."

To test the new technology, the researchers created blue, yellow and colorless films, which they placed outdoors at Shenzhen University, on surfaces such as roofs, cars, cloth and cell phones, from 9 a.m. to 4 p.m. in both winter and summer. Using thermocouple sensors and infrared cameras to measure temperature, they found that the cooling films were more than 15 °C cooler than the surfaces they were placed on in the winter and about 35 °C cooler in the summer.

Read more at Science Daily

Social media algorithms exploit how humans learn from their peers

In prehistoric societies, humans tended to learn from members of our ingroup or from more prestigious individuals, as this information was more likely to be reliable and result in group success. However, with the advent of diverse and complex modern communities -- and especially in social media -- these biases become less effective. For example, a person we are connected to online might not necessarily be trustworthy, and people can easily feign prestige on social media. In a review published in the journal Trends in Cognitive Sciences on August 3rd, a group of social scientists describe how the functions of social media algorithms are misaligned with human social instincts meant to foster cooperation, which can lead to large-scale polarization and misinformation.

"Several user surveys, on both Twitter and Facebook, suggest most users are exhausted by the political content they see. A lot of users are unhappy, and there are a lot of reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation," says first author William Brady, a social psychologist in the Kellogg School of Management at Northwestern.

"We wanted to put out a systematic review that's trying to help understand how human psychology and algorithms interact in ways that can have these consequences," says Brady. "One of the things that this review brings to the table is a social learning perspective. As social psychologists, we're constantly studying how we can learn from others. This framework is fundamentally important if we want to understand how algorithms influence our social interactions."

Humans are biased to learn from others in a way that typically promotes cooperation and collective problem-solving, which is why they tend to learn more from individuals they perceive as a part of their ingroup and those they perceive to be prestigious. In addition, when learning biases were first evolving, morally and emotionally charged information was important to prioritize, as this information would be more likely to be relevant to enforcing group norms and ensuring collective survival.

In contrast, algorithms are usually selecting information that boosts user engagement in order to increase advertising revenue. This means algorithms amplify the very information humans are biased to learn from, and they can oversaturate social media feeds with what the researchers call Prestigious, Ingroup, Moral, and Emotional (PRIME) information, regardless of the content's accuracy or representativeness of a group's opinions. As a result, extreme political content or controversial topics are more likely to be amplified, and if users are not exposed to outside opinions, they might find themselves with a false understanding of the majority opinion of different groups.

"It's not that the algorithm is designed to disrupt cooperation," says Brady. "It's just that its goals are different. And in practice, when you put those functions together, you end up with some of these potentially negative effects."

To address this problem, the research group first proposes that social media users need to be more aware of how algorithms work and why certain content shows up on their feed. Social media companies don't typically disclose the full details of how their algorithms select for content, but one start might be offering explainers for why a user is being shown a particular post. For example, is it because the user's friends are engaging with the content or because the content is generally popular? Outside of social media companies, the research team is developing their own interventions to teach people how to be more conscious consumers of social media.

In addition, the researchers propose that social media companies could take steps to change their algorithms, so they are more effective at fostering community. Instead of solely favoring PRIME information, algorithms could set a limit on how much PRIME information they amplify and prioritize presenting users with a diverse set of content. These changes could continue to amplify engaging information while preventing more polarizing or politically extreme content from becoming overrepresented in feeds.
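One concrete reading of this proposal: the ranker keeps scoring posts by engagement, but caps the share of PRIME-flagged items in any feed it serves and backfills with other content. A minimal sketch; the scoring scheme and flags here are invented for illustration and do not represent any real platform's system:

```python
# Toy feed ranker: sort by engagement score, but cap the fraction of
# PRIME-flagged posts in the returned feed, backfilling with non-PRIME posts.
# Illustrative only -- not a real platform algorithm.
def rank_feed(posts, feed_size=5, prime_cap=0.4):
    """posts: list of dicts with 'score' (float) and 'is_prime' (bool)."""
    ranked = sorted(posts, key=lambda p: p["score"], reverse=True)
    max_prime = int(feed_size * prime_cap)
    feed, n_prime = [], 0
    for post in ranked:
        if len(feed) == feed_size:
            break
        if post["is_prime"]:
            if n_prime < max_prime:
                feed.append(post)
                n_prime += 1
        else:
            feed.append(post)
    return feed

# High-scoring posts happen to be PRIME in this toy data set:
posts = [{"score": s, "is_prime": s > 0.5} for s in [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]]
feed = rank_feed(posts)
print(sum(p["is_prime"] for p in feed), "PRIME posts out of", len(feed))  # 2 of 5
```

Without the cap, four of the five slots would go to PRIME content; with it, the feed still leads with the most engaging posts but leaves room for a more representative mix.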

Read more at Science Daily

Aug 2, 2023

Dune patterns reveal environmental change on Earth and other planets

Dunes, the mounds of sand formed by the wind that vary from ripples on the beach to towering behemoths in the desert, are incarnations of surface processes, climate change, and the surrounding atmosphere. For decades, scientists have puzzled over why they form different patterns.

Now, Stanford researchers have found a way to interpret the meaning of these patterns. Their results, published in Geology Aug. 1, can be used as a new tool for understanding environmental changes on any planetary body that harbors dunes, including Venus, Earth, Mars, Titan, Io, and Pluto.

"When you look at other planets, all you have is pictures taken from hundreds to thousands of kilometers away from the surface. You can see dunes -- but that's it. You don't have access to the surface," said senior study author Mathieu Lapôtre, an assistant professor of Earth and planetary sciences in the Stanford Doerr School of Sustainability. "These findings offer a really exciting new tool to decipher the environmental history of these other planets where we have no data."

The scientists analyzed satellite images of 46 dune fields on Earth and Mars and studied how the dunes interact, or exchange sand. Physically, dune interactions manifest themselves as locations where the crestlines of two dunes get very close to each other. Through such interactions, dunes evolve toward a pattern that is free of defects, reflecting a state of equilibrium with local conditions. Thus, the researchers hypothesized that a high number of interactions, in turn, must signal recent or local changes in those boundary conditions. To test their hypothesis, they used data from Earth and Mars to verify how known changes in environmental conditions, such as wind direction or the amount of sand available, affected dune interactions in the dune fields.
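As a concrete reading of "interaction density," one could count crestline pairs whose closest approach falls below some threshold and normalize by the field's area. A toy sketch; the crestline coordinates and threshold are invented for illustration and are not the paper's measurement procedure:

```python
# Toy interaction-density estimate: crestlines as lists of (x, y) points in km;
# two dunes "interact" if any pair of their points comes closer than a threshold.
from itertools import combinations
import math

def min_distance(crest_a, crest_b):
    """Smallest point-to-point distance between two sampled crestlines."""
    return min(math.dist(p, q) for p in crest_a for q in crest_b)

def interaction_density(crestlines, area_km2, threshold_km=0.1):
    n = sum(1 for a, b in combinations(crestlines, 2)
            if min_distance(a, b) < threshold_km)
    return n / area_km2  # interactions per square kilometer

# Three schematic crestlines: the first two nearly touch, the third is distant.
crestlines = [
    [(0.0, 0.0), (1.0, 0.0)],
    [(1.05, 0.0), (2.0, 0.3)],
    [(0.0, 5.0), (1.0, 5.0)],
]
density = interaction_density(crestlines, area_km2=10.0)
print(density)  # 0.1 interactions per km^2
```

A field in equilibrium would score near zero on such a count; a recently disturbed field would score high, which is the signal the researchers hypothesized.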

Finding a pattern

In a part of China's Tengger Desert, researchers once flattened a dune field to have a baseline for understanding its subsequent reformation. The study authors analyzed satellite images of the dune field from 2016 to 2022 to see how it grew from a flat bed to large dunes in equilibrium with their environment.

"When the dunes and their patterns were not in equilibrium with their current conditions, the interaction density was high, and through time we could see it decreased consistently, as is expected from our hypothesis," Lapôtre said.

Next, they investigated dunes migrating through a valley in the Namib Desert to see how changes in the wind conditions, triggered by topography, impacted dune patterns. They found that dunes outside the valley displayed few defects in their patterns, but as they migrated through the valley -- which starts very wide, then narrows, then becomes wide again -- dunes interacted more with each other.

"As both sand and winds get funneled into the valley, the dunes feel a change in their boundary conditions, and their pattern needs to adjust," said lead study author Colin Marvin, a PhD student in Earth and planetary sciences. "They move into the portion outside the valley and they again readjust to their unconfined conditions, and we see a drop in the number of interactions. This trend is exactly what we expected to see."

They also found that pattern to be true on Mars, where a big dune field occurs around the north pole. There, the migrating dunes have settled into their current conditions -- they're well spaced, they look the same, they're the same size -- and because of that, they interact very little with one another. But further downwind, the winds become more variable and frost locally makes it harder for grains to be blown away. There, the dunes react to that change until they have migrated far enough into these new conditions for their pattern to have once again matured, decreasing the number of dune interactions.

Testing the tool

"We have an upper bound on the time that it takes for a given dune to adjust to changes in environmental conditions, and that is the time it takes for a dune to migrate by a distance of one dune length," Marvin said. "We can use this to diagnose recent changes in environmental conditions on planetary bodies where we don't have any information other than images taken from orbit or radar for example."
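Marvin's rule of thumb translates directly into a turnover-time estimate: divide the dune length by the migration rate. A sketch with assumed, illustrative numbers (not values from the paper):

```python
# Upper bound on dune adjustment time: the time to migrate one dune length.
# Both inputs below are illustrative assumptions, not measured values.
dune_length_m = 500.0          # crest-to-crest length of the dune
migration_rate_m_per_yr = 5.0  # how far the dune moves per year

adjustment_time_yr = dune_length_m / migration_rate_m_per_yr
print(f"Adjustment time: ~{adjustment_time_yr:.0f} years")  # ~100 years
```

The same arithmetic applies on any planetary body once images or radar constrain dune size and migration rate, which is why the bound is useful where orbit-based data is all that exists.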

Understanding the recent climate of Mars by analyzing current dune patterns could possibly help scientists better pinpoint, for example, the latitudes and depth where future astronauts might be able to find water ice in the subsurface, Lapôtre added. The study also informs experts about the mechanics of dunes on Earth, which can help them better interpret Earth's rock record, and thus, our planet's distant past. On Saturn's moon Titan, this approach could reveal information about topography around the equator and tropics, which is near where the Dragonfly Mission is going to land in the mid 2030s.

"Topography can tell you about a lot of different things; for example, the geological history of the planet: Does Titan have tectonics? How does the interior of Titan work, and how is it coupled with the surface? Is there significant erosion?" Lapôtre said. "Interpretations of dune patterns could trigger kind of a chain reaction, where you provide a new constraint, and it's going to be useful to a bunch of people to make a bunch of discoveries down the line."

Because other planets have various sizes, gravities, temperatures, and compositions, their geological processes will differ. Compared with a rover that lands on one point of a planet to collect information, the satellite data of entire dune fields can greatly increase scientists' understanding of these extraterrestrial bodies and how they can inform our understanding of Earth.

"If we want to understand what happened in the past, or if we want to predict what will happen in the future, it's hard to do when all you have to create those models is one data point, or just one planet," Lapôtre said. "Ultimately, this kind of information allows us to make much better interpretations of Earth's past and also predictions of Earth's future."

Read more at Science Daily

Using gemstones' unique characteristics to uncover ancient trade routes

Since ancient times, gemstones have been mined and traded across the globe, sometimes traveling continents from their origin. Gems are geologically defined as minerals celebrated for beauty, strength, and rarity. Their unique elemental composition and atomic orientation act as a fingerprint, enabling researchers to uncover the stones' past, and with it, historical trade routes.

In AIP Advances, from AIP Publishing, Khedr et al. employed three modern spectroscopic techniques to rapidly analyze gems found in the Arabian-Nubian Shield and compare them with similar gems from around the world. Using laser-induced breakdown spectroscopy (LIBS), Fourier transform infrared (FTIR) spectroscopy, and Raman spectroscopy, the authors identified elements that influence gems' color, differentiated stones found within and outside the region, and distinguished natural stones from synthetic ones.

The Arabian-Nubian Shield is an exposure of mineral deposits that sandwiches the Red Sea in current-day Egypt and Saudi Arabia. The deposits date back to the Earth's earliest geological age, and the precious metals and gemstones have been harvested for thousands of years.

"We showed the main spectroscopic characteristics of gemstones from these Middle East localities to distinguish them from their counterparts in other world localities," said author Adel Surour. "This includes a variety of silicate gems such as emerald from the ancient Cleopatra's mines in Egypt, in addition to amethyst, peridot, and amazonite from other historical sites, which mostly date to the Roman times."

The various spectroscopic techniques they employed revealed different information about the stones. LIBS quickly characterizes chemical composition, while FTIR determines functional groups connected to the structure and indicates the presence of water and other hydrocarbons. Even for chemically identical materials, Raman spectroscopy shows the unique crystalline structure of the gems' atoms.

The authors identified that iron content correlates with amethysts' signature purple hue, and that other elements such as copper, chromium, and vanadium are also responsible for colorization. A signature water peak exposes lab-grown synthetic gems, which are chemically identical to natural gems and useful for scientific purposes, but far less expensive.

Read more at Science Daily

Novel molecules fight viruses by bursting their bubble-like membranes

Antiviral therapies are notoriously difficult to develop, as viruses can quickly mutate to become resistant to drugs. But what if a new generation of antivirals ignores the fast-mutating proteins on the surface of viruses and instead disrupts their protective layers?

"We found an Achilles heel of many viruses: their bubble-like membranes. Exploiting this vulnerability and disrupting the membrane is a promising mechanism of action for developing new antivirals," said Kent Kirshenbaum, professor of chemistry at NYU and the study's senior author.

In a new study published Aug. 2 in the journal ACS Infectious Diseases, the researchers show how a group of novel molecules inspired by our own immune system inactivates several viruses, including Zika and chikungunya. Their approach may not only lead to drugs that can be used against many viruses, but could also help overcome antiviral resistance.

The urgent need for new antivirals

Viruses have different proteins on their surfaces that are often the targets of therapeutics like monoclonal antibodies and vaccines. But targeting these proteins has limitations, as viruses can quickly evolve, changing the properties of the proteins and making treatments less effective. These limitations were on display when new SARS-CoV-2 variants emerged that evaded both the drugs and the vaccines developed against the original virus.

"There is an urgent need for antiviral agents that act in new ways to inactivate viruses," said Kirshenbaum. "Ideally, new antivirals won't be specific to one virus or protein, so they will be ready to treat new viruses that emerge without delay and will be able to overcome the development of resistance."

"We need to develop this next generation of drugs now and have them on the shelves in order to be ready for the next pandemic threat -- and there will be another one, for sure," added Kirshenbaum.

Drawing inspiration from our immune systems

Our innate immune system combats pathogens by producing antimicrobial peptides, the body's first line of defense against bacteria, fungi, and viruses. Most viruses that cause disease are encapsulated in membranes made of lipids, and antimicrobial peptides work by disrupting or even bursting these membranes.

While antimicrobial peptides can be synthesized in the lab, they are rarely used to treat infectious diseases in humans because they break down easily and can be toxic to healthy cells. Instead, scientists have developed synthetic materials called peptoids, which have similar chemical backbones to peptides but are better able to break through virus membranes and are less likely to degrade.

"We began to think about how to mimic natural peptides and create molecules with many of the same structural and functional features as peptides, but are composed of something that our bodies won't be able to rapidly degrade," said Kirshenbaum.

The researchers investigated seven peptoids, many originally discovered in the lab of Annelise Barron at Stanford, a co-author of the study. The NYU team studied the antiviral effects of the peptoids against four viruses: three enveloped in membranes (Zika, Rift Valley fever, and chikungunya) and one without (coxsackievirus B3).

"We were particularly interested in studying these viruses as they have no available treatment options," said Patrick Tate, a chemistry PhD student at NYU and the study's first author.

How peptoids disrupt viral membranes and avoid other cells

The membranes surrounding viruses are made of different molecules than the virus itself, as lipids are acquired from the host to form membranes. One such lipid, phosphatidylserine, is present in the membrane on the outside of viruses, but is sequestered towards the interior of human cells under normal conditions.

"Because phosphatidylserine is found on the exterior of viruses, it can be a specific target for peptoids to recognize viruses, but not recognize -- and therefore spare -- our own cells," said Tate. "Moreover, because viruses acquire lipids from the host rather than encoding from their own genomes, they have better potential to avoid antiviral resistance."

The researchers tested seven peptoids against the four viruses. They found that the peptoids inactivated all three enveloped viruses -- Zika, Rift Valley fever, and chikungunya -- by disrupting the virus membrane, but did not disrupt coxsackievirus B3, the only virus without a membrane.

Moreover, chikungunya virus containing higher levels of phosphatidylserine in its membrane was more susceptible to the peptoids. In contrast, a membrane formed exclusively with a different lipid named phosphatidylcholine was not disrupted by the peptoids, suggesting that phosphatidylserine is crucial in order for peptoids to reduce viral activity.

"We're now starting to understand how peptoids actually exert their antiviral effect -- specifically, through the recognition of phosphatidylserine," said Tate.

The researchers are continuing pre-clinical studies to evaluate the potential of these molecules in fighting viruses and to understand if they can overcome the development of resistance. Their peptoid-focused approach may hold promise for treating a wide range of viruses with membranes that can be difficult to treat, including Ebola, SARS-CoV-2, and herpes.

Read more at Science Daily

Sweet smell of success: Simple fragrance method produces major memory boost

When a fragrance wafted through the bedrooms of older adults for two hours every night for six months, memories skyrocketed. Participants in this study by University of California, Irvine neuroscientists reaped a 226% increase in cognitive capacity compared to the control group. The researchers say the finding transforms the long-known tie between smell and memory into an easy, non-invasive technique for strengthening memory and potentially deterring dementia.

The team's study appears in Frontiers in Neuroscience.

The project was conducted through the UCI Center for the Neurobiology of Learning & Memory. It involved men and women aged 60 to 85 without memory impairment. All were given a diffuser and seven cartridges, each containing a different natural oil. People in the enriched group received full-strength cartridges. Control group participants were given the oils in tiny amounts. Participants put a different cartridge into their diffuser each evening prior to going to bed, and it activated for two hours as they slept.

People in the enriched group showed a 226% increase in cognitive performance compared to the control group, as measured by a word list test commonly used to evaluate memory. Imaging revealed better integrity in the brain pathway called the left uncinate fasciculus. This pathway, which connects the medial temporal lobe to the decision-making prefrontal cortex, becomes less robust with age. Participants also reported sleeping more soundly.

Scientists have long known that the loss of olfactory capacity, or ability to smell, can predict development of nearly 70 neurological and psychiatric diseases. These include Alzheimer's and other dementias, Parkinson's, schizophrenia and alcoholism. Evidence is emerging about a link between smell loss due to COVID and ensuing cognitive decrease. Researchers have previously found that exposing people with moderate dementia to up to 40 different odors twice a day over a period of time boosted their memories and language skills, eased depression and improved their olfactory capacities. The UCI team decided to try turning this knowledge into an easy and non-invasive dementia-fighting tool.

"The reality is that over the age of 60, the olfactory sense and cognition starts to fall off a cliff," said Michael Leon, professor of neurobiology & behavior and a CNLM fellow. "But it's not realistic to think people with cognitive impairment could open, sniff and close 80 odorant bottles daily. This would be difficult even for those without dementia."

The study's first author, project scientist Cynthia Woo, said: "That's why we reduced the number of scents to just seven, exposing participants to just one each time, rather than the multiple aromas used simultaneously in previous research projects. By making it possible for people to experience the odors while sleeping, we eliminated the need to set aside time for this during waking hours every day."

The researchers say the results from their study bear out what scientists learned about the connection between smell and memory.

"The olfactory sense has the special privilege of being directly connected to the brain's memory circuits," said Michael Yassa, professor and James L. McGaugh Chair in the Neurobiology of Learning & Memory. The director of CNLM, he served as collaborating investigator. "All the other senses are routed first through the thalamus. Everyone has experienced how powerful aromas are in evoking recollections, even from very long ago. However, unlike with vision changes that we treat with glasses and hearing aids for hearing impairment, there has been no intervention for the loss of smell."

Read more at Science Daily

Aug 1, 2023

New algorithm ensnares its first 'potentially hazardous' asteroid

An asteroid discovery algorithm -- designed to uncover near-Earth asteroids for the Vera C. Rubin Observatory's upcoming 10-year survey of the night sky -- has identified its first "potentially hazardous" asteroid, a term for space rocks in Earth's vicinity that scientists like to keep an eye on. The roughly 600-foot-long asteroid, designated 2022 SF289, was discovered during a test drive of the algorithm with the ATLAS survey in Hawaii. Finding 2022 SF289, which poses no risk to Earth for the foreseeable future, confirms that the next-generation algorithm, known as HelioLinc3D, can identify near-Earth asteroids with fewer and more dispersed observations than required by today's methods.

"By demonstrating the real-world effectiveness of the software that Rubin will use to look for thousands of yet-unknown potentially hazardous asteroids, the discovery of 2022 SF289 makes us all safer," said Rubin scientist Ari Heinze, the principal developer of HelioLinc3D and a researcher at the University of Washington.

The solar system is home to tens of millions of rocky bodies, ranging from small asteroids no larger than a few feet to dwarf planets the size of our moon. These objects remain from an era over four billion years ago, when the planets in our system formed and took their present-day positions.

Most of these bodies are distant, but a number orbit close to the Earth, and are known as near-Earth objects, or NEOs. The closest of these -- those with a trajectory that takes them within about 5 million miles of Earth's orbit, or about 20 times the distance from Earth to the moon -- warrant special attention. Such "potentially hazardous asteroids," or PHAs, are systematically searched for and monitored to ensure they won't collide with Earth, a potentially devastating event.

Scientists search for PHAs using specialized telescope systems like the NASA-funded ATLAS survey, run by a team at the University of Hawaii's Institute for Astronomy. They do so by taking images of parts of the sky at least four times every night. A discovery is made when they notice a point of light moving unambiguously in a straight line over the image series. Scientists have discovered about 2,350 PHAs using this method, but estimate that at least as many more await discovery.

From its peak in the Chilean Andes, the Vera C. Rubin Observatory is set to join the hunt for these objects in early 2025. Funded primarily by the U.S. National Science Foundation and the U.S. Department of Energy, Rubin's observations will dramatically increase the discovery rate of PHAs. Rubin will scan the sky unprecedentedly quickly with its 8.4-meter mirror and massive 3,200-megapixel camera, visiting spots on the sky twice per night rather than the four times needed by present telescopes. But with this novel observing "cadence," researchers need a new type of discovery algorithm to reliably spot space rocks.

Rubin's solar system software team at the University of Washington's DiRAC Institute has been working to develop just such codes. Working with Smithsonian senior astrophysicist and Harvard University lecturer Matthew Holman, who in 2018 pioneered a new class of heliocentric asteroid search algorithms, Heinze and Siegfried Eggl, a former University of Washington researcher who is now an assistant professor at the University of Illinois at Urbana-Champaign, developed HelioLinc3D: a code that could find asteroids in Rubin's dataset. With Rubin still under construction, Heinze and Eggl wanted to test HelioLinc3D to see if it could discover a new asteroid in existing data, one with too few observations to be discoverable by today's conventional algorithms.

John Tonry and Larry Denneau, lead ATLAS astronomers, offered their data for a test. The Rubin team set HelioLinc3D to search through this data and on July 18, 2023 it spotted its first PHA: 2022 SF289, initially imaged by ATLAS on September 19, 2022 at a distance of 13 million miles from Earth.

In retrospect, ATLAS had observed 2022 SF289 three times on four separate nights, but never the requisite four times on one night to be identified as a new NEO. But these are just the occasions where HelioLinc3D excels: It successfully combined fragments of data from all four nights and made the discovery.
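A toy illustration of the linking idea (not the actual HelioLinc3D heliocentric algorithm, which works in three dimensions): given sparse detections spread over several nights, test whether they are consistent with a single object in steady motion by fitting a line to position versus time and checking the residuals. All detection values and the tolerance below are invented for illustration:

```python
# Toy multi-night linking: fit constant-rate motion x(t) = x0 + v*t to sparse
# detections via least squares, and accept the link if residuals are small.
def fits_linear_motion(detections, tol=0.01):
    """detections: list of (t_days, position_deg) pairs."""
    n = len(detections)
    ts = [t for t, _ in detections]
    xs = [x for _, x in detections]
    t_mean, x_mean = sum(ts) / n, sum(xs) / n
    v = (sum((t - t_mean) * (x - x_mean) for t, x in detections)
         / sum((t - t_mean) ** 2 for t in ts))
    x0 = x_mean - v * t_mean
    residual = max(abs(x - (x0 + v * t)) for t, x in detections)
    return residual < tol

# Three nights of sparse detections, consistent with steady motion:
linked = fits_linear_motion([(0.0, 10.00), (1.0, 10.25), (3.0, 10.75)])
# A stray third detection breaks the link:
stray = fits_linear_motion([(0.0, 10.00), (1.0, 10.25), (3.0, 11.90)])
print(linked, stray)  # True False
```

The real algorithm must also reject chance alignments among vast numbers of unrelated detections, which is what makes the problem computationally hard at Rubin's scale.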

"Any survey will have difficulty discovering objects like 2022 SF289 that are near its sensitivity limit, but HelioLinc3D shows that it is possible to recover these faint objects as long as they are visible over several nights," said Denneau. "This in effect gives us a 'bigger, better' telescope."

Other surveys had also missed 2022 SF289, because it was passing in front of the rich starfields of the Milky Way. But by now knowing where to look, additional observations from Pan-STARRS and Catalina Sky Survey quickly confirmed the discovery. The team used B612 Asteroid Institute's ADAM platform to recover further unrecognized observations by the NSF-supported Zwicky Transient Facility telescope.

2022 SF289 is classified as an Apollo-type NEO. Its closest approach brings it within 140,000 miles of Earth's orbit, closer than the moon. Its diameter of 600 feet is large enough to be classified as "potentially hazardous." But despite its proximity, projections indicate that it poses no danger of hitting Earth for the foreseeable future. Its discovery has been announced in the International Astronomical Union's Minor Planet Electronic Circular MPEC 2023-O26.

Currently, scientists know of 2,350 PHAs but expect there are more than 3,000 yet to be found.

Read more at Science Daily

How to distinguish slow and fast earthquakes

Researchers from the University of Tokyo and Stanford University show what differentiates slow and fast earthquakes and how their magnitudes vary with time.

Normally, earthquakes last up to a few minutes and radiate strong seismic waves. But around 23 years ago, scientists discovered an unusual slow-slip phenomenon called slow earthquakes. Slow earthquakes last days or even months. Though they involve significant tectonic movement, you may never feel them. Since slow earthquakes could indicate future fast earthquakes, monitoring and understanding them helps accurately forecast devastating earthquakes and tsunamis.

Understanding them requires knowing how they change over time. For that, researchers use scaling laws which define the relationship between two quantities over a wide interval. In 2007, researchers proposed a controversial scaling law relating the magnitude and duration of earthquakes, which can help differentiate slow and fast earthquakes.

According to the scaling law, for slow earthquakes, the duration of the event increases in direct proportion to its magnitude (measured by a quantity called the seismic moment). For fast earthquakes, the relation is cubic rather than linear: the seismic moment grows with the cube of the duration, which means it increases very rapidly in a short time.
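The two scaling regimes can be illustrated with a short sketch. The rate constants below are arbitrary placeholders chosen for illustration; real prefactors come from seismic observations:

```python
# Illustrative constants only (assumed for this sketch, not observed values).
SLOW_RATE = 1e12   # seismic moment per second, slow regime
FAST_RATE = 1e16   # seismic moment per second cubed, fast regime

def slow_moment(duration_s):
    """Slow earthquakes: seismic moment grows linearly with duration."""
    return SLOW_RATE * duration_s

def fast_moment(duration_s):
    """Fast earthquakes: seismic moment grows with the cube of duration."""
    return FAST_RATE * duration_s ** 3

# Doubling the duration doubles a slow event's moment...
assert slow_moment(20) / slow_moment(10) == 2
# ...but multiplies a fast event's moment by 8.
assert fast_moment(20) / fast_moment(10) == 8
```

This is why a fast earthquake accumulates moment "very rapidly in a short time" compared with a slow one of the same duration.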

The scaling law received criticism from other researchers and raised questions about the likelihood of events in between slow and fast earthquakes that do not fall within the law. Seismologists Satoshi Ide of the University of Tokyo and Gregory Beroza of Stanford University now bolster the scaling law with more data, reinterpret the scaling relation, and address the controversy.

"Most of the challenges to the scaling law were problematic, but we have had no chance to disprove their challenges," says Ide. "A surprise was that totally erratic results were published in Nature, and believed by many scientists, who made further problematic numerical models."

With the advent of new seismic detection technology and data accumulated over 16 years, Ide and Beroza now reason that most arguments against the law had improper data calculations and were inconsistent given their data constraints. They suggest the presence of a speed limit to slow earthquakes and reveal physical processes that differentiate slow and fast earthquakes.

Many, but all the same

Since slow earthquakes include phenomena spanning different frequency bands, they are more diverse than fast earthquakes. They have been given different names, such as low-frequency earthquakes, tectonic tremors, very low-frequency earthquakes, and slow slip events. As a result, researchers observing one type of slow earthquake often considered the other types irrelevant. "Our study confirmed that all these phenomena are mutually connected, or rather regarded as a single phenomenon that radiates various signals," explains Ide.

Slow slips, but not so fast

Slow earthquakes are so subtle and inaccessible that detecting and monitoring them is challenging. Due to this detection bias, only sufficiently large slow earthquakes are observed. That prompted Ide and Beroza to propose an upper limit to the speed of slow earthquakes and to redefine the 2007 scaling law with this maximum value as a constraint. By showing continuous evidence for the scaling law over time scales ranging from less than a second to more than a year, they aim to put an end to the debate.

How are slow and fast earthquakes different?

When Ide's group proposed the scaling law in 2007, they were unsure of what makes these two earthquake types different. Now, with more data and theoretical models, Ide and Beroza show that the scaling differences reflect the distinct physical processes governing the events: diffusion processes govern slow earthquakes, whereas seismic wave propagation dictates fast earthquakes. Because of this difference, the magnitude of a slow earthquake cannot grow as large as that of a fast earthquake lasting the same time.

"We pointed out that 'diffusion' is important in slow earthquakes, but what is physically diffusing is not well understood," says Ide.

Read more at Science Daily

Sun 'umbrella' tethered to asteroid might help mitigate climate change

Earth is rapidly warming and scientists are developing a variety of approaches to reduce the effects of climate change. István Szapudi, an astronomer at the University of Hawaiʻi Institute for Astronomy, has proposed a novel approach -- a solar shield to reduce the amount of sunlight hitting Earth, combined with a tethered, captured asteroid as a counterweight. Engineering studies using this approach could start now to create a workable design that could mitigate climate change within decades.

The paper, "Solar radiation management with a tethered sun shield," is published in Proceedings of the National Academy of Sciences.

One of the simplest approaches to reducing the global temperature is to shade the Earth from a fraction of the Sun's light. This idea, called a solar shield, has been proposed before, but the large amount of weight needed to make a shield massive enough to balance gravitational forces and prevent solar radiation pressure from blowing it away makes even the lightest materials prohibitively expensive. Szapudi's creative solution consists of two innovations: a tethered counterweight instead of just a massive shield, which reduces the total mass by a factor of more than 100, and the use of a captured asteroid as the counterweight to avoid launching most of the mass from Earth.

"In Hawaiʻi, many use an umbrella to block the sunlight as they walk about during the day. I was thinking, could we do the same for Earth and thereby mitigate the impending catastrophe of climate change?" Szapudi said.

Szapudi began with the goal of reducing solar radiation by 1.7%, an estimate of the amount needed to prevent a catastrophic rise in global temperatures. He found that placing a tethered counterbalance toward the Sun could reduce the weight of the shield and counterweight to approximately 3.5 million tons, about one hundred times lighter than previous estimates for an untethered shield.

While this number is still far beyond current launch capabilities, only 1% of the weight -- about 35,000 tons -- would be the shield itself, and that is the only part that needs to be launched from Earth. With newer, lighter materials, the mass of the shield can be reduced even further. The remaining 99% of the total mass would be asteroids or lunar dust used as a counterweight. Such a tethered structure would be faster and cheaper to build and deploy than other shield designs.
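The mass budget quoted above works out as a simple arithmetic check using the article's figures (the variable names are ours):

```python
total_mass_tons = 3.5e6      # tethered shield + counterweight, per Szapudi's estimate
shield_fraction = 0.01       # only the shield itself is launched from Earth

shield_mass = total_mass_tons * shield_fraction
counterweight_mass = total_mass_tons - shield_mass

print(shield_mass)           # 35000.0 tons launched from Earth
print(counterweight_mass)    # 3465000.0 tons of captured asteroid or lunar dust

# An untethered shield was previously estimated at roughly 100x the total mass:
untethered_estimate = total_mass_tons * 100
print(untethered_estimate)   # 350000000.0 tons, far beyond launch capability
```

The point of the tether is visible in the numbers: the launched component shrinks to 35,000 tons, four orders of magnitude below the untethered estimate.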

Read more at Science Daily

Half the population to have a mental health disorder by 75

A global study co-led by researchers from The University of Queensland and Harvard Medical School has found one in two people will develop a mental health disorder in their lifetime.

Professor John McGrath from UQ's Queensland Brain Institute, Professor Ronald Kessler from Harvard Medical School, and their colleagues from 27 other countries, analysed data from more than 150,000 adults across 29 countries between 2001 and 2022, taken from the largest ever coordinated series of face-to-face interviews -- the World Health Organisation's World Mental Health Survey initiative.

Lead author Professor McGrath said the results demonstrate the high prevalence of mental health disorders, with 50 per cent of the population developing at least one disorder by the age of 75.

"The most common were mood disorders such as major depression or anxiety," Professor McGrath said.

"We also found the risk of certain mental disorders differed by sex."

The 3 most common mental health disorders among women:

  • Depression
  • Specific phobia (a disabling anxiety that interferes with daily life)
  • Post-traumatic stress disorder (PTSD)

The 3 most common mental health disorders among men:

  • Alcohol abuse
  • Depression
  • Specific phobia

The research also found mental health disorders typically first emerge in childhood, adolescence or young adulthood.

"The peak age of first onset was at 15 years old, with a median age of onset of 19 for men and 20 for women," Professor McGrath said.

"This lends weight to the need to invest in basic neuroscience to understand why these disorders develop."

Professor Kessler said investment was also needed in mental health services with a particular focus on young people.

"Services need to be able to detect and treat common mental disorders promptly, and be optimised to suit patients in these critical parts of their lives," Professor Kessler said.

"By understanding the age at which these disorders commonly arise, we can tailor public health interventions and allocate resources to ensure that appropriate and timely support is available to individuals at risk."

Read more at Science Daily

Jul 31, 2023

New clues on the source of the universe's magnetic fields

It isn't just your refrigerator that has magnets on it. The earth, the stars, galaxies, and the space between galaxies are all magnetized, too. The more places scientists have looked for magnetic fields across the universe, the more they've found them. But the question of why that is the case and where those magnetic fields originate from has remained a mystery and a subject of ongoing scientific inquiry.

A new paper by Columbia researchers offers insight into the source of these fields. The team used models to show that magnetic fields may spontaneously arise in turbulent plasma. Plasma is a kind of matter often found in ultra-hot environments like that near the surface of the sun, but plasma is also scattered across the universe in low-density environments, like the expansive space between galaxies; the team's research focused on those low-density environments. Their simulations showed that, in addition to generating new magnetic fields, the turbulence of those plasmas can also amplify magnetic fields once they've been generated, which helps explain how magnetic fields that originate on small scales can eventually stretch across vast distances.

The paper was written by astronomy professor Lorenzo Sironi, astronomy research scientist Luca Comisso, and astronomy doctoral candidate Ryan Golant.

Read more at Science Daily

Insolation affected ice age climate dynamics

In past ice ages, the intensity of summer insolation affected the emergence of warm and cold periods and played an important role in triggering abrupt climate changes, a study by climate researchers, geoscientists, and environmental physicists suggests. Using stalagmites in the European Alps, they were able to demonstrate that warm phases appeared primarily when the summer insolation reached maxima in the Northern Hemisphere. Study participants included scientists from Germany, Austria, and Switzerland led by researchers from Heidelberg University and the GFZ German Research Centre for Geosciences Potsdam.

Past ice ages in the Northern Hemisphere were marked by sudden transitions between cold and warm phases, each lasting several thousand years. The reason for these fluctuations has yet to be resolved, but research does point to effects relating to the size of the continental ice sheets. Greenland ice records 25 such warm-cold cycles between 115,400 and 14,700 years ago. Investigating stalagmites in the Melchsee-Frutt cave system in the Swiss Alps, the researchers were able to investigate for the first time and with high precision 16 such fluctuations in the penultimate glacial period 185,000 to 130,000 years ago.

Stalagmites in caves are crucial archives in climate research and offer clues as to changes in temperature, precipitation, and vegetation cover. "We are able to precisely determine their age and hence analyse the chronological sequence of abrupt ice age climate fluctuations, which we identify using oxygen isotope values," explains Prof. Dr Norbert Frank of the Institute of Environmental Physics at Heidelberg University. "Our investigations targeted whether, in addition to ice volumes in the Northern Hemisphere, orbitally driven changes in the global distribution of insolation could have influenced the abrupt changes in climate," states study head Dr Jens Fohlmeister, who earned his doctorate in environmental physics at Heidelberg University and worked at the GFZ German Research Centre for Geosciences Potsdam and the Potsdam Institute for Climate Impact Research during the investigations.

The researchers studied the transitions of warm-cold cycles in the penultimate ice age by analysing the age and oxygen isotope composition of stalagmites from the Melchsee-Frutt cave system. "Based on the newly acquired data, we were able to show that warm phases occurred mainly during the peak phase of summer insolation in the Northern Hemisphere, even when the sea level, which is dependent on the volume of the continental ice sheets, remained close to its minimum during peak glacial periods," explains Dr Fohlmeister. Model simulations confirmed these findings. In accordance with the research data from the cave system, the simulations predict the frequency as well as the duration of warm phases at the corresponding sea level and existing insolation.

Read more at Science Daily

Researchers create total synthesis of HIV replication inhibitor

Having control over how a dish is cooked is always a good idea. Taking a hint from the kitchen, scientists appear to have discovered a way to synthesize the true structure of the rare, naturally occurring anti-HIV compound Lancilactone C from start to finish.

Its non-cytotoxicity in mammals could make this triterpenoid an ideal candidate for treating AIDS if its biological activity were clear -- and if only it were abundant in nature.

Now, a research group at Kyoto University has succeeded in creating a domino-like synthesis of Lancilactone C's unique seven-membered ring structure.

"Our synthetic method revealed that the proposed structure of Lancilactone C was initially incorrect," says Chihiro Tsukano of Kyoto University's Graduate School of Agriculture. "But we successfully derived its true structure from our spectral data and understanding of its biosynthesis."

In addition to this revelation, Tsukano realized that the electrocyclization -- a rearrangement reaction in organic chemistry -- used in the total synthesis also occurs in biosynthesis. Ironically, it remains a mystery whether the proposed structure containing an unsaturated seven-membered ring might exist in nature as an analog -- or equivalent compound -- and how it might affect the expression of biological activity.

Tsukano's team utilized the domino-like reaction to enable the total synthesis of lancilactones and related triterpenoids. This outcome has inspired the team to further their research in optimizing compound structures, leading to possible development of novel antivirals.

The endless loop of required medication and multi-drug therapies often correlates with a lower quality of life for economically burdened patients.

Read more at Science Daily

Routinely drinking alcohol may raise blood pressure even in adults without hypertension

Even in adults without hypertension, blood pressure readings may climb more steeply over the years as the number of daily alcoholic drinks rise, according to an analysis of seven international research studies published today in Hypertension, an American Heart Association journal.

With the statistical power of seven international research studies, this analysis confirms for the first time that blood pressure measures rose continuously in participants with both low and high alcohol intake. Even low levels of alcohol consumption were associated with detectable increases in blood pressure levels that may lead to a higher risk of cardiovascular events.

"We found no beneficial effects in adults who drank a low level of alcohol compared to those who did not drink alcohol," said senior study author Marco Vinceti, M.D., Ph.D., a professor of epidemiology and public health in the Medical School of the University of Modena and Reggio Emilia University in Italy and an adjunct professor in the department of epidemiology at Boston University's School of Public Health. "We were somewhat surprised to see that consuming an already-low level of alcohol was also linked to higher blood pressure changes over time compared to no consumption -- although far less than the blood pressure increase seen in heavy drinkers."

"Our analysis was based on grams of alcohol consumed and not just on the number of drinks to avoid the bias that might arise from the different amount of alcohol contained in 'standard drinks' across countries and/or types of beverages," said study co-author Tommaso Filippini, M.D., Ph.D., an associate professor of epidemiology and public health in the Medical School of the University of Modena and Reggio Emilia in Italy, and affiliate researcher at the University of California Berkeley School of Public Health.

Researchers reviewed the health data for all participants across the seven studies for more than five years. They compared adults who drank alcohol regularly with non-drinkers and found:

  • Systolic (top number) blood pressure rose 1.25 millimeters of mercury (mm Hg) in people who consumed an average of 12 grams of alcohol per day, rising to 4.9 mm Hg in people consuming an average of 48 grams of alcohol per day. (In the U.S., 12 ounces of regular beer, 5 ounces of wine or a 1.5 ounce shot of distilled spirits contains about 14 grams of alcohol. Usual alcohol content differs in alcohol available in other countries.)
  • Diastolic (bottom number) blood pressure rose 1.14 mm Hg in people consuming an average of 12 grams of alcohol per day, rising to 3.1 mm Hg in people consuming an average of 48 grams of alcohol per day. These associations were seen in males but not in females. Diastolic blood pressure measures the force against artery walls between heartbeats and is not as strong a predictor of heart disease risk in comparison to systolic.
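To relate these figures to everyday amounts, the two reported systolic data points can be interpolated. The straight-line interpolation between 12 g and 48 g is our assumption for illustration; the study itself modeled grams of alcohol directly:

```python
# Reported associations from the pooled analysis.
grams = [12, 48]                   # average daily alcohol intake, grams
systolic_rise_mmhg = [1.25, 4.90]  # associated rise in systolic blood pressure

def interp(x, xs, ys):
    """Straight-line interpolation between the two reported data points."""
    slope = (ys[1] - ys[0]) / (xs[1] - xs[0])
    return ys[0] + slope * (x - xs[0])

# One US "standard drink" (12 oz beer, 5 oz wine, 1.5 oz spirits) is ~14 g of alcohol:
print(round(interp(14, grams, systolic_rise_mmhg), 2))  # 1.45 mm Hg
```

Under this rough linear reading, a single daily standard drink sits near the low end of the observed range, around 1.45 mm Hg of systolic rise.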


"Alcohol is certainly not the sole driver of increases in blood pressure; however, our findings confirm it contributes in a meaningful way. Limiting alcohol intake is advised, and avoiding it is even better," Vinceti said.

Although none of the participants had high blood pressure when they enrolled in the studies, their blood pressure measurements at the beginning did have an impact on the alcohol findings.

"We found participants with higher starting blood pressure readings had a stronger link between alcohol intake and blood pressure changes over time. This suggests that people with a trend towards increased (although still not "high") blood pressure may benefit the most from low to no alcohol consumption," said study co-author Paul K. Whelton, M.D., M.Sc., the Show Chwan Chair in Global Public Health in the department of epidemiology at Tulane University's School of Public Health and Tropical Medicine in New Orleans and president of the World Hypertension League. Whelton is also the chair of the American Heart Association's 2017 Hypertension Practice Guidelines and a member of the writing committee for the Association's 2021 Scientific Statement on Management of Stage 1 Hypertension in Adults.

According to American Heart Association recommendations, if you don't drink already, don't start. If you do drink, talk with your doctor about the benefits and risks of consuming alcohol in moderation. The Association also does not recommend drinking any form of alcohol to gain potential health benefits. Instead, follow the Association's lifestyle and health metrics for optimal cardiovascular health called Life's Essential 8: eat healthy food, be physically active, don't smoke, get enough sleep, maintain a healthy weight, and control cholesterol, blood sugar and blood pressure levels.

Read more at Science Daily

Jul 30, 2023

Self-healing plastic becomes biodegradable

Imagine a plastic like this: harder than common plastics, non-flammable, and even with self-healing properties. But that is not all! It can be produced at room temperature in water, which is very energy-efficient and does not require toxic solvents. Before hardening, you can shape the plastic in any way you want -- like chewing gum. By adding water, it can also be converted back to its "chewing gum" form at any time, reshaped and thus recycled as often as desired.

Is that impossible? No, it is not! In 2016, the research team around Konstanz chemist Helmut Cölfen presented just such a material -- a mineral plastic. However, even though the plastic, with its novel manufacturing process and outstanding material properties, has since attracted great interest from industry, it still had a crucial shortcoming from the Konstanz chemists' point of view: due to its chemical composition, it was difficult to biodegrade.

A new ingredient for greater environmental compatibility

"Previously, we used polyacrylic acid to produce our mineral plastic. Chemically, this acid has the same backbone as polyethylene, which is known to cause major problems in the environment because it is hardly biodegradable," explains Cölfen. The research team led by Cölfen and Ilesha Avasthi, a postdoc in Cölfen's lab, therefore set to work looking for an alternative basic building block to develop an environmentally compatible mineral plastic that retains the intriguing properties of the original material. And they found what they were looking for.

In their current publication in the journal Small Methods, the Konstanz chemists present the next generation of their mineral plastic. Instead of petroleum-based ingredients such as polyacrylic acid, they now use polyglutamic acid. This natural biopolymer is readily available in large quantities and can even be obtained sustainably, for example from biotechnological production using microorganisms. A variety of microorganisms that already exist in the environment can degrade polyglutamic acid.

"Our new mineral plastic has the same positive properties as the previous one, but has the decisive advantage that its basic building block -- polyglutamic acid -- can be produced with the help of microorganisms and is completely biodegradable," says Helmut Cölfen.

Support from biologists

In order to prove that this biodegradability also applies to the new mineral plastic itself and not just to its individual components, the chemists enlisted the support of David Schleheck and postdoc Harry Lerner from the Department of Biology at the University of Konstanz. "Helmut Cölfen has created a new type of mineral plastic in his laboratory, and our task now was to make it disappear again with the help of microorganisms," says Schleheck with a smile.

Read more at Science Daily

'Time-traveling' pathogens in melting permafrost pose likely risk to environment

Ancient pathogens that escape from melting permafrost have real potential to damage microbial communities and might potentially threaten human health, according to a new study by Giovanni Strona of the European Commission Joint Research Centre and colleagues, published July 27 in the open-access journal PLOS Computational Biology.

The idea that "time-traveling" pathogens trapped in ice or hidden in remote laboratory facilities could break free to cause catastrophic outbreaks has inspired generations of novelists and screenwriters. While melting glaciers and permafrost are giving many types of dormant microbes the opportunity to re-emerge, the potential threats to human health and the environment posed by these microbes have been difficult to estimate.

In a new study, Strona's team quantified the ecological risks posed by these microbes using computer simulations. The researchers performed artificial evolution experiments where digital virus-like pathogens from the past invade communities of bacteria-like hosts. They compared the effects of invading pathogens on the diversity of host bacteria to diversity in control communities where no invasion occurred.

The team found that in their simulations, the ancient invading pathogens could often survive and evolve in the modern community, and about 3 percent became dominant. While most of the dominant invaders had little effect on the composition of the larger community, about 1 percent of the invaders yielded unpredictable results. Some caused up to one third of the host species to die out, while others increased diversity by up to 12 percent compared to the control simulations.

Read more at Science Daily

Gloomy climate calculation: Scientists predict a collapse of the Atlantic ocean current to happen mid-century

Important ocean currents that redistribute heat, cold and precipitation between the tropics and the northernmost parts of the Atlantic region will shut down around the year 2060 if current greenhouse gas emissions persist. This is the conclusion based on new calculations from the University of Copenhagen that contradict the latest report from the IPCC.

Contrary to what we may imagine about the impact of climate change in Europe, a colder future may be in store. In a new study, researchers from the University of Copenhagen's Niels Bohr Institute and Department of Mathematical Sciences predict that the system of ocean currents which currently distributes cold and heat between the North Atlantic region and tropics will completely stop if we continue to emit the same levels of greenhouse gases as we do today.

Using advanced statistical tools and ocean temperature data from the last 150 years, the researchers calculated that the ocean current, known as the Thermohaline Circulation or the Atlantic Meridional Overturning Circulation (AMOC), will collapse -- with 95 percent certainty -- between 2025 and 2095. This will most likely occur in 34 years, in 2057, and could result in major challenges, particularly warming in the tropics and increased storminess in the North Atlantic region.

"Shutting down the AMOC can have very serious consequences for Earth's climate, for example, by changing how heat and precipitation are distributed globally. While a cooling of Europe may seem less severe as the globe as a whole becomes warmer and heat waves occur more frequently, this shutdown will contribute to an increased warming of the tropics, where rising temperatures have already given rise to challenging living conditions," says Professor Peter Ditlevsen from the Niels Bohr Institute.

"Our result underscores the importance of reducing global greenhouse gas emissions as soon as possible," says the researcher.

The calculations, just published in the scientific journal, Nature Communications, contradict the message of the latest IPCC report, which, based on climate model simulations, considers an abrupt change in the thermohaline circulation very unlikely during this century.

Early warning signals present


The researchers' prediction is based on observations of early warning signals that ocean currents exhibit as they become unstable. These Early Warning Signals for the Thermohaline Circulation have been reported previously, but only now has the development of advanced statistical methods made it possible to predict just when a collapse will occur.

The researchers analysed sea surface temperatures in a specific area of the North Atlantic from 1870 to the present day. These sea surface temperatures are "fingerprints" testifying to the strength of the AMOC, which has only been measured directly for the past 15 years.
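The kind of early warning signal involved -- "critical slowing down," seen as rising variance and autocorrelation in a system's fluctuations as it nears a tipping point -- can be demonstrated on synthetic data. This is a generic illustration with assumed AR(1) parameters, not the authors' actual statistical method:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi):
    """AR(1) noise; phi near 1 mimics critical slowing down before a tipping point."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a standard early-warning indicator."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

stable = ar1_series(2000, phi=0.3)   # system far from the tipping point
destab = ar1_series(2000, phi=0.9)   # system approaching the tipping point

# Classic early warning signals: both autocorrelation and variance rise.
print(lag1_autocorr(stable) < lag1_autocorr(destab))  # True
print(stable.var() < destab.var())                    # True
```

In the study, indicators of this general kind are extracted from the sea-surface-temperature fingerprint and extrapolated statistically to estimate when the tipping point is reached.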

"Using new and improved statistical tools, we've made calculations that provide a more robust estimate of when a collapse of the Thermohaline Circulation is most likely to occur, something we had not been able to do before," explains Professor Susanne Ditlevsen of UCPH's Department of Mathematical Sciences.

The thermohaline circulation has operated in its present mode since the last ice age, during which the circulation was indeed collapsed. Abrupt climate jumps between the present state of the AMOC and the collapsed state have been observed to happen 25 times in connection with ice age climate. These are the famed Dansgaard-Oeschger events, first observed in ice cores from the Greenland ice sheet. During those events, climate changes were extreme, with shifts of 10-15 degrees over a decade, whereas present-day climate change amounts to 1.5 degrees of warming over a century.

Read more at Science Daily