Jul 14, 2023

James Webb Telescope catches glimpse of possible first-ever 'dark stars'

Stars beam brightly out of the darkness of space thanks to fusion: atomic nuclei melding together and releasing energy. But what if there's another way to power a star?

A team of three astrophysicists -- Katherine Freese at The University of Texas at Austin, in collaboration with Cosmin Ilie and Jillian Paulin '23 at Colgate University -- analyzed images from the James Webb Space Telescope (JWST) and found three bright objects that might be "dark stars," theoretical objects much bigger and brighter than our sun, powered by particles of dark matter annihilating. If confirmed, dark stars could reveal the nature of dark matter, one of the deepest unsolved problems in all of physics.

"Discovering a new type of star is pretty interesting all by itself, but discovering it's dark matter that's powering this -- that would be huge," said Freese, director of the Weinberg Institute for Theoretical Physics and the Jeff and Gail Kodosky Endowed Chair in Physics at UT Austin.

Although dark matter makes up about 25% of the universe, its nature has eluded scientists. Scientists believe it consists of a new type of elementary particle, and the hunt to detect such particles is on. Among the leading candidates are Weakly Interacting Massive Particles (WIMPs). When they collide, these particles annihilate one another, depositing heat into collapsing clouds of hydrogen and converting them into brightly shining dark stars. The identification of supermassive dark stars would open up the possibility of learning about dark matter based on their observed properties.

The research is published in the Proceedings of the National Academy of Sciences.

Follow-up observations from JWST of the objects' spectroscopic properties -- including dips or excess of light intensity in certain frequency bands -- could help confirm whether these candidate objects are indeed dark stars.

Confirming the existence of dark stars might also help solve a problem created by JWST: There seem to be too many large galaxies too early in the universe to fit the predictions of the standard model of cosmology.

"It's more likely that something within the standard model needs tuning, because proposing something entirely new, as we did, is always less probable," Freese said. "But if some of these objects that look like early galaxies are actually dark stars, the simulations of galaxy formation agree better with observations."

The three candidate dark stars (JADES-GS-z13-0, JADES-GS-z12-0, and JADES-GS-z11-0) were originally identified as galaxies in December 2022 by the JWST Advanced Deep Extragalactic Survey (JADES). Using spectroscopic analysis, the JADES team confirmed the objects were observed at times ranging from about 320 million to 400 million years after the Big Bang, making them some of the earliest objects ever seen.
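As a rough illustration of what those redshifts imply, the age of the universe at a given redshift follows from a flat-ΛCDM integral. A minimal numerical sketch in Python, with cosmological parameter values assumed to be roughly the Planck 2018 ones (this is an illustrative back-of-the-envelope calculation, not the JADES team's pipeline):

```python
import math

# Assumed flat-LambdaCDM parameters (approximately Planck 2018 values).
H0 = 67.7                          # Hubble constant, km/s/Mpc
OMEGA_M = 0.31                     # matter density parameter
OMEGA_L = 1.0 - OMEGA_M            # dark-energy density (flat universe)
KM_S_MPC_TO_PER_GYR = 1.0 / 978.0  # 1 km/s/Mpc is about 1/978 Gyr^-1

def age_at_redshift(z, steps=100_000, z_max=10_000.0):
    """Age of the universe at redshift z, in Gyr.

    Integrates dt = dz' / ((1 + z') H(z')) from z out to z_max,
    which stands in for infinity. Radiation is neglected, which is
    a small correction for the epochs discussed here.
    """
    h0 = H0 * KM_S_MPC_TO_PER_GYR
    # Integrate in u = ln(1 + z') for stability at high redshift.
    u_lo, u_hi = math.log(1.0 + z), math.log(1.0 + z_max)
    du = (u_hi - u_lo) / steps
    total = 0.0
    for i in range(steps):
        u = u_lo + (i + 0.5) * du
        zp = math.exp(u) - 1.0
        hz = h0 * math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
        # dz' = (1 + z') du, so dz' / ((1 + z') H) reduces to du / H.
        total += du / hz
    return total

# JADES-GS-z13-0 was observed at redshift z ~ 13:
print(round(age_at_redshift(13.0), 2), "Gyr")  # roughly 0.33 Gyr, ~330 Myr
```

The result for z ≈ 13 lands near 330 million years after the Big Bang, consistent with the observation times quoted for the JADES objects.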

"When we look at the James Webb data, there are two competing possibilities for these objects," Freese said. "One is that they are galaxies containing millions of ordinary, population-III stars. The other is that they are dark stars. And believe it or not, one dark star has enough light to compete with an entire galaxy of stars."

Dark stars could theoretically grow to be several million times the mass of our sun and up to 10 billion times as bright as the sun.

"We predicted back in 2012 that supermassive dark stars could be observed with JWST," said Ilie, assistant professor of physics and astronomy at Colgate University. "As shown in our recently published PNAS article, we already found three supermassive dark star candidates when analyzing the JWST data for the four high-redshift JADES objects spectroscopically confirmed by Curtis-Lake et al., and I am confident we will soon identify many more."

The idea for dark stars originated in a series of conversations between Freese and Doug Spolyar, at the time a graduate student at the University of California, Santa Cruz. They wondered: What does dark matter do to the first stars to form in the universe? Then they reached out to Paolo Gondolo, an astrophysicist at the University of Utah, who joined the team. After several years of development, they published their first paper on this theory in the journal Physical Review Letters in 2008.

Together, Freese, Spolyar and Gondolo developed a model that goes something like this: At the centers of early protogalaxies, there would be very dense clumps of dark matter, along with clouds of hydrogen and helium gas. As the gas cooled, it would collapse and pull in dark matter along with it. As the density increased, the dark matter particles would increasingly annihilate, adding more and more heat, which would prevent the gas from collapsing all the way down to a dense enough core to support fusion as in an ordinary star. Instead, it would continue to gather more gas and dark matter, becoming big, puffy and much brighter than ordinary stars. Unlike ordinary stars, the power source would be evenly spread out, rather than concentrated in the core. With enough dark matter, dark stars could grow to be several million times the mass of our sun and up to 10 billion times as bright as the sun.

Read more at Science Daily

Fear is in the eye of the beholder

Averting our eyes from things that scare us may be due to a specific cluster of neurons in a visual region of the brain, according to new research at the University of Tokyo. Researchers found that in fruit fly brains, these neurons release a chemical called tachykinin, which appears to control the fly's movement to avoid facing a potential threat. Fruit fly brains can offer a useful analogy for larger mammals, so this research may help us better understand our own human reactions to scary situations and phobias. Next, the team wants to find out how these neurons fit into the wider circuitry of the brain so they can ultimately map out how fear controls vision.

Do you cover your eyes during horror movies? Or perhaps the sight of a spider makes you turn and run? Avoiding looking at things which scare us is a common experience, for humans and animals. But what actually makes us avert our gaze from the things we fear? Researchers have found that it may be due to a group of neurons in the brain which regulates vision when feeling afraid.

"We discovered a neuronal mechanism by which fear regulates visual aversion in the brains of Drosophila (fruit flies). It appears that a single cluster of 20-30 neurons regulates vision when in a state of fear. Since fear affects vision across animal species, including humans, the mechanism we found may be active in humans as well," explained Assistant Professor Masato Tsuji from the Department of Biological Sciences at the University of Tokyo.

The team used puffs of air to simulate a physical threat and found that the flies' walking speed increased after being puffed at. The flies also would choose a puff-free route if offered, showing that they perceived the puffs as a threat (or at least preferred to avoid them). Next the researchers placed a small black object, roughly the size of a spider, 60 degrees to the right or left of the fly. On its own the object didn't cause a change in behavior, but when placed following puffs of air, the flies avoided looking at the object and moved so that it was positioned behind them.

To understand the molecular mechanism underlying this aversion behavior, the team then used mutated flies in which they altered the activity of certain neurons. While the mutated flies kept their visual and motor functions, and would still avoid the air puffs, they did not respond in the same fearful manner to visually avoid the object.

"This suggested that the cluster of neurons which releases the chemical tachykinin was necessary for activating visual aversion," said Tsuji. "When monitoring the flies' neuronal activity, we were surprised to find that it occurred through an oscillatory pattern, i.e., the activity went up and down similar to a wave. Neurons typically function by just increasing their activity levels, and reports of oscillating activity are particularly rare in fruit flies because up until recently the technology to detect this at such a small and fast scale didn't exist."

By expressing genetically encoded calcium indicators in the flies' neurons, the researchers could make those neurons shine brightly when activated. Thanks to the latest imaging techniques, they then saw the changing, wavelike pattern of emitted light, which earlier methods had averaged out and missed.
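The masking effect of averaging is easy to demonstrate: identical oscillations with random phase offsets cancel in a trial average but are plain in any single trial. A toy sketch in Python (synthetic data, purely illustrative of the averaging point, not the study's analysis):

```python
import math
import random

random.seed(0)

# Simulate calcium-like activity: every "trial" carries the same 3-cycle
# oscillation, but with a random phase, as if recordings were not time-locked.
n_trials, n_samples = 200, 100
trials = []
for _ in range(n_trials):
    phase = random.uniform(0, 2 * math.pi)
    trials.append([math.sin(2 * math.pi * 3 * t / n_samples + phase)
                   for t in range(n_samples)])

# Trial-averaged trace: the waves cancel and the average looks flat.
avg = [sum(tr[t] for tr in trials) / n_trials for t in range(n_samples)]

# Peak-to-peak amplitude of the average vs. a single trial.
amp_single = max(trials[0]) - min(trials[0])
amp_avg = max(avg) - min(avg)

print(f"single-trial amplitude: {amp_single:.2f}")  # close to 2.0
print(f"averaged amplitude:     {amp_avg:.2f}")     # near zero
```

This is why per-trial, high-speed imaging matters: the oscillation survives in each recording but vanishes from the grand average.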

Next, the team wants to figure out how these neurons fit into the broader circuitry of the brain. Although the neurons exist in a known visual region of the brain, the researchers do not yet know from where the neurons are receiving inputs and to where they are transmitting them, to regulate visual escape from objects perceived as dangerous.

Read more at Science Daily

Multiple ecosystems in hot water after marine heatwave surges across the Pacific

Rising ocean temperatures are sweeping the seas, breaking records and creating problematic conditions for marine life. Unlike heatwaves on land, periods of abrupt ocean warming can surge for months or years. Around the world these 'marine heatwaves' have led to mass species mortality and displacement events, economic declines and habitat loss. New research reveals that even areas of the ocean protected from fishing are still vulnerable to these extreme events fueled by climate change.

A study published today in Global Change Biology, led by researchers at UC Santa Barbara, found that while California's network of marine protected areas (MPAs) provides many social and ecological benefits, it is not resilient to the effects of ocean warming. MPAs are locations in the ocean where human activities such as fishing are restricted to conserve and protect marine ecosystems, habitats, species and cultural resources. The study, part of a 10-year review of California's MPA network conducted at UCSB's National Center for Ecological Analysis & Synthesis (NCEAS), found that marine heatwaves impact ecological communities regardless of whether they are protected inside MPAs.

"MPAs in California and around the world have many benefits, such as increased fish abundance, biomass and diversity," said Joshua Smith, who led the study while he was a postdoctoral researcher at NCEAS. "But they were never designed to buffer the impacts of climate change or marine heatwaves."

Smith and co-authors from all over the world were part of an NCEAS working group formed to synthesize decades of long-term ecological monitoring data from California's diverse ocean habitats. The group, co-led by Jenn Caselle, a researcher with UCSB's Marine Science Institute, and Kerry Nickols, a professor from Cal State University Northridge who now works with the non-profit Ocean Visions, aimed to provide actionable scientific results to California's policy makers and natural resource managers, as part of a statewide Decadal Evaluation of the MPA network. Their analyses spanned the largest marine heatwave on record, which rolled through the Pacific Ocean toward California from 2014-2016. The monster marine heatwave was formed from an environmental double-whammy -- unusual ocean warming nicknamed "The Blob," followed by a major El Niño event that prolonged the sweltering sea temperatures. The marine heatwave blanketed the West Coast from Alaska to Baja and left a wake of altered food webs, collapsed fisheries, and shifted populations of marine life among various other consequences.

As MPA managers around the world face increasing climate shocks, the extent to which MPAs can buffer the worst of these events has become an important question. The working group scientists asked how the ecological communities in California's protected areas fared after such a severe and prolonged heatwave: Would the communities shift and if so, how? Would they 'bounce back' when the marine heatwave subsided? Could the marine protected areas protect sensitive populations or facilitate recovery?

To find answers to their questions, they synthesized over a decade of data collected from 13 no-take MPAs located in a variety of ecosystems along the Central Coast: rocky intertidal zones, kelp forests, shallow and deep rocky reefs. The team looked at fish, invertebrates and seaweed populations inside and outside these areas, using data from before, during and after the heatwave.

They also focused on two of these habitats, rocky intertidal and kelp forests, at 28 MPAs across the full statewide network to gauge whether these locations promoted one particular form of climate resilience -- maintaining both population and community structure.

"We used no-take MPAs as a type of comparison to see whether the protected ecological communities fared better to the marine heatwave than places where fishing occurred," said Smith, now an Ocean Conservation Research Fellow at Monterey Bay Aquarium.

The results are somewhat sobering, though not altogether unexpected.

"The MPAs did not facilitate resistance or recovery across habitats or across communities," Caselle said. "In the face of this unprecedented marine heatwave, communities did change dramatically in most habitats. But, with one exception, the changes occurred similarly both inside and outside the MPAs. The novelty of this study was that we saw similar results across many different habitats and taxonomic groups, from deepwater to shallow reefs and from fishes to algae."

The implication of these findings, according to Smith, is that every part of the ocean is under threat from climate change. "MPAs are effective in many of the ways they were designed, but our findings suggest that MPAs alone are not sufficient to buffer the effects of climate change."

The key question now is what happens next. At the time of this study, which used data through 2020, the ecological communities had not returned to their former, pre-heatwave state. According to the paper, these communities shifted toward a "pronounced decline in the relative proportion of cold-water species and an increase in warm water species." For example, increases in the abundance of the señorita fish (Oxyjulis californica), a subtropical, warm-water species previously rare in central California, had an outsized influence on the shift in communities. Whether these species persist in their new locations remains to be seen.

"This study makes it clear why long-term monitoring of California's MPAs is so critical," said Caselle. "Some of these time series are longer than 25 years at this point and the data are critical to understanding and readying human communities for the changes occurring in our marine communities." Continued study will show if future shifts in marine communities occur at different rates or to different base states in MPAs compared to fished areas.

Despite the limited ability of MPAs to resist the grip of the marine heatwave, they do confer benefits, not the least of which is the ability to study the complex effects of climate change in areas not impacted by fishing. As areas of minimal human interference that are regularly monitored, they present opportunities to study the response of marine ecosystems to shifting conditions and potentially tailor management techniques accordingly. Moreover, as Smith stated, "the ecological communities in MPAs are still being protected, even if they are different as a result of the heatwave. Given that marine heatwaves are anticipated to increase in frequency and magnitude into the future, swift climate action and nature-based solutions are needed as additional pathways to enhance the health of our oceans."

Kerry Nickols adds, "With the devastating impacts of climate change already apparent, it is very important that we are upfront about climate solutions -- as long as we are burning fossil fuels and warming the globe marine ecosystems will be at risk, even if they are protected from fishing."

Read more at Science Daily

Genes for learning and memory are 650 million years old

A team of scientists led by researchers from the University of Leicester have discovered that the genes required for learning, memory, aggression and other complex behaviours originated around 650 million years ago.

The findings led by Dr Roberto Feuda, from the Neurogenetic group in the Department of Genetics and Genome Biology and other colleagues from the University of Leicester and the University of Fribourg (Switzerland), have now been published in Nature Communications.

Dr Feuda said: "We've known for a long time that monoamines like serotonin, dopamine and adrenaline act as neuromodulators in the nervous system, playing a role in complex behaviour and functions like learning and memory, as well as processes such as sleep and feeding.

"However, less certain was the origin of the genes required for the production, detection, and degradation of these monoamines. Using computational methods, we reconstructed the evolutionary history of these genes and show that most of the genes involved in monoamine production, modulation, and reception originated in the bilaterian stem group.

"This finding has profound implications on the evolutionary origin of complex behaviours such as those modulated by monoamines we observe in humans and other animals."

The authors suggest that this new way of modulating neuronal circuits might have played a role in the Cambrian Explosion -- sometimes called evolution's 'Big Bang' -- which gave rise to most of the major animal groups alive today, by providing the neural-circuit flexibility needed to interact with the environment.

Read more at Science Daily

Jul 13, 2023

Webb celebrates first year of science with close-up on birth of sun-like stars

From our cosmic backyard in the solar system to distant galaxies near the dawn of time, NASA's James Webb Space Telescope has delivered on its promise of revealing the universe like never before in its first year of science operations. To celebrate the completion of a successful first year, NASA has released Webb's image of a small star-forming region in the Rho Ophiuchi cloud complex.

"In just one year, the James Webb Space Telescope has transformed humanity's view of the cosmos, peering into dust clouds and seeing light from faraway corners of the universe for the very first time. Every new image is a new discovery, empowering scientists around the globe to ask and answer questions they once could never dream of," said NASA Administrator Bill Nelson. "Webb is an investment in American innovation but also a scientific feat made possible with NASA's international partners that share a can-do spirit to push the boundaries of what is known to be possible. Thousands of engineers, scientists, and leaders poured their life's passion into this mission, and their efforts will continue to improve our understanding of the origins of the universe -- and our place in it."

The new Webb image released today features the nearest star-forming region to us. Its proximity at 390 light-years allows for a highly detailed close-up, with no foreground stars in the intervening space.

"On its first anniversary, the James Webb Space Telescope has already delivered upon its promise to unfold the universe, gifting humanity with a breathtaking treasure trove of images and science that will last for decades," said Nicola Fox, associate administrator of NASA's Science Mission Directorate in Washington. "An engineering marvel built by the world's leading scientists and engineers, Webb has given us a more intricate understanding of galaxies, stars, and the atmospheres of planets outside of our solar system than ever before, laying the groundwork for NASA to lead the world in a new era of scientific discovery and the search for habitable worlds."

Webb's image shows a region containing approximately 50 young stars, all of them similar in mass to the Sun, or smaller. The darkest areas are the densest, where thick dust cocoons still-forming protostars. Huge bipolar jets of molecular hydrogen, represented in red, dominate the image, appearing horizontally across the upper third and vertically on the right. These occur when a star first bursts through its natal envelope of cosmic dust, shooting out a pair of opposing jets into space like a newborn first stretching her arms out into the world. In contrast, the star S1 has carved out a glowing cave of dust in the lower half of the image. It is the only star in the image that is significantly more massive than the Sun.

"Webb's image of Rho Ophiuchi allows us to witness a very brief period in the stellar lifecycle with new clarity. Our own Sun experienced a phase like this, long ago, and now we have the technology to see the beginning of another star's story," said Klaus Pontoppidan, who served as Webb project scientist at the Space Telescope Science Institute in Baltimore, Maryland, since before the telescope's launch and through the first year of operations.

Some stars in the image display tell-tale shadows indicating protoplanetary disks -- potential future planetary systems in the making.

A Full Year, Across the Full Sky

From its very first deep field image, unveiled by President Joe Biden, Vice President Kamala Harris, and Nelson live at the White House, Webb has delivered on its promise to show us more of the universe than ever before. However, Webb revealed much more than distant galaxies in the early universe.

"The breadth of science Webb is capable of exploring really becomes clear now, when we have a full year's worth of data from targets across the sky," said Eric Smith, associate director for research in the Astrophysics Division at NASA Headquarters and Webb program scientist. "Webb's first year of science has not only taught us new things about our universe, but it has revealed the capabilities of the telescope to be greater than our expectations, meaning future discoveries will be even more amazing." The global astronomy community has spent the past year excitedly poring over Webb's initial public data and getting a feel for how to work with it.

Beyond the stunning infrared images, what really has scientists excited are Webb's crisp spectra -- the detailed information that can be gleaned from light by the telescope's spectroscopic instruments. Webb's spectra have confirmed the distances of some of the farthest galaxies ever observed, and have discovered the earliest, most distant supermassive black holes. They have identified the compositions of planet atmospheres (or lack thereof) with more detail than ever before, and have narrowed down what kinds of atmospheres may exist on rocky exoplanets for the first time. They also have revealed the chemical makeup of stellar nurseries and protoplanetary disks, detecting water, organic carbon-containing molecules, and more. Already, Webb observations have resulted in hundreds of scientific papers answering longstanding questions and raising new ones to address with Webb.

The breadth of Webb science is also apparent in its observations of the region of space we are most familiar with -- our own solar system. Faint rings of gas giants appear out of the darkness, dotted by moons, while in the background Webb shows distant galaxies. By comparing detections of water and other molecules in our solar system with those found in the disks of other, much younger planetary systems, Webb is helping to build up clues about our own origins -- how Earth became the ideal place for life as we know it.

Read more at Science Daily

Marine fossils unearth story about Panama's deep past

Between 6.4 and 5.8 million years ago, most of the land bridge that connects North and South America had already emerged and the channels connecting both Pacific and Atlantic oceans were shallow. Recent fossil discoveries in the northern Panama Canal area suggest that marine species interchange persisted across these shallow waters during the final stages of formation of the isthmus.

In 2017 and 2019, Aldo Benites-Palomino was studying fossils collected in Caribbean Panama, when he came across some unexpected specimens. He was a biology student in Perú, where his training had been very classical. As an intern and later a fellow at the Smithsonian Tropical Research Institute (STRI), his mindset shifted. His mentor, STRI staff scientist and paleobiologist Carlos Jaramillo, encouraged his students to change their focus when looking at fossils: instead of thinking about specimens or methods, to think about the questions that the fossils could help answer.

"I wanted to go to STRI because it is the most important tropical biology center in the world," said Benites-Palomino. "There I was able to learn a lot about the way biology and ecology is done in the modern world."

The fossil remains belonged to small-sized cetaceans, a group of aquatic mammals that includes whales and dolphins, and the specimens were new for the region. Most of them had been collected by Carlos de Gracia from STRI and Jorge Velez Juarbe from the Los Angeles Museum of Natural History, both co-authors of a new paper published in Biology Letters. In the article, Benites-Palomino and his colleagues go beyond describing the specimens; they also unearth the story these fossils tell about the isthmus' deep past.

The fossils belonged to the Late Miocene, around 6.4 to 5.8 million years ago, when the final stages of formation of the isthmus had already started. This event affected oceanic waters and marine currents across the globe and triggered speciation events, where species separated by the land bridge developed their own unique characteristics on either ocean.

However, these cetaceans found in Caribbean Panama shared similarities with other Late Miocene species from the North and South Pacific Ocean, particularly the Pisco Formation in Peru, suggesting that some organisms were still able to disperse via the shallowing seaway at a time when deep water interchange between both oceans was no longer occurring.

The lack of fossil marine mammals from the western Caribbean has thus far hampered understanding of the region's deep past, so these new findings help strengthen current knowledge regarding the connectivity between the Pacific and Caribbean marine faunas during the final phases of formation of the isthmus.

Read more at Science Daily

One third of normal-weight individuals are obese, according to study based on body fat percentage

Researchers from the School of Public Health at Tel Aviv University's (TAU) Faculty of Medicine examined the anthropometric data of about 3,000 Israeli women and men and concluded that body fat percentage is a much more reliable indicator of an individual's overall health and cardiometabolic risk than the BMI index, widely used in clinics today. The researchers suggest that body fat percentage should become the gold standard in this respect and recommend equipping clinics all over Israel with suitable devices.

The study -- the largest of its kind ever conducted in Israel -- was led by Prof. Yftach Gepner and PhD student Yair Lahav, in collaboration with Aviv Kfir. It was based on data from the Yair Lahav Nutrition Center in Tel Aviv. The paper was published in Frontiers in Nutrition.

Prof. Gepner: "Israel is a leader in childhood obesity and more than 60% of the country's adults are defined as overweight. The prevailing index in this respect is BMI, based on weight and height measures, which is considered a standard indicator of an individual's general health. However, despite the obvious intuitive connection between excess weight and obesity, the actual measure for obesity is the body's fat content, with the maximum normal values set at 25% for males and 35% for females. Higher fat content is defined as obesity and can cause a range of potentially life-threatening cardiometabolic diseases: heart disease, diabetes, fatty liver, kidney dysfunction, and more. The disparity between the two indexes has generated a phenomenon called 'the paradox of obesity with normal weight' -- higher than normal body fat percentage in normal-weight individuals. In this study we examined the prevalence of this phenomenon in Israel's adult population."

The researchers analyzed the anthropometric data of 3,000 Israeli women and men, accumulated over several years: BMI scores; DXA scans (using X-rays to measure body composition, including fat content); and cardiometabolic blood markers. About one third of the participants, 1,000 individuals, were found to be within the normal weight range. Of these, 38.5% of the women and 26.5% of the men were identified as 'obese with normal weight' -- having excess fat content despite their normal weight. Matching body fat percentage with blood markers for each of these individuals, the study found a significant correlation between 'obesity with normal weight' and high levels of sugar, fat, and cholesterol -- major risk factors for a range of cardiometabolic diseases. At the same time, 30% of the men and 10% of the women identified as overweight were found to have a normal body fat percentage.
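The mismatch between the two measures can be expressed as a short sketch. The body-fat cutoffs (25% for men, 35% for women) come from the article; the conventional BMI band and the function itself are illustrative:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

# Maximum normal body-fat percentages cited in the study.
FAT_CUTOFF = {"male": 25.0, "female": 35.0}

def classify(weight_kg, height_m, body_fat_pct, sex):
    """Contrast the BMI-based label with the body-fat-based one."""
    b = bmi(weight_kg, height_m)
    normal_weight = 18.5 <= b < 25.0            # conventional 'normal' BMI band
    obese_by_fat = body_fat_pct > FAT_CUTOFF[sex]
    if normal_weight and obese_by_fat:
        return "normal-weight obesity"          # passes 'under the radar' of BMI
    if b >= 25.0 and not obese_by_fat:
        return "overweight by BMI, normal body fat"
    return "concordant"

# A 1.75 m, 70 kg man has BMI ~22.9 (normal); at 28% body fat he is
# nonetheless obese by the fat-percentage standard:
print(classify(70, 1.75, 28, "male"))  # normal-weight obesity
```

On the study's numbers, roughly 38.5% of normal-weight women and 26.5% of normal-weight men would land in the first branch, while some participants labeled overweight by BMI would land in the second.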

Prof. Gepner: "Our findings were somewhat alarming, indicating that obesity with normal weight is much more common in Israel than we had assumed. Moreover, these individuals, being within the norm according to the prevailing BMI index, usually pass 'under the radar'. Unlike people who are identified as overweight, they receive no treatment or instructions for changing their nutrition or lifestyle -- which places them at an even greater risk for cardiometabolic diseases."

Based on their findings, the researchers concluded that body fat percentage is a more reliable indicator of an individual's general health than BMI. Consequently, they suggest that body fat percentage should become the prevailing standard of health, and recommend some convenient and accessible tools for this purpose: skinfold measurements that estimate body fat based on the thickness of the fat layer under the skin; and a user-friendly device measuring the body's electrical conductivity, already used in many fitness centers.

Read more at Science Daily

Detailed map of the heart provides new insights into cardiac health and disease

In a new study, published today (12 July) in Nature, researchers have produced the most detailed and comprehensive human Heart Cell Atlas to date, including the specialised tissue of the cardiac conduction system -- where the heartbeat originates.

The multi-centre team is led by the Wellcome Sanger Institute and the National Heart and Lung Institute at Imperial College London, and has also presented a new drug-repurposing computational tool called Drug2cell, which can provide insights into the effects of drugs on heart rate.

This study is part of the international Human Cell Atlas (HCA) initiative, which is mapping every cell type in the human body to transform our understanding of health and disease, and will form the foundation for a fully integrated HCA Human Heart Cell Atlas.

Charting eight regions of the human heart, the work describes 75 different cell states including the cells of the cardiac conduction system -- the group of cells responsible for the heartbeat -- not understood at such a detailed level in humans before. The human cardiac conduction system, the heart's 'wiring', sends electrical impulses from the top to the bottom of the heart and coordinates the heartbeat.

By using spatial transcriptomics, which gives a "map" of where cells sit within a tissue, researchers were also able to understand how these cells communicate with each other for the first time. This map acts as a molecular guidebook, showing what healthy cells look like, and providing a crucial reference to understand what goes wrong in disease. The findings will help understand diseases such as those affecting the heart rhythm.

The assembly of a Human Heart Cell Atlas is key given that cardiovascular diseases are the leading cause of death globally. Around 20,000 electronic pacemakers are implanted each year in the UK for heart rhythm disorders. These devices can be ineffective and are prone to complications and side effects. Understanding the biology of the cells of the conduction system and how they differ from muscle cells paves the way to therapies to boost cardiac health and develop targeted treatments for arrhythmias.

The team also presents a new computational tool called Drug2cell. The tool can predict drug targets as well as drug side effects. It leverages single-cell profiles and the 19 million drug-target interactions in the EMBL-EBI ChEMBL database.

Unexpectedly, this tool identified that pacemaker cells express the target of certain medications, such as GLP1 drugs, which are used for diabetes and weight loss and are known to increase the heart rate as a side-effect, the mechanism of which was unclear. This study suggests that the increase in heart rate might be partly due to a direct action of these drugs on pacemaker cells, a finding the team also showed in an experimental stem cell model of pacemaker cells.
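
The kind of lookup Drug2cell performs, matching a drug's known targets against the genes each cell type expresses, can be sketched minimally as follows (the drug names, gene sets and data structures here are invented for illustration, not taken from ChEMBL or the atlas):

```python
# Hypothetical sketch of a Drug2cell-style lookup: given drug-target pairs
# (as catalogued in databases like ChEMBL) and the genes each cell type
# expresses (from single-cell data), flag the cell types that express a
# drug's targets. All names and data below are invented.

drug_targets = {
    "GLP1_agonist": {"GLP1R"},
    "beta_blocker": {"ADRB1"},
}

celltype_genes = {
    "pacemaker_cell": {"GLP1R", "HCN1", "HCN4"},
    "ventricular_cardiomyocyte": {"ADRB1", "MYH7"},
}

def cell_types_expressing_targets(drug):
    """Return cell types expressing at least one known target of the drug."""
    targets = drug_targets[drug]
    return sorted(ct for ct, genes in celltype_genes.items() if targets & genes)

print(cell_types_expressing_targets("GLP1_agonist"))  # ['pacemaker_cell']
```

In this toy setup, a GLP1 drug's target shows up in pacemaker cells, which mirrors the kind of unexpected hit the team describes.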

Dr James Cranley, joint first author, a cardiologist specialising in heart rhythm disorders and PhD student at the Wellcome Sanger Institute, said: "The cardiac conduction system is critical for the regular and coordinated beating of our hearts, yet the cells which make it up are poorly understood. This study sheds new light by defining the profiles of these cells, as well as the multicellular niches they inhabit. This deeper understanding opens the door to better, targeted anti-arrhythmic therapies in the future."

Dr Kazumasa Kanemaru, joint first author and Postdoctoral Fellow in the Gene Expression Genomics team at the Wellcome Sanger Institute, said: "The mechanism of activating and suppressing pacemaker cell genes is not clear, especially in humans. This is important for improving cell therapy to facilitate the production of pacemaker cells or to prevent the excessive spontaneous firing of cells. By understanding these cells at an individual genetic level, we can potentially develop new ways to improve heart treatments."

The study unearthed an unexpected discovery: a close relationship between conduction system cells and glial cells. Glial cells are part of the nervous system and are traditionally found in the brain. They have been explored very little in the heart. This research suggests that glial cells are in physical contact with conduction system cells and may play an important supporting role: communicating with the pacemaker cells, guiding nerve endings to them, and supporting their release of glutamate, a neurotransmitter.

Another key finding of the study is an immune structure on the heart's outer surface. This contains plasma cells, which release antibodies into the space around the heart to prevent infection from the nearby lungs. The researchers also identified a cellular niche enriched in a hormone that could be interpreted as an early warning sign of heart failure.

Dr Michela Noseda, senior Lecturer in Cardiac Molecular Pathology at the National Heart and Lung Institute, Imperial College London, a Coordinator of the Human Cell Atlas Heart BioNetwork and a lead author, said: "We often don't fully know what impact a new treatment will have on the heart and its electrical impulses -- this can mean a drug is withdrawn or fails to make it to the market. Our team developed the Drug2cell platform to improve how we evaluate new treatments and how they can affect our hearts, and potentially other tissues too. This could provide us with an invaluable tool to identify new drugs which target specific cells, as well as help to predict any potential side-effects early on in drug development."

Professor Metin Avkiran, Associate Medical Director at the British Heart Foundation, which part-funded the research with the German Centre for Cardiovascular Research (DZHK), said: "Using cutting-edge technologies, this research provides further intricate detail about the cells that make up specialised regions of the human heart and how those cells communicate with each other. The new findings on the heart's electrical conduction system and its regulation are likely to open up new approaches to preventing and treating rhythm disturbances that can impair the heart's function and may even become life-threatening."

"International collaboration is key to scientific progress. This impactful study and other discoveries from the broader Human Cell Atlas initiative are excellent examples of what can be achieved when the international research community works together across borders. Our combined efforts can ultimately produce better outcomes for patients worldwide."

Read more at Science Daily

Jul 12, 2023

Satellite security lags decades behind the state of the art

Thousands of satellites are currently orbiting the Earth, and there will be many more in the future. Researchers from Ruhr University Bochum and the CISPA Helmholtz Center for Information Security in Saarbrücken have assessed the security of these systems from an IT perspective. They analysed three current low-earth orbit satellites and found that, from a technical point of view, hardly any modern security concepts were implemented. Various security mechanisms that are standard in modern mobile phones and laptops were not to be found: for example, there was no separation of code and data. Interviews with satellite developers also revealed that the industry relies primarily on security through obscurity.

The results were presented by a team headed by Johannes Willbold, a PhD student from Bochum, Dr. Ali Abbasi, a researcher from Saarbrücken, and Professor Thorsten Holz, formerly in Bochum, now in Saarbrücken, at the IEEE Symposium on Security and Privacy, which took place in San Francisco from 22 to 25 May 2023. The paper was awarded a Distinguished Paper Award at the conference.

Research satellites and a commercial satellite put to the test

The examined satellites were two small models and one medium-sized model -- research satellites as well as a satellite of a commercial company -- which orbit the Earth at a short distance and are used to observe the Earth. Gaining access to satellites and their software was a challenge for the team, as commercial providers in particular rarely wish to reveal any details. The researchers eventually gained access through cooperation with the European Space Agency (ESA), various universities involved in the construction of satellites, and a commercial enterprise.

The team from Bochum and Saarbrücken conducted a thorough security analysis of the three models. They looked in detail at what the software running on the devices does and which communication protocols are used. They emulated the systems, i.e. rebuilt them virtually, so that they could test the software as if it were running on a real satellite. "It was a very different world from the systems we usually study. For example, completely different communication protocols were used," says Thorsten Holz of the process.

Systems with specific requirements

Satellites orbiting the Earth can be reached by their ground station only within a time window of a few minutes. The systems must be robust against the radiation in space and, because only a small amount of energy is available to them, they operate at low power. "The data rates are like those of modems in the 1990s," says Holz of the challenges satellite developers face.

Based on the findings gained from the software analysis, the researchers worked out various attack scenarios. They showed that they could cut off the satellites from ground control and seize control of the systems, for example in order to take pictures with the satellite camera. "We were surprised that the technical security level is so low," points out Thorsten Holz, adding the following caveat with regard to potential ramifications: "It wouldn't be all that easy to steer the satellite to another location, for example, to crash it or have it collide with other objects."

Survey among developers

To find out how the people who develop and build satellites approach security, the research team compiled a questionnaire and submitted it to research institutions, the ESA, the German Aerospace Centre and various enterprises. Nineteen developers participated anonymously in the survey. "The results show us that the understanding of security in the industry is different than in many other areas, specifically that it's security by obscurity," concludes Johannes Willbold. Many of the respondents therefore assumed that satellites could not be attacked because there is no documentation of the systems, i.e., nothing is known about them. Only a few said that they encrypt data when communicating with satellites or use authentication in order to ensure that only the ground station is allowed to communicate with the satellite.
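
To illustrate the kind of authentication many respondents said they do not use, a ground station could attach a message authentication code to every telecommand, so the satellite can reject commands from anyone who does not hold the key. This is a generic sketch with an invented key, not any real satellite protocol:

```python
# Illustrative sketch (not any real satellite protocol): authenticating
# ground-station telecommands with an HMAC so the satellite can reject
# commands that were not produced by the key-holding ground station.
import hashlib
import hmac

SECRET_KEY = b"shared-ground-station-key"  # hypothetical pre-shared key

def sign_command(command: bytes) -> bytes:
    """Ground station side: compute an authentication tag for a command."""
    return hmac.new(SECRET_KEY, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes) -> bool:
    """Satellite side: accept only commands with a valid tag."""
    # constant-time comparison to resist timing attacks
    return hmac.compare_digest(sign_command(command), tag)

cmd = b"TAKE_PICTURE"
tag = sign_command(cmd)
print(verify_command(cmd, tag))          # True  -- genuine command accepted
print(verify_command(b"DEORBIT", tag))   # False -- forged command rejected
```

Real missions would add replay protection and key management on top; the point here is only that even this minimal check was absent from the examined systems.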

Read more at Science Daily

Widespread illegal trade of hazardous chemicals

Fifty-four chemicals and groups of chemicals are covered by the Rotterdam Convention because of their high potential to cause severe harm to human health and the environment. These include mercury compounds, various pesticides and five of the six types of asbestos. The Convention, also known as the PIC Convention (Prior Informed Consent), does not ban these hazardous substances. However, the parties may only trade them among themselves if the importing country has expressly consented to the import.

The PIC procedure is primarily intended to protect developing countries from the uncontrolled import of highly hazardous chemicals, since these countries often lack the necessary infrastructure to safely process and dispose of them. Now, a new study initiated by Empa scientists delivers sobering results: the PIC procedure is violated for nearly half of the traded volume of these chemicals.

Worldwide violations

For the study, published on 10 July in the journal Nature Sustainability, researchers from China and Switzerland analyzed public trade data from the United Nations Comtrade database for 46 of the 54 listed chemicals. A total of 64.5 million tons were traded globally from 2004 to 2019. Of these, 27.5 million tons were traded illegally, i.e., exported to countries that had explicitly refused to import them.
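
The "nearly half" share follows directly from the reported totals:

```python
# The headline share, computed from the reported Comtrade figures (2004-2019).
total_mt = 64.5      # million tonnes of listed chemicals traded globally
illegal_mt = 27.5    # million tonnes exported against an explicit import refusal

illegal_share = illegal_mt / total_mt
print(f"{illegal_share:.0%}")  # 43% -- "nearly half" of the traded volume
```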

Non-compliance with the Rotterdam Convention is a worldwide phenomenon, especially by many countries in Western, Central and Southern Europe, as well as South and Southeast Asia. At the same time, these regions were also the most affected by illegal imports, along with the Middle East and North Africa, as well as Latin America. "This prevalent illegal trade is highly concerning because it undermines global efforts to protect us and our environment from hazardous chemicals," says Empa researcher Zhanyun Wang, who initiated the study.

According to Wang and his co-authors, the result of the study is a rather conservative estimate of the illicit trade in hazardous chemicals, as situations such as smuggling and black markets were not included in the analysis. In addition, the US, for example, exported about four million tons of chemicals to countries that refuse to import them under the Convention. However, this is not necessarily illegal -- because the US has not ratified the Rotterdam Convention, it is subject to different rules.

Ongoing large-scale trade

Wang also considers the very high overall volume of hazardous substances being traded problematic. Of the total 64.5 million tons, the majority -- 55.3 million tons -- is ethylene dichloride, a carcinogenic and organ-damaging solvent used in the production of polyvinyl chloride (PVC). In second place, with 6.3 million tons, is the toxic reagent, disinfectant and pesticide ethylene oxide.

The other chemicals, which are predominantly pesticides, make up a relatively small portion of the total. "But we see that these highly toxic compounds are still being traded in significant quantities," Wang says. "Since the Rotterdam Convention came into force, trade has decreased only slightly. Yet for many of these substances, we've known for decades how harmful they are."

Surprisingly, the authors also discovered a brisk trade in some substances that have been severely restricted or even banned for years to decades. These include, for instance, the legacy toxic pesticides aldrin, chlordane, heptachlor and dieldrin, which have been banned worldwide as the "Dirty Dozen" under the Stockholm Convention since 2004. Also still traded, albeit in much smaller quantities of several thousand tons, are the notoriously neurotoxic compounds tetraethyl lead and tetramethyl lead. Despite decades of global efforts to phase them out in gasoline for normal cars, they seem to be still used in certain specialty fuels.

Strengthening national and international action

All of the data used in the study are public -- so why aren't countries addressing the violations? There are several reasons. "For many countries, the environmental ministry is responsible for implementing the Rotterdam Convention," Wang explains. "But trade is supervised by the customs authority." In addition, there are often insufficient resources available to monitor the chemical trade, especially in developing countries.

The researchers recommend stepping up international and national action against the global trade in highly hazardous chemicals, particularly illegal trade. Among other measures, further problematic chemicals should be listed under the Convention, such as chrysotile asbestos. This type of asbestos is by far the most common -- and the only one of the six types of asbestos not yet covered by the Convention. "Switzerland has recently taken the initiative here to bring about changes, along with several other countries, but so far without success," Wang says.

The Rotterdam Convention, meanwhile, has only had a Compliance Committee to monitor and address its implementation since 2020. "We are hopeful that this, together with national efforts on reducing the production and use of highly hazardous chemicals, will greatly reduce illegal trade in the future," Wang says.

Read more at Science Daily

Size does matter: Group size and mating preferences drive deeper male voices

Deeper male voices in primates, including humans, offer more than sex appeal -- they may have evolved as another way for males to drive off competitors in large groups that favored polygyny, or mating systems where a male has multiple mates, according to researchers. The research is the most comprehensive investigation of differences in vocal pitch between sexes to date and has the potential to help to shed light on social behavior in humans and their closest living relatives.

The average speaking pitch of an adult male human is about half that of an adult female human, i.e. an octave lower, said David Puts, professor of anthropology at Penn State and study co-author.

"It's a sex difference that emerges at sexual maturity across species and it probably influences mating success through attracting mates or by intimidating competitors," he said. "I thought it has to be a trait that's been subjected to sexual selection, in which mating opportunities influence which traits are passed down to offspring. Humans and many other primates are highly communicative, especially through vocal communication. So it seems like a really relevant trait for thinking about social behavior in humans and primates in general."

The researchers used specialized computer software to visualize vocalizations and measure voice pitch in recordings from 37 anthropoid primate species (those most closely related to humans, including gorillas and chimpanzees), as well as recordings of 60 humans evenly divided by sex. Samples for each species included at least two male and two female vocal recordings, for a total of 1,914 vocalizations. The team then calculated average male and female vocal fundamental frequency for each species to see how pronounced the difference was between the sexes.
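
The study used specialized software for this measurement; a generic way to estimate a recording's fundamental frequency is autocorrelation, sketched here on a synthetic tone (not the study's data or tools):

```python
# Generic sketch of fundamental-frequency (F0) estimation by autocorrelation,
# run on a synthetic tone -- not the study's data or its specialised software.
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate F0 as the autocorrelation peak within [fmin, fmax] Hz."""
    signal = signal - signal.mean()
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo = int(sample_rate / fmax)               # shortest lag considered
    hi = int(sample_rate / fmin)               # longest lag considered
    best_lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / best_lag

sr = 8000
t = np.arange(sr) / sr                          # one second of audio
male_like = np.sin(2 * np.pi * 120.0 * t)       # ~120 Hz, typical adult male
print(round(estimate_f0(male_like, sr)))        # ~120
```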

The scientists collected additional information for each species to help identify correlations between male versus female voice pitch and factors that could have contributed to the trait's evolution. Additional variables included body size and body mass differences between males and females, habitat type, adult sex ratios, mating competition intensity and testes size. They also categorized each species by mating system -- monogamous, in which males and females have one mate at a time; polygynandrous, in which males and females have multiple mating partners; and polygynous, in which some males have several mates.

The researchers used these data to test five hypotheses simultaneously to identify which factors may have played the strongest roles in driving sex differences in vocal pitch. The hypotheses were: intensity of mating competition, large group size, multilevel social organization, trade-off against the intensity of sperm competition, and poor acoustic habitats. Previous research has looked at one or two of these hypotheses at a time. The current study is the first to test multiple hypotheses simultaneously for vocal pitch differences using a robust dataset, ensuring data consistency and garnering convincing results, according to Puts.

The team found that fundamental frequency differences by sex increased in larger groups and those with polygynous mating systems, especially in groups with a higher female-to-male ratio. They reported their findings today (July 10) in Nature Communications.

"Our findings highlight the important role of sexual selection and offer possible evolutionary explanations for why males and females differ in voice pitch across primates," said Toe Aung, first author and assistant professor of psychology and counseling at Immaculata University, who worked on the study as part of his doctoral dissertation at Penn State. "This research also provides insight into sex differences in voice pitch in our common ancestors who lived millions of years ago."

Deeper male voices may act as an additional way to fend off mating competitors without costly fighting, making males sound bigger in the same way as physical traits like height and muscle size, according to the researchers. In adult humans, for instance, males vocalize at an average of about 120 hertz whereas females vocalize at an average of about 220 hertz, a sex difference squarely within the range the researchers found for polygynous species.

"Although social monogamy is really common in humans, mating and reproduction in our ancestors was substantially polygynous," Puts said. "Our findings help us to understand why male and female voices of our species differ so drastically. It may be a product of our evolutionary history, particularly our history of living in large groups in which some males reproduced with multiple females."

Read more at Science Daily

Capturing the immense potential of microscopic DNA for data storage

In a world first, a 'biological camera' bypasses the constraints of current DNA storage methods, harnessing living cells and their inherent biological mechanisms to encode and store data. This represents a significant breakthrough in encoding and storing images directly within DNA, creating a new model for information storage reminiscent of a digital camera.

Led by Principal Investigator Associate Professor Chueh Loo Poh from the College of Design and Engineering at the National University of Singapore, and the NUS Synthetic Biology for Clinical and Technological Innovation (SynCTI), the team's findings, which could potentially shake up the data-storage industry, were published in Nature Communications on 3 July 2023.

A new paradigm to address global data overload

As the world continues to generate data at an unprecedented rate, data has come to be seen as the 'currency' of the 21st century. The Global Datasphere, estimated at 33 ZB in 2018, is forecast to reach 175 ZB by 2025. That has sparked a quest for a storage alternative that can transcend the confines of conventional data storage and address the environmental impact of resource-intensive data centres.

It is only recently that the idea of using DNA to store other types of information, such as images and videos, has garnered attention. This is due to DNA's exceptional storage capacity, stability, and long-standing relevance as a medium for information storage.

"We are facing an impending data overload. DNA, the key biomaterial of every living thing on Earth, stores genetic information that encodes for an array of proteins responsible for various life functions. To put it into perspective, a single gram of DNA can hold over 215,000 terabytes of data -- equivalent to storing 45 million DVDs combined," said Assoc Prof Poh.
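
The quoted DVD equivalence can be checked with a quick calculation (assuming single-layer 4.7 GB DVDs):

```python
# Sanity-checking the quoted figure: 215,000 TB per gram of DNA versus
# single-layer DVDs at 4.7 GB each.
dna_bytes_per_gram = 215_000 * 10**12   # 215,000 TB
dvd_bytes = 4.7 * 10**9                 # 4.7 GB per single-layer DVD

dvds = dna_bytes_per_gram / dvd_bytes
print(f"{dvds / 1e6:.1f} million DVDs")  # ~45.7 million, matching the claim
```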

"DNA is also easy to manipulate with current molecular biology tools, can be stored in various forms at room temperature, and is so durable it can last centuries," says Cheng Kai Lim, a graduate student working with Assoc Prof Poh.

Despite its immense potential, current research in DNA storage focuses on synthesising DNA strands outside the cells. This process is expensive and relies on complex instruments, which are also prone to errors.

To overcome this bottleneck, Assoc Prof Poh and his team turned to live cells, which contain an abundance of DNA that can act as a 'data bank', circumventing the need to synthesise the genetic material externally.

Through sheer ingenuity and clever engineering, the team developed 'BacCam' -- a novel system that merges various biological and digital techniques to emulate a digital camera's functions using biological components.

"Imagine the DNA within a cell as an undeveloped photographic film," explained Assoc Prof Poh. "Using optogenetics -- a technique that controls the activity of cells with light akin to the shutter mechanism of a camera, we managed to capture 'images' by imprinting light signals onto the DNA 'film'."

Next, using barcoding techniques akin to photo labelling, the researchers marked the captured images for unique identification. Machine-learning algorithms were employed to organise, sort, and reconstruct the stored images. These constitute the 'biological camera', mirroring a digital camera's data capture, storage, and retrieval processes.

The study showcased the camera's ability to capture and store multiple images simultaneously using different light colours. More crucially, compared to earlier methods of DNA data storage, the team's innovative system is easily reproducible and scalable.

"As we push the boundaries of DNA data storage, there is an increasing interest in bridging the interface between biological and digital systems," said Assoc Prof Poh.

Read more at Science Daily

Jul 11, 2023

Reinventing cosmology: New research puts age of universe at 26.7 -- not 13.7 -- billion years

Our universe could be twice as old as current estimates, according to a new study that challenges the dominant cosmological model and sheds new light on the so-called "impossible early galaxy problem."

"Our newly-devised model stretches the galaxy formation time by several billion years, making the universe 26.7 billion years old, and not 13.7 as previously estimated," says author Rajendra Gupta, adjunct professor of physics in the Faculty of Science at the University of Ottawa.

For years, astronomers and physicists have calculated the age of our universe by measuring the time elapsed since the Big Bang and by studying the oldest stars based on the redshift of light coming from distant galaxies. In 2021, thanks to new techniques and advances in technology, the age of our universe was thus estimated at 13.797 billion years using the Lambda-CDM concordance model.

However, many scientists have been puzzled by the existence of stars like the Methuselah that appear to be older than the estimated age of our universe and by the discovery of early galaxies in an advanced state of evolution made possible by the James Webb Space Telescope. These galaxies, existing a mere 300 million years or so after the Big Bang, appear to have a level of maturity and mass typically associated with billions of years of cosmic evolution. Furthermore, they're surprisingly small in size, adding another layer of mystery to the equation.

Zwicky's tired light theory proposes that the redshift of light from distant galaxies is due to the gradual loss of energy by photons over vast cosmic distances. However, it was seen to conflict with observations. Yet Gupta found that "by allowing this theory to coexist with the expanding universe, it becomes possible to reinterpret the redshift as a hybrid phenomenon, rather than purely due to expansion."

In addition to Zwicky's tired light theory, Gupta introduces the idea of evolving "coupling constants," as hypothesized by Paul Dirac. Coupling constants are fundamental physical constants that govern the interactions between particles. According to Dirac, these constants might have varied over time. By allowing them to evolve, the timeframe for the formation of early galaxies observed by the Webb telescope at high redshifts can be extended from a few hundred million years to several billion years. This provides a more feasible explanation for the advanced level of development and mass observed in these ancient galaxies.
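
One schematic way to write such a hybrid interpretation, assuming the two redshift contributions combine multiplicatively (a sketch of the general idea rather than the paper's exact formulation), is:

```latex
% Hybrid redshift: expansion and "tired light" contributions combine as
(1 + z_{\text{observed}}) = (1 + z_{\text{expansion}}) \, (1 + z_{\text{tired light}})
```

Attributing only part of the observed redshift to expansion leaves more cosmic time at a given redshift, which is what allows the galaxy formation timeline to stretch.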

Read more at Science Daily

Carbon taxes that focus on luxury consumption are fairer than those that tax all emissions equally

Not all carbon emissions are made for the same reason -- they range from more essential purposes like heating a home to nonessential "luxury" activities like leisure travel. However, proposals for implementing carbon taxes tend to apply to all emissions at an equal rate. This can give rise to and exacerbate inequalities. A new analysis published on July 11 in the journal One Earth suggests taxing luxury carbon emissions at a higher rate instead; if all 88 countries analyzed in this study adopted the luxury-focused policy, this would achieve 75% of the emissions reduction needed to reach the Paris Agreement's goal of limiting climate change to well below 2°C by 2050.

"There is an injustice in terms of who uses energy, or carbon, for basic or luxury purposes, but it hasn't been translated into explicit policy yet," says Yannick Oswald, an economist at the University of Leeds. "In this study, we test policies derived from this knowledge for the first time."

Several countries -- such as Canada and Mexico -- have active carbon pricing policies. These policies either price all emissions at an equal rate or target one type of emission, such as heat or fuel. However, past research has shown that, in high-income countries, these policies tend to affect low-income households the most while failing to have a large impact on emissions. This might be because resources such as heat or fuel make up a greater portion of low-income spending and are difficult to do without.

To test the impact of a tax program that distinguishes between carbon emissions from basic or luxury activities, the researchers built a model based on household carbon footprints from 88 different countries. For each country, they designed a tax rate for different types of purchases, ensuring activities that make up a greater proportion of low-income spending would be taxed less relative to activities that make up a greater proportion of high-income spending. In the US, for example, vacation travel would be taxed at a higher rate than heating.

They used this model to test the outcome of either their luxury carbon tax rates or a uniform carbon tax rate. Under a uniform tax rate, 37% of global carbon tax revenue would come from luxury purchases. This increases to 52% under a luxury-focused tax program.
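
A toy two-category model illustrates how differentiated rates shift the luxury share of revenue. The emissions split below is set to match the study's 37% uniform-rate baseline, but the differentiated rates themselves are invented, not the study's:

```python
# Toy illustration (invented rates, not the study's 88-country model):
# how luxury-weighted tax rates raise the share of carbon-tax revenue
# that comes from luxury emissions.
basic_emissions = 63.0    # arbitrary units of CO2 from basic consumption
luxury_emissions = 37.0   # arbitrary units of CO2 from luxury consumption

def luxury_revenue_share(basic_rate, luxury_rate):
    """Fraction of total tax revenue raised from luxury emissions."""
    basic_rev = basic_rate * basic_emissions
    luxury_rev = luxury_rate * luxury_emissions
    return luxury_rev / (basic_rev + luxury_rev)

print(f"{luxury_revenue_share(1.0, 1.0):.0%}")  # uniform rate: 37%
print(f"{luxury_revenue_share(0.7, 1.3):.0%}")  # luxury-weighted: 52%
```

With these invented rates the shift echoes the study's 37%-to-52% change, showing the mechanism rather than reproducing the model.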

Not only was the luxury tax "fairer" based on household income -- affecting low-income households less and high-income households more -- it also was slightly better at reducing yearly household emissions in the very short-term. The researchers note that this might be because it is more feasible to forgo luxury purchases than an essential purchase if the price increases.

While the luxury tax proved fairer in all countries studied, the researchers found that, in low-income countries, a uniform tax could also be fair. In South Africa, for example, low-income households already spend much less on fuel or heating than high-income households. Thus, a uniform carbon tax is already targeting high-income groups by design. In contrast, the luxury carbon tax is most beneficial in terms of fairness when applied to high-income countries. This tax can better account for flexible, nonessential purchases in countries like the United States, where it is difficult to avoid carbon-emitting activities like driving a car in a low-income lifestyle.

While this type of policy could make significant progress towards reducing global emissions, the researchers also note that this goal might be difficult to achieve in practice. Few countries have a carbon tax scheme that is currently this rigorous. Luxury-focused carbon taxation also targets high-income groups, which may be the most equipped to lobby against such a policy going into effect.

Read more at Science Daily

Scientists discover 36-million-year geological cycle that drives biodiversity

Movement in the Earth's tectonic plates indirectly triggers bursts of biodiversity in 36-million-year cycles by forcing sea levels to rise and fall, new research has shown.

Researchers including geoscientists at the University of Sydney believe these geologically driven cycles of sea level changes have a significant impact on the diversity of marine species, going back at least 250 million years.

As water levels rise and fall, different habitats on the continental shelves and in shallow seas expand and contract, providing opportunities for organisms to thrive or die. By studying the fossil record, the scientists have shown that these shifts trigger bursts of new life to emerge.

The research has been published in the journal Proceedings of the National Academy of Sciences, led by Associate Professor Slah Boulila from Sorbonne University in Paris.

Study co-author Professor Dietmar Müller, from the School of Geosciences at the University of Sydney, said: "In terms of tectonics, the 36-million-year cycle marks alterations between faster and slower seafloor spreading, leading to cyclical depth changes in ocean basins and in the tectonic transfer of water into the deep Earth.

"These in turn have led to fluctuations in the flooding and drying up of continents, with periods of extensive shallow seas fostering biodiversity.

"This work was enabled by the GPlates plate tectonic software, developed by the EarthByte Group at the University of Sydney, supported by Australia's National Collaborative Research Infrastructure Strategy (NCRIS) via AuScope."

The team based their findings on the discovery of strikingly similar cycles in sea-level variations, Earth's interior mechanisms and marine fossil records.
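
On synthetic data, a periodicity like this can be recovered with a Fourier transform (an illustration of the general technique, not the study's analysis):

```python
# Illustration (synthetic series, not the study's records): recovering a
# ~36-million-year periodicity from a time series with the FFT.
import numpy as np

dt = 1.0                                   # sampling interval: 1 Myr
t = np.arange(0.0, 250.0, dt)              # 250 Myr of record
series = np.sin(2 * np.pi * t / 36.0)      # synthetic 36-Myr cycle

freqs = np.fft.rfftfreq(len(t), dt)        # frequencies in cycles per Myr
power = np.abs(np.fft.rfft(series)) ** 2
dominant = 1.0 / freqs[1:][np.argmax(power[1:])]   # skip the zero frequency
print(round(dominant, 1))                  # dominant period in Myr, ~36
```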

Scientists now have overwhelming evidence that tectonic cycles and global sea level change driven by Earth's dynamics have played a crucial role in shaping the biodiversity of marine life over millions of years.

"This research challenges previous ideas about why species have changed over long periods," Professor Müller said.

"The cycles are 36 million years long because of regular patterns in how tectonic plates are recycled into the convecting mantle, the mobile part of the deep Earth, similar to hot, thick soup in a pot, that moves slowly."

Professor Müller said the Cretaceous Winton Formation in Queensland serves as a prime example of how sea-level changes have shaped ecosystems and influenced biodiversity in Australia.

The formation, renowned for its collection of dinosaur fossils and precious opal, provides a valuable window into a time when much of the Australian continent was flooded.

As sea levels rose and fell, the flooding of the continent created expanding and contracting ecological recesses in shallow seas, providing unique habitats for a wide range of species.

Read more at Science Daily

Revolutionary self-sensing electric artificial muscles

Researchers from Queen Mary University of London have made groundbreaking advancements in bionics with the development of a new electric variable-stiffness artificial muscle. Published in Advanced Intelligent Systems, this innovative technology possesses self-sensing capabilities and has the potential to revolutionize soft robotics and medical applications. The artificial muscle seamlessly transitions between soft and hard states, while also sensing forces and deformations. With flexibility and stretchability similar to natural muscle, it can be integrated into intricate soft robotic systems and adapt to various shapes. By adjusting voltages, the muscle rapidly changes its stiffness and can monitor its own deformation through resistance changes. The fabrication process is simple and reliable, making it ideal for a range of applications, including aiding individuals with disabilities or patients in rehabilitation training.

In a study published recently in Advanced Intelligent Systems, researchers from Queen Mary University of London have made significant advancements in the field of bionics with the development of a new type of electric variable-stiffness artificial muscle that possesses self-sensing capabilities. This innovative technology has the potential to revolutionize soft robotics and medical applications.

The hardening of muscle during contraction is essential not only for enhancing strength but also for enabling rapid reactions in living organisms. Taking inspiration from nature, the team of researchers at QMUL's School of Engineering and Materials Science has created an artificial muscle that seamlessly transitions between soft and hard states while also sensing the forces and deformations acting on it.

Dr. Ketao Zhang, a Lecturer at Queen Mary and the lead researcher, explains the importance of variable stiffness technology in artificial muscle-like actuators. "Empowering robots, especially those made from flexible materials, with self-sensing capabilities is a pivotal step towards true bionic intelligence," says Dr. Zhang.

The cutting-edge artificial muscle developed by the researchers exhibits flexibility and stretchability similar to natural muscle, making it easy to integrate into intricate soft robotic systems and to adapt to various geometries. Able to withstand more than 200% stretch along its length, this striped flexible actuator demonstrates exceptional durability.

By applying different voltages, the artificial muscle can rapidly adjust its stiffness, achieving continuous modulation with a more than 30-fold change in stiffness. Its voltage-driven nature gives it a significant advantage in response speed over other types of artificial muscles. The muscle can also monitor its own deformation through changes in its electrical resistance, eliminating the need for additional sensors, simplifying control mechanisms, and reducing costs.
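The two behaviors described above, voltage-controlled stiffness and resistance-based self-sensing, can be sketched in code. This is purely an illustration, not the researchers' model: the gauge-factor relation is the standard strain-gauge formula, and all calibration numbers (rest resistance, drive-voltage range, 30x stiffness ratio as a linear mapping) are made-up placeholders.

```python
# Illustrative sketch only: hypothetical calibration of a self-sensing,
# variable-stiffness actuator. All constants are invented for illustration.

def strain_from_resistance(r_ohms, r_rest=100.0, gauge_factor=2.0):
    """Estimate strain from measured resistance.

    Uses the standard strain-gauge relation: strain = (R - R0) / (R0 * GF).
    A deformed actuator changes resistance, so no external sensor is needed.
    """
    return (r_ohms - r_rest) / (r_rest * gauge_factor)

def stiffness_from_voltage(v, k_soft=1.0, k_ratio=30.0, v_max=5000.0):
    """Map drive voltage to relative stiffness.

    Assumes (hypothetically) a continuous, linear modulation between the
    soft state (1x) and a state roughly 30 times stiffer at full voltage.
    """
    v = min(max(v, 0.0), v_max)          # clamp to the valid drive range
    return k_soft * (1.0 + (k_ratio - 1.0) * v / v_max)

# At rest resistance the estimated strain is zero; at full drive voltage
# the relative stiffness reaches the assumed 30x ratio.
print(strain_from_resistance(120.0))     # 10% strain under this calibration
print(stiffness_from_voltage(5000.0))    # 30x relative stiffness
```

In a real controller, both calibrations would be fitted from measurements of the fabricated actuator rather than assumed linear.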

The fabrication process for this self-sensing artificial muscle is simple and reliable. Carbon nanotubes are dispersed in liquid silicone using ultrasonic dispersion technology and coated uniformly with a film applicator to create the thin layered cathode, which also serves as the sensing element of the artificial muscle. The anode is cut directly from a soft metal mesh, and the actuation layer is sandwiched between the cathode and the anode. Once the liquid materials cure, a complete self-sensing, variable-stiffness artificial muscle is formed.

The potential applications of this flexible variable stiffness technology are vast, ranging from soft robotics to medical applications. The seamless integration with the human body opens up possibilities for aiding individuals with disabilities or patients in performing essential daily tasks. By integrating the self-sensing artificial muscle, wearable robotic devices can monitor a patient's activities and provide resistance by adjusting stiffness levels, facilitating muscle function restoration during rehabilitation training.

"While there are still challenges to be addressed before these medical robots can be deployed in clinical settings, this research represents a crucial stride towards human-machine integration," highlights Dr. Zhang. "It provides a blueprint for the future development of soft and wearable robots."

Read more at Science Daily

Jul 10, 2023

Earth formed from dry, rocky building blocks

Billions of years ago, in the giant disk of dust, gas, and rocky material that orbited our young sun, larger and larger bodies coalesced to eventually give rise to the planets, moons, and asteroids we see today. Scientists are still trying to understand the processes by which planets, including our home planet, were formed. One way researchers can study how Earth formed is to examine the magmas that flow up from deep within the planet's interior. The chemical signatures from these samples contain a record of the timing and the nature of the materials that came together to form Earth -- analogous to how fossils give us clues about Earth's biological past.

Now, a study from Caltech shows that the early Earth accreted from hot and dry materials, indicating that our planet's water -- the crucial component for the evolution of life -- must have arrived late in the history of Earth's formation.

The study, involving an international team of researchers, was conducted in the laboratories of Francois Tissot, assistant professor of geochemistry and Heritage Medical Research Institute Investigator; and Yigang Zhang of the University of Chinese Academy of Sciences. A paper describing the research appears in the journal Science Advances. Caltech graduate student Weiyi Liu is the paper's first author.

Though humans do not have a way to journey into the interior of our planet, rocks from deep within the Earth can naturally make their way to the surface in the form of lavas. The parental magmas of these lavas can originate from different depths, such as the upper mantle, which begins around 15 kilometers below the surface and extends down to about 680 kilometers; or the lower mantle, which spans from a depth of 680 kilometers to the core-mantle boundary, about 2,900 kilometers below our feet. Like sampling different layers of a cake -- the frosting, the filling, the sponge -- scientists can study magmas originating from different depths to understand the different "flavors" of Earth's layers: the chemicals found within and their ratios with respect to one another.

Because the formation of Earth was not instantaneous and instead involved materials accreting over time, samples from the lower mantle and upper mantle give different clues to what was happening during Earth's accretion. In the new study, the team found that the early Earth was primarily composed of dry, rocky materials: chemical signatures from deep within the planet showed a lack of so-called volatiles, which are easily evaporated materials like water and iodine. In contrast, samples of the upper mantle revealed a higher proportion of volatiles, three times that found in the lower mantle. Based on these chemical ratios, Liu created a model showing that Earth formed from hot, dry, rocky materials, and that a major addition of life-essential volatiles, including water, only occurred during the last 15 percent (or less) of Earth's formation.
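The logic of a late volatile delivery can be illustrated with a toy two-stage mass balance. This is my own simplification, not the authors' model: it just shows that if the first portion of accreted mass is dry and only the final fraction carries volatiles, the planet's bulk volatile budget is set almost entirely by that late-arriving material.

```python
# Toy two-stage accretion mass balance (illustrative only, not the
# published model). Concentrations are in arbitrary relative units.

def bulk_volatile_fraction(f_late, c_late, c_early=0.0):
    """Bulk volatile concentration after two-stage accretion.

    f_late  -- mass fraction of the planet accreted in the late stage
    c_late  -- volatile concentration of late-accreted material
    c_early -- volatile concentration of early material (dry = 0)
    """
    return (1.0 - f_late) * c_early + f_late * c_late

# If volatiles arrive only in the last 15% of accreted mass, the bulk
# volatile content is 15% of the late material's concentration, while
# the early-formed (deep) reservoir stays dry.
print(bulk_volatile_fraction(f_late=0.15, c_late=1.0))  # 0.15
```

Real accretion models track many elements with different volatilities and mixing between reservoirs; the point of the sketch is only the mass-balance intuition.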

The study is a crucial contribution to theories of planet formation, a field which has undergone several paradigm shifts in recent decades and is still characterized by vigorous scientific debate. In this context, the new study makes important predictions for the nature of the building blocks of other terrestrial planets -- Mercury and Venus -- which would be expected to have formed from similarly dry materials.

"Space exploration to the outer planets is really important because a water world is probably the best place to look for extraterrestrial life," Tissot says. "But the inner solar system shouldn't be forgotten. There hasn't been a mission that's touched Venus's surface for nearly 40 years, and there has never been a mission to the surface of Mercury. We need to be able to study those worlds to better understand how terrestrial planets such as Earth formed."

Read more at Science Daily

Roots are capable of measuring heat on their own

Plant roots have their own thermometer to measure the temperature of the soil around them and they adjust their growth accordingly. Through extensive experiments, a team led by Martin Luther University Halle-Wittenberg (MLU), was able to demonstrate that roots have their own temperature sensing and response system. In a new study in The EMBO Journal, the scientists also provide a new explanation for how roots themselves detect and react to higher temperatures. The results could help develop new approaches for plant breeding.

The researchers used climate chambers to investigate how the plant model organism thale cress and the two crops cabbage and tomato react to rising ambient temperatures. They increased the ambient temperature from 20 to 28°C (68 to 82.4 degrees Fahrenheit). "Until now, it was assumed that the plant shoot controlled the process for the entire plant and acted as a long-distance transmitter that signalled to the root that it should alter its growth," says Professor Marcel Quint from the Institute of Agricultural and Nutritional Sciences at MLU. His team has now been able to disprove this through extensive experiments in cooperation with researchers from the Leibniz Institute of Plant Biochemistry (IPB), ETH Zurich and the Max Planck Institute for Plant Breeding Research in Cologne. In one experiment, the scientists cut off the shoots of the plants but allowed the roots to continue to grow. "We found that the roots were not affected by this and grew at elevated temperatures in the same way as on plants with intact shoots. The higher temperature stimulated cell division and the roots became significantly longer," says Quint. The team also used mutant plants whose shoots could no longer detect and respond to higher temperatures, grafting them onto roots without this defect. Here, too, the roots were able to react to the heat in the soil, even though the shoots themselves could not respond to it.

The researchers found in all of their experiments that root cells increased the production of the growth hormone auxin, which was then transported to the root tips. There, it stimulated cell division and enabled the roots to reach further down into the soil. "As heat and drought usually occur in tandem, it makes sense for the plants to tap into deeper and cooler soil layers that contain water," Quint explains.

Scientists have understood how plant shoots react to higher temperatures for some time. Their cells also produce more auxin, but the plant reacts differently than its roots. The cells in the shoot stretch, the stalk grows taller, and the leaves become narrower and grow farther apart.

The study also provides new insights for plant breeding. "In view of climate change, root growth is becoming more and more important for breeding. Understanding the molecular basis for temperature-dependent root growth might help to effectively equip plants against drought stress and achieve stable yields in the long term," says Quint. Quint's team will continue its work in this field of research in the coming years. A few weeks ago, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) granted him around 500,000 euros for a new research project on precisely this topic.

Read more at Science Daily

Human-made materials in nests can bring both risks and benefit for birds

We all discard a huge amount of plastic and other human-made materials into the environment, and these are often picked up by birds. New research has shown that 176 bird species around the world are now known to include a wide range of anthropogenic materials in their nests. All over the world, birds are using our left-over or discarded materials. Seabirds in Australia incorporate fishing nets into their nests, ospreys in North America include baler twine, birds living in cities in South America add cigarette butts, and common blackbirds in Europe pick up plastic bags to add to their nests.

This material found in birds' nests can be beneficial, say researchers. For example, cigarette butts retain nicotine and other compounds that repel the ectoparasites that attach themselves to nestlings' skin and suck their blood. Meanwhile, there are suggestions that harder human-made materials may help provide structural support for birds' nests, while plastic films could provide insulation and keep offspring warm. Despite such potential benefits, it is important to remember that anthropogenic material can also be harmful to birds.

This research was published in a special issue of the Philosophical Transactions of the Royal Society B on "The evolutionary ecology of nests: a cross-taxon approach." The special issue was co-organised by Mark Mainwaring, a Lecturer in Global Change Biology in the School of Natural Sciences at Bangor University.

Mark Mainwaring said, "The special issue highlights that the nests of a wide range of taxa -- from birds to mammals to fish to reptiles -- allow them to adapt to human-induced pressures. Those pressures range from the inclusion of anthropogenic materials into their nests through to providing parents and offspring with a place to protect themselves from increasingly hot temperatures in a changing climate."

Anthropogenic materials sometimes harm birds. Parents and offspring can become fatally entangled in baler twine, and offspring sometimes ingest anthropogenic material after mistaking it for natural prey. Finally, the inclusion of colourful anthropogenic materials in nests can attract predators, which then prey upon the eggs or nestlings. This means that we need to reduce the amount of plastic and other anthropogenic material that we discard.

The lead author of the study, Zuzanna Jagiełło who is based at the Poznań University of Life Sciences in Poland, added, "A wide variety of bird species included anthropogenic materials into their nests. This is worrying because it is becoming increasingly apparent that such materials can harm nestlings and even adult birds."

Read more at Science Daily

Conservation in Indonesia is at risk, a team of researchers who study the region argues

Indonesia, home to the largest tropical rainforest in Southeast Asia and over 17,500 islands, is a country packed with biodiversity and endangered species. However, scientists studying the region's species and ecosystems are being banned from Indonesia, and conservation plans are being blocked. In a letter published in the journal Current Biology on July 10, a team of conservation researchers with long-term experience in Indonesia discusses scientific suppression and other research challenges they have witnessed while working in the region. They offer suggestions for how to promote nature conservation, protect data transparency, and share research with the public in this and other regions of the world.

"If you look at a heat map of the Earth, and where endangered species are located, Indonesia and that general region are just off the charts," says tropical environmental scientist William F. Laurance of James Cook University, who has been doing research on the environmental impacts of development in Southeast Asia for over a decade.

Laurance and his co-authors say they felt drawn to raise awareness about the issues facing conservation in Indonesia because during their time working in the region, they witnessed many instances when governments and corporations impeded research -- including their own.

For example, they write in the letter, in 2022, five leading conservation researchers were banned from working in Indonesia on the premise that they had "negative intentions" to "discredit the government." The researchers reference papers about forest conservation and wildlife management in Sumatra, for which the teams had multiple colleagues from Indonesia decline co-authorship "out of concerns that it might adversely impact their funding, research permits, or opportunities for commercial contracts in Indonesia."

"The researchers said, 'Well, no, you can't tell that story, even though it's true, and you can't identify me or include all the relevant details.' And this just kept happening over and over again. It's a climate of fear," says Laurance.

To protect environmental research in Indonesia and the contributors who work on it, Laurance and his team suggest that organizations funding research in the region require data transparency for studies that they support. They also recommend the implementation and use of online "safe houses" (whistleblower websites designed to protect anonymity and prevent information leakage) and anonymized journals (publications in which contributors are not named). They say these interventions could help researchers get information out to the public without worrying about the consequences of being personally tied to their findings.

The authors do note that several organizations are advocating for change, especially in Indonesia. Some examples of these groups include the Indonesian Caucus for Academic Freedom and the Jakarta Legal Aid Foundation, which are organizing to support conservation and thwart efforts to silence researchers. They also note that "scientific suppression is by no means unique to Indonesia."

Read more at Science Daily