Apr 13, 2024

Exoplanets true to size

A star's magnetic field must be considered in order to correctly determine the characteristics of its exoplanets from observations by space telescopes such as Kepler, James Webb, or PLATO. This is demonstrated by new model calculations presented today in the journal Nature Astronomy by a research group led by the Max Planck Institute for Solar System Research (MPS) in Germany. The researchers show that the distribution of a star's brightness over its disk depends on the star's level of magnetic activity. This, in turn, affects the signature of an exoplanet in observational data. The new model must be used to properly interpret the data from the latest generation of space telescopes pointed at distant worlds outside our Solar System.

700 light years away from Earth in the constellation Virgo, the planet WASP-39b orbits the star WASP-39. The gas giant, which takes little more than four days to complete one orbit, is one of the best-studied exoplanets: Shortly after its commissioning in July 2022, NASA's James Webb Space Telescope turned its high-precision gaze on the distant planet. The data revealed evidence of large quantities of water vapor, of methane and even, for the first time, of carbon dioxide in the atmosphere of WASP-39b. A minor sensation! But there is still one fly in the ointment: researchers have not yet succeeded in reproducing all the crucial details of the observations in model calculations. This stands in the way of even more precise analyses of the data. In the new study led by the MPS, the authors, including researchers from the Massachusetts Institute of Technology (USA), the Space Telescope Science Institute (USA), Keele University (United Kingdom), and the University of Heidelberg (Germany), show a way to overcome this obstacle.

"The problems arising when interpreting the data from WASP-39b are well known from many other exoplanets -- regardless whether they are observed with Kepler, TESS, James Webb, or the future PLATO spacecraft," explains MPS scientist Dr. Nadiia Kostogryz, first author of the new study. "As with other stars orbited by exoplanets, the observed light curve of WASP-39 is flatter than previous models can explain," she adds.

Researchers define a light curve as a measurement of the brightness of a star over an extended period of time. The brightness of a star fluctuates constantly, for example because its luminosity is subject to natural variations. Exoplanets, too, can leave traces in the light curve. If an exoplanet passes in front of its star as seen by an observer, it dims the starlight. This shows up in the light curve as a regularly recurring dip in brightness. Precise evaluations of such curves provide information about the size and orbital period of the planet. Researchers can also obtain information about the composition of the planet's atmosphere if the light from the star is split into its different wavelengths or colours.
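
To make the geometry concrete: a planet covering a fraction (Rp/Rs)² of the stellar disk blocks roughly that fraction of the light. The sketch below is a minimal, illustrative transit model with a uniform stellar disk and made-up parameters; it is not the study's model, which additionally accounts for limb darkening and magnetic fields.

```python
# Minimal box-shaped transit model: flux = 1 out of transit,
# 1 - (Rp/Rs)^2 during transit. All numbers are hypothetical.
import numpy as np

def box_transit(time, t0, duration, rp_over_rs):
    """Relative stellar flux versus time for a central transit."""
    depth = rp_over_rs ** 2
    in_transit = np.abs(time - t0) < duration / 2
    return np.where(in_transit, 1.0 - depth, 1.0)

t = np.linspace(-0.2, 0.2, 1000)  # days around mid-transit
flux = box_transit(t, t0=0.0, duration=0.1, rp_over_rs=0.14)
print(f"transit depth: {1 - flux.min():.4f}")  # ~0.02 for Rp/Rs = 0.14
```

Measuring the depth of the dip thus yields the planet-to-star radius ratio, and the interval between dips yields the orbital period.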

A close look at a star's brightness distribution

The limb of a star, the edge of the stellar disk, plays a decisive role in the interpretation of its light curve. Just as in the case of the Sun, the limb appears darker to the observer than the inner area. However, the star does not actually shine less brightly further out. "As the star is a sphere and its surface curved, we look into higher and therefore cooler layers at the limb than in the center," explains coauthor and MPS-Director Prof. Dr. Laurent Gizon. "This area therefore appears darker to us," he adds.

Limb darkening is known to affect the exact shape of the exoplanet's signal in the light curve: the dimming determines how steeply the brightness of a star falls at the start of a planetary transit and how steeply it rises again at the end. However, it has not been possible to reproduce observational data accurately using conventional models of the stellar atmosphere: the measured decrease in brightness was always less abrupt than the model calculations suggested. "It was clear that we were missing a crucial piece of the puzzle to precisely understand the exoplanets' signal," says MPS-Director Prof. Dr. Sami Solanki, coauthor of the current study.
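
For context, the conventional (non-magnetic) models mentioned here often describe limb darkening with simple parametric laws, such as the quadratic law that is standard in the exoplanet literature (shown for illustration; the new work replaces such prescriptions with magnetized atmosphere models):

\[
\frac{I(\mu)}{I(1)} = 1 - u_1 (1 - \mu) - u_2 (1 - \mu)^2
\]

Here \(\mu\) is the cosine of the angle between the line of sight and the local surface normal (\(\mu = 1\) at disk center, \(\mu \to 0\) at the limb), and the coefficients \(u_1\) and \(u_2\) are fitted to stellar atmosphere models or to the transit data themselves.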

Magnetic field is the missing piece of the puzzle

As the calculations published today show, the missing piece of the puzzle is the stellar magnetic field. Like the Sun, many stars generate a magnetic field deep in their interior through enormous flows of hot plasma. For the first time, the researchers were able to include this magnetic field in their models of limb darkening. They showed that its strength has an important effect: limb darkening is pronounced in stars with a weak magnetic field, while it is weaker in those with a strong magnetic field.

The researchers were also able to show that the discrepancy between observational data and model calculations disappears if the star's magnetic field is included in the computations. To this end, the team turned to selected data from NASA's Kepler Space Telescope, which captured the light of thousands and thousands of stars from 2009 to 2018. In a first step, the scientists modeled the atmosphere of typical Kepler stars in the presence of a magnetic field. In a second step, they generated "artificial" observational data from these calculations. A comparison showed that including the magnetic field successfully reproduces the real Kepler data.

The team also extended its considerations to data from the James Webb Space Telescope. The telescope is able to split the light of distant stars into its various wavelengths and thus search for the characteristic signs of certain molecules in the atmosphere of the discovered planets. As it turns out, the magnetic field of the parent star influences the stellar limb darkening differently at different wavelengths -- and should therefore be taken into account in future evaluations in order to achieve even more precise results.

From telescopes to models

"In the past decades and years, the way to move forward in exoplanet research was to improve the hardware, the space telescopes designed to search for and characterize new worlds. The James Webb Space Telescope has pushed this development to new limits," says Dr. Alexander Shapiro, coauthor of the current study and head of an ERC-funded research group at the MPS. "The next step is now to improve and refine the models to interpret this excellent data," he adds.

Read more at Science Daily

Africa's iconic flamingos threatened by rising lake levels

It is one of the world's most spectacular sights -- huge flocks or "flamboyances" of flamingos around East Africa's lakes -- as seen in the film Out of Africa or David Attenborough's A Perfect Planet.

But new research led by King's College London has revealed how the lesser flamingo is in danger of being flushed out of its historic feeding grounds, with serious consequences for the future of the species.

For the first time, satellite earth observation data has been used to study all the key flamingo feeding lakes in Ethiopia, Kenya and Tanzania over two decades, and the analysis identified how rising water levels are reducing the birds' main food source.

The authors warn the birds are likely to be pushed into new unprotected areas in the search for food, especially given predicted higher levels of rainfall linked to climate change.

They are now calling for coordinated conservation action across international borders, improved monitoring and more sustainable management of land surrounding important flamingo lakes.

Lead author Aidan Byrne, a PhD student jointly supervised by King's College London and the Natural History Museum, said the region was home to more than three quarters of the global population of lesser flamingos, but their numbers are declining.

"Lesser flamingos in East Africa are increasingly vulnerable, particularly with increased rainfall predicted for the region under climate change. Without improved lake monitoring and catchment management practices, the highly specialised species found in soda lake ecosystems -- including lesser flamingos -- could be lost," he said.

The study, published in the journal Current Biology, is the first in which satellite earth observation data has been used to study all 22 key flamingo feeding soda lakes in East Africa. This analysis was combined with climate records and bird observation data over more than two decades.

By working at this large scale, researchers were able for the first time to see changing food availability across the whole network of lakes, including significant declines in recent years, and how bird numbers decreased as lake surface area increased. They also identified the lakes the birds might move to in the future.

Co-author Dr Emma Tebbs, from King's College London, said whilst flamingos naturally travel in search of food, the degradation of their historic feeding and breeding sites was a serious concern.

"East African populations could potentially move north or south away from the equator in search of food resources. And whilst six study lakes increased in habitat suitability from 2010 to 2022, only three of those have some level of conservation protection.

"Increases in water levels could lead to lesser flamingos becoming more reliant on lakes that are unprotected, outside of current nature reserves and protected sites, which has implications for conservation and ecotourism revenues."

Soda lakes are some of the harshest environments on Earth, being both highly saline and very alkaline. Despite this, many species have evolved to thrive in these conditions, including the flamingos and their phytoplankton prey, which the birds filter from the water using their sieve-like beaks.

The research found rising water levels across the region's soda lakes were diluting their normally salty and alkaline nature, leading to a decline in populations of phytoplankton, which was measured by the amount of a photosynthetic pigment called chlorophyll-a present in the lakes.

The team found that phytoplankton levels have been declining over the 23 years of study and linked this to increases in the surface areas of the lakes over the same period.

The largest losses in phytoplankton biomass occurred in the equatorial Kenyan lakes, notably at the important tourist lakes Bogoria, Nakuru and Elmenteita, and in the northern Tanzanian lakes that saw the largest increases in surface area.

Nakuru is one of the most important flamingo feeding lakes in East Africa, historically supporting over one million birds at a time. The lake increased in surface area by 91% from 2009 to 2022 whilst its mean chlorophyll-a concentrations halved.

Natron in Tanzania is the only regular breeding site for lesser flamingos in East Africa and it has experienced declining productivity alongside rising water levels in recent years. If phytoplankton biomass continues to decline there and at other nearby feeding lakes, it will no longer be a suitable breeding site.

Read more at Science Daily

What's quieter than a fish? A school of them

Swimming in schools makes fish surprisingly stealthy underwater, with a group able to sound like a single fish.

The new findings by Johns Hopkins University engineers, working with a high-tech simulation of schooling mackerel, offer new insight into why fish swim in schools and hold promise for the design and operation of much quieter submarines and autonomous undersea vehicles.

"It's widely known that swimming in groups provides fish with added protection from predators, but we questioned whether it also contributes to reducing their noise," said senior author Rajat Mittal.

"Our results suggest that the substantial decrease in their acoustic signature when swimming in groups, compared to solo swimming, may indeed be another factor driving the formation of fish schools."

The work is newly published in Bioinspiration & Biomimetics.

The team created a 3D model based on the common mackerel to simulate different numbers of fish swimming, changing up their formations, how close they swam to one another, and the degree to which their movements were synchronized.

The model, which applies to many fish species, simulates one to nine mackerel being propelled forward by their tail fins.

The team found that a school of fish moving together in just the right way was stunningly effective at noise reduction: A school of seven fish sounded like a single fish.

"A predator, such as a shark, may perceive it as hearing a lone fish instead of a group," Mittal said.

"This could have significant implications for prey fish."

The single biggest key to sound reduction, the team found, was the synchronization of the school's tail flapping -- or actually the lack thereof.

If fish moved in unison, flapping their tail fins at the same time, the sound added up and there was no reduction in total sound.

But if they alternated tail flaps, the fish canceled out each other's sound, the researchers found.

"Sound is a wave," Mittal said. "Two waves can either add up if they are exactly in phase or they can cancel each other if they are exactly out of phase. That's kind of what's happening here though we're talking about faint sounds that would barely be audible to a human."

The tail fin movements that reduce sound also generate flow interactions between the fish that allow them to swim faster while using less energy, said lead author Ji Zhou, a Johns Hopkins graduate student studying mechanical engineering.

"We find that reduction in flow-generated noise does not have to come at the expense of performance," Zhou said.

"We found cases where significant reductions in noise are accompanied by noticeable increases in per capita thrust, due to the hydrodynamic interactions between the swimmers."

The team was surprised to find that the sound reduction benefits kick in as soon as one swimming fish joins another.

Noise reduction grows as more fish join a school, but the team expects the benefits to level off at some point.

"Simply being together and swimming in any manner contributes to reducing the sound signature," Mittal said.

"No coordination between the fish is required."

Read more at Science Daily

Apr 12, 2024

Twinkle twinkle baby star, 'sneezes' tell us how you are

Kyushu University researchers have shed new light on a critical question about how baby stars develop. Using the ALMA radio telescope in Chile, the team found that in its infancy, the protostellar disk that surrounds a baby star discharges plumes of dust, gas, and electromagnetic energy. These 'sneezes,' as the researchers describe them, release the magnetic flux within the protostellar disk, and may be a vital part of star formation. Their findings were published in The Astrophysical Journal.

Stars, including our Sun, all develop from what are called stellar nurseries, large concentrations of gas and dust that eventually condense to form a stellar core, a baby star.

During this process, gas and dust form a ring around the baby star called the protostellar disk.

"These structures are perpetually penetrated by magnetic fields, which brings with it magnetic flux. However, if all this magnetic flux were retained as the star developed, it would generate magnetic fields many orders of magnitude stronger than those observed in any known protostar," explains Kazuki Tokuda of Kyushu University's Faculty of Sciences and first author of the study.

For this reason, researchers have hypothesized that there is a mechanism during star development that would remove that magnetic flux.

The prevailing view was that the magnetic field gradually weakens over time as the cloud is pulled into the stellar core.

To get to the bottom of this mysterious phenomenon, the team set their sights on MC 27, a stellar nursery located approximately 450 light-years from Earth.

Observations were collected using the ALMA array, a collection of 66 high-precision radio telescopes constructed 5,000 meters above sea level in northern Chile.

"As we analyzed our data, we found something quite unexpected. There were these 'spike-like' structures extending a few astronomical units from the protostellar disk. As we dug in deeper, we found that these were spikes of expelled magnetic flux, dust, and gas," continues Tokuda.

"This is a phenomenon called 'interchange instability' where instabilities in the magnetic field react with the different densities of the gases in the protostellar disk, resulting in an outward expelling of magnetic flux. We dubbed this a baby star's 'sneeze' as it reminded us of when we expel dust and air at high speeds."

Additionally, other spikes were observed several thousand astronomical units away from the protostellar disk.

The team hypothesized that these were indications of other 'sneezes' in the past.

The team expects their findings will improve our understanding of the intricate processes that shape the universe -- processes that continue to captivate the interest of both the astronomical community and the public.

Read more at Science Daily

Nanoscale movies shed light on one barrier to a clean energy future

Left unchecked, corrosion can rust out cars and pipes, take down buildings and bridges, and eat away at our monuments.

Corrosion can also damage devices that could be key to a clean energy future. And now, Duke University researchers have captured extreme close-ups of that process in action.

"By studying how and why renewable energy devices break down over time, we might be able to extend their lifetime," said chemistry professor and senior author Ivan Moreno-Hernandez.

In his lab at Duke sits a miniature version of one such device. Called an electrolyzer, it separates hydrogen out of water, using electricity to power the reaction.

When the electricity to power electrolysis comes from renewable sources such as wind or solar, the hydrogen gas it churns out is considered a promising source of clean fuel, because it takes no fossil fuels to produce and it burns without creating any planet-warming carbon dioxide.

A number of countries have plans to scale up their production of so-called "green hydrogen" to help curb their dependence on fossil fuels, particularly in industries like steel- and cement-making.

But before hydrogen can go mainstream, some big obstacles need to be overcome.

Part of the trouble is that electrolyzers require rare metal catalysts to function, and these are prone to corrosion. They're not the same after a year of operation as they were in the beginning.

In a study published April 10 in the Journal of the American Chemical Society, Moreno-Hernandez and his Ph.D. student Avery Vigil used a technique called liquid phase transmission electron microscopy to study the complex chemical reactions that go on between these catalysts and their environment that cause them to decay.

You might remember from high school that to make hydrogen gas, an electrolyzer splits water into its constituent hydrogen and oxygen molecules. For the current study, the team focused on a catalyst called ruthenium dioxide that speeds up the oxygen half of the reaction, since that's the bottleneck in the process.
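
For reference, the textbook chemistry involved (standard electrochemistry, not specific to this study) pairs the oxygen-evolving half-reaction that ruthenium dioxide catalyzes at the anode with hydrogen evolution at the cathode:

\[
\text{anode: } 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
\qquad
\text{cathode: } 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2}
\]

\[
\text{overall: } 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\]

The four-electron anode reaction is the sluggish step, which is why the oxygen half is described as the bottleneck.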

"We essentially put these materials through a stress test," Vigil said.

They zapped nanocrystals of ruthenium dioxide with high-energy radiation, and then watched the changes wrought by the acidic environment inside the cell.

To take pictures of such tiny objects, they used a transmission electron microscope, which shoots a beam of electrons through nanocrystals suspended inside a super-thin pocket of liquid to create time-lapse images of the chemistry taking place at 10 frames per second.

The result: desktop-worthy close-ups of virus-sized crystals, more than a thousand times finer than a human hair, as they get oxidized and dissolve into the acidic liquid around them.

"We're actually able to see the process of this catalyst breaking down with nanoscale resolution," Moreno-Hernandez said.

Over the course of five minutes, the crystals broke down fast enough to "render a real device useless in a matter of hours," Vigil said.

Zooming in hundreds of thousands of times, the videos reveal subtle defects in the crystals' 3D shapes that create areas of strain, causing some to break down faster than others.

By minimizing such imperfections, the researchers say it could one day be possible to design renewable energy devices that last two to three times longer than they currently do.

Read more at Science Daily

First step to untangle DNA: Supercoiled DNA captures gyrase like a lasso ropes cattle

Picture in your mind a traditional "landline" telephone with a coiled cord connecting the handset to the phone. The coiled telephone cord and the DNA double helix that stores the genetic material in every cell in the body have one thing in common; they both supercoil, or coil about themselves, and tangle in ways that can be difficult to undo. In the case of DNA, if this overwinding is not dealt with, essential processes such as copying DNA and cell division grind to a halt. Fortunately, cells have an ingenious solution to carefully regulate DNA supercoiling.

In this study published in the journal Science, researchers at Baylor College of Medicine, Université de Strasbourg, Université Paris Cité and collaborating institutions reveal how DNA gyrase resolves DNA entanglements. The findings not only provide novel insights into this fundamental biological mechanism but also have potential practical applications. Gyrases are biomedical targets for the treatment of bacterial infections and the similar human versions of the enzymes are targets for many anti-cancer drugs. Better understanding of how gyrases work at the molecular level can potentially improve clinical treatments.

Some DNA supercoiling is essential to make DNA accessible to allow the cell to read and make copies of the genetic information, but either too little or too much supercoiling is detrimental. For example, the act of copying and reading DNA overwinds it ahead of the enzymes that read and copy the genetic code, interrupting the process. It's long been known that DNA gyrase plays a role in untangling the overwinding, but the details were not clear.

DNA minicircles and advanced imaging techniques reveal first step to untangle DNA

"We typically picture DNA as the straight double helix structure, but inside cells, DNA exists in supercoiled loops. Understanding the molecular interactions between the supercoils and the enzymes that participate in DNA functions has been technically challenging, so we typically use linear DNA molecules instead of coiled DNA to study the interactions," said study author Dr. Lynn Zechiedrich, Kyle and Josephine Morrow Chair in Molecular Virology and Microbiology and professor of theVerna and Marrs McLean Department of Biochemistry and Molecular Pharmacology at Baylor College of Medicine. "One goal of our laboratory has been to study these interactions using a DNA structure that more closely mimics the actual supercoiled and looped DNA form present in living cells."

After years of work, the Zechiedrich lab has created small loops of supercoiled DNA. In essence, they took the familiar straight linear DNA double helix and twisted it in either direction once, twice, three times or more and connected the ends together to form a loop. Their previous study looking at the 3-D structures of the resulting supercoiled minicircles revealed that these loops form a variety of shapes that they hypothesized enzymes such as gyrase would recognize.

In the current study, their hypothesis was proven correct. The team of researchers combined their expertise to study the interactions of DNA gyrase with DNA minicircles using recent technology advances in electron cryomicroscopy, an imaging technique that produces high-resolution 3-D views of large molecules, and other technologies.

"My lab has long been interested in understanding how molecular nanomachines operate in the cell. We have been studying DNA gyrases, very large enzymes that regulate DNA supercoiling," said co-corresponding author Dr. Valérie Lamour, associate professor at the Institut de Génétique et de Biologie Moléculaire et Cellulaire, Université de Strasbourg. "Among other functions, supercoiling is the cell's way of confining about 2 meters (6.6 feet) of linear DNA into the microscopic nucleus of the cell."

As the DNA supercoils inside the nucleus, it twists and folds in different forms. Imagine twisting that telephone cord mentioned at the beginning, several times on itself. It will overwind and form a loop by crossing over DNA chains, tightening the structure.

"We found, just as we had hypothesized, that gyrase is attracted to the supercoiled minicircle and places itself in the inside of this supercoiled loop," said co-author, Dr. Jonathan Fogg, senior staff scientist of molecular virology and microbiology, and biochemistry and molecular pharmacology in the Zechiedrich lab.

"This is the first step of the mechanism that prompts the enzyme for resolving DNA entanglements," Lamour said.

"DNA gyrase, now surrounded by a tightly supercoiled loop, will cut one DNA helix in the loop, pass the other DNA helix through the cut in the other, and reseal the break, which relaxes the overwinding and eases the tangles, regulating DNA supercoiling to control DNA activity," Zechiedrich said. "Imagine watching the rodeo. Like roping cattle with a lasso, supercoiled looped DNA captures gyrase in the first step. Gyrase then cuts one double-helix of the DNA lasso and passes the other helix through the break to get free."

Co-corresponding author Dr. Marc Nadal, professor at the École Normale in Paris, confirmed the observed path of the DNA wrapped in the loop around gyrase using magnetic tweezers, a biophysical technique that makes it possible to measure the deformation and fluctuations in the length of a single DNA molecule. Observing a single molecule provides information that is often obscured when looking at thousands of molecules in traditional so-called "ensemble" experiments in a test tube.

Interestingly, the "DNA strand inversion model" for gyrase activity was proposed in 1979 by Drs. Patrick O. Brown and the late Nicholas R. Cozzarelli, also in a Science paper, well before researchers had access to supercoiled minicircles or the 3-D molecular structure of the enzyme. "It's especially meaningful to me that 45 years later, we finally provide experimental evidence supporting their hypothesis because Nick was my postdoctoral mentor," Zechiedrich said.

"This work opens a myriad of perspectives to study the mechanism of this conserved class of enzymes, which are of great clinical value," Lamour said.

Read more at Science Daily

Parkinson's Disease: New theory on the disease's origins and spread

The nose or the gut? For the past two decades, the scientific community has debated the wellspring of the toxic proteins at the source of Parkinson's disease. In 2003, a German pathologist, Heiko Braak, MD, first proposed that the disease begins outside the brain. More recently, Per Borghammer, MD, with Aarhus University Hospital in Denmark, and his colleagues argue that the disease is the result of processes that start in either the brain's smell center (brain-first) or the body's intestinal tract (body-first).

A new hypothesis paper appearing in the Journal of Parkinson's Disease on World Parkinson's Day unites the brain- and body-first models with some of the likely causes of the disease -- environmental toxicants that are either inhaled or ingested. The authors of the new study, who include Borghammer, argue that inhalation of certain pesticides, common dry cleaning chemicals, and air pollution predisposes to a brain-first model of the disease. Other ingested toxicants, such as tainted food and contaminated drinking water, lead to a body-first model of the disease.

"In both the brain-first and body-first scenarios the pathology arises in structures in the body closely connected to the outside world," said Ray Dorsey, MD, a professor of Neurology at the University of Rochester Medical Center and co-author of the piece. "Here we propose that Parkinson's is a systemic disease and that its initial roots likely begin in the nose and in the gut and are tied to environmental factors increasingly recognized as major contributors, if not causes, of the disease. This further reinforces the idea that Parkinson's, the world's fastest growing brain disease, may be fueled by toxicants and is therefore largely preventable."

Different pathways to the brain, different forms of disease

A misfolded protein called alpha-synuclein has been in scientists' sights for the last 25 years as one of the driving forces behind Parkinson's. Over time, the protein accumulates in the brain in clumps, called Lewy bodies, and causes progressive dysfunction and death of many types of nerve cells, including those in the dopamine-producing regions of the brain that control motor function. When he first proposed his model, Braak thought that an unidentified pathogen, such as a virus, might be responsible for the disease.

The new piece argues that toxins encountered in the environment, specifically the dry cleaning and degreasing chemicals trichloroethylene (TCE) and perchloroethylene (PCE), the weed killer paraquat, and air pollution, could be common causes for the formation of toxic alpha-synuclein. TCE and PCE contaminate thousands of former industrial, commercial, and military sites, most notably the Marine Corps base Camp Lejeune, and paraquat is one of the most widely used herbicides in the US, despite being banned for safety concerns in more than 30 countries, including the European Union and China. Air pollution was at toxic levels in nineteenth century London when James Parkinson, whose 269th birthday is celebrated today, first described the condition.

The nose and the gut are lined with a soft permeable tissue, and both have well established connections to the brain. In the brain-first model, the chemicals are inhaled and may enter the brain via the nerve responsible for smell. From the brain's smell center, alpha-synuclein spreads to other parts of the brain principally on one side, including regions with concentrations of dopamine-producing neurons. The death of these cells is a hallmark of Parkinson's disease. The disease may cause asymmetric tremor and slowness in movement, a slower rate of progression after diagnosis, and, only much later, significant cognitive impairment or dementia.

When ingested, the chemicals pass through the lining of the gastrointestinal tract. Initial alpha-synuclein pathology may begin in the gut's own nervous system from where it can spread to both sides of the brain and spinal cord. This body-first pathway is often associated with Lewy body dementia, a disease in the same family as Parkinson's, which is characterized by early constipation and sleep disturbance, followed by more symmetric slowing in movements and earlier dementia, as the disease spreads through both brain hemispheres.

New models to understand and study brain diseases

"These environmental toxicants are widespread and not everyone has Parkinson's disease," said Dorsey. "The timing, dose, and duration of exposure and interactions with genetic and other environmental factors are probably key to determining who ultimately develops Parkinson's. In most instances, these exposures likely occurred years or decades before symptoms develop."

Pointing to a growing body of research linking environmental exposure to Parkinson's disease, the authors believe the new models may enable the scientific community to connect specific exposures to specific forms of the disease. This effort will be aided by increasing public awareness of the adverse health effects of many chemicals in our environment. The authors conclude that their hypothesis "may explain many of the mysteries of Parkinson's disease and open the door toward the ultimate goal -- prevention."

In addition to Parkinson's, these models of environmental exposure may advance understanding of how toxicants contribute to other brain disorders, including autism in children, ALS in adults, and Alzheimer's in seniors. Dorsey and his colleagues at the University of Rochester have organized a symposium on the Brain and the Environment in Washington, DC, on May 20 that will examine the role toxicants in our food, water, and air are playing in all these brain diseases.

Read more at Science Daily

Apr 11, 2024

The hidden role of the Milky Way in ancient Egyptian mythology

Ancient Egyptians were known for their religious beliefs and astronomical knowledge of the Sun, Moon, and planets, but up until now it has been unclear what role the Milky Way played in Egyptian religion and culture.

A new study by a University of Portsmouth astrophysicist sheds light on the relationship between the Milky Way and the Egyptian sky-goddess Nut.

Nut is the goddess of the sky, often depicted as a star-studded woman arched over her brother, the earth god Geb.

She protects the earth from being flooded by the encroaching waters of the void, and plays a key role in the solar cycle, swallowing the Sun as it sets at dusk and giving birth to it once more as it rises at dawn.

The paper draws on ancient Egyptian texts and simulations to argue that the Milky Way might have shone a spotlight, as it were, on Nut's role as the sky.

It proposes that in winter, the Milky Way highlighted Nut's outstretched arms, while in summer, it traced her backbone across the heavens.

Associate Professor in Astrophysics, Dr Or Graur, said: "I chanced upon the sky-goddess Nut when I was writing a book on galaxies and looking into the mythology of the Milky Way. I took my daughters to a museum and they were enchanted by this image of an arched woman and kept asking to hear stories about her.

"This sparked my interest and I decided to combine both astronomy and Egyptology to do a double analysis -- astronomical and cross-cultural -- of the sky-goddess Nut, and whether she really could be linked to the Milky Way."

Dr Graur drew from a rich collection of ancient sources including the Pyramid Texts, Coffin Texts, and the Book of Nut and compared them alongside sophisticated simulations of the Egyptian night sky.

He found compelling evidence that the Milky Way highlighted Nut's divine presence.

Furthermore, Dr Graur connected Egyptian beliefs with those of other cultures, showing similarities in how different societies interpret the Milky Way.

He said: "My study also shows that Nut's role in the transition of the deceased to the afterlife and her connection to the annual bird migration are consistent with how other cultures understand the Milky Way. For example, as a spirits' road among different peoples in North and Central America or as the Birds' Path in Finland and the Baltics."

Read more at Science Daily

Tiny plastic particles are found everywhere

It's not the first study on microplastics in Antarctica that researchers from the University of Basel and the Alfred-Wegener Institute (AWI) have conducted. But analysis of the data from an expedition in spring 2021 shows that environmental pollution from these tiny plastic particles is a bigger problem in the remote Weddell Sea than was previously known.

All 17 seawater samples indicated higher concentrations of microplastics than previous studies had found. "The reason for this is the type of sampling we conducted," says Clara Leistenschneider, doctoral candidate in the Department of Environmental Sciences at the University of Basel and lead author of the study.

The current study focused on particles measuring between 11 and 500 micrometers in size. The researchers collected them by pumping water into tanks, filtering it, and then analyzing it using infrared spectroscopy. Previous studies in the region had mostly collected microplastic particles out of the ocean using fine nets with a mesh size of around 300 micrometers. Smaller particles would simply pass through these plankton nets.

The results of the new study indicate that 98.3 percent of the plastic particles present in the water were smaller than 300 micrometers, meaning that they were not collected in previous samples. "Pollution in the Antarctic Ocean goes far beyond what was reported in past studies," Leistenschneider notes. The study appears in the journal Science of the Total Environment.

What role do ocean currents play?

The individual samples were polluted to different extents. The offshore samples, which were collected north of the continental slope and the Antarctic Slope Current, contained the highest concentrations of microplastics. The reasons for this are not conclusively known. It may be that the ice that tends to form near the coast retains the tiny plastic particles, and they are only released back into the water when the ice melts. It could also be the case that ocean currents play a role. "They might work like a barrier, reducing water exchange between the north and south," suggests Gunnar Gerdts from the AWI in Heligoland, Germany.

What is certainly true is that ocean currents are an important factor and the subject of many open questions in the field. So far the researchers have only examined water samples from the ocean surface, but not from lower depths. This is primarily due to limited time on the ship expeditions for taking samples and to equipment with insufficient pumping capacity. "It would nonetheless be revealing to analyze such data, since the deep currents differ greatly from the surface currents and thermohaline circulation leads to exchange with water masses from northern regions," Leistenschneider says.

It is also still unclear how the microplastics make their way to the Weddell Sea in the first place and whether they ever leave the region. The strong Antarctic Circumpolar Current, which flows all the way around the Antarctic Ocean at a latitude of about 60° south, might prevent their departure. The researchers are also not yet able to say conclusively where the microplastics originate. Possible sources include regional ship traffic from the tourism, fishing and research industries, as well as research stations on land. However, the microplastics might also make their way to Antarctica from other regions via ocean currents or atmospheric transport.

Research leads to awareness

Clara Leistenschneider plans to focus next on analyzing the sediment samples she collected during the same expedition. This should provide information about how microplastics are accumulating on the sea floor, which is home to unique and sensitive organisms and is a breeding ground for Antarctic icefish (Bovichtidae).

With the increase in tourism in the Antarctic Ocean, pollution may increase even more in the future, further impacting the environment and the food chain.

Read more at Science Daily

Pacific cities much older than previously thought

New evidence from one of the first cities in the Pacific shows it was established much earlier than previously thought, according to new research from The Australian National University (ANU).

The study used aerial laser scanning to map archaeological sites on the island of Tongatapu in Tonga.

Lead author, PhD scholar Phillip Parton, said the new timeline also indicates that urbanisation in the Pacific was an indigenous innovation that developed before Western influence.

"Earth structures were being constructed in Tongatapu around AD 300. This is 700 years earlier than previously thought," Mr Parton said.

"As settlements grew, they had to come up with new ways of supporting that growing population. This kind of set-up -- what we call low density urbanisation -- sets in motion huge social and economic change. People are interacting more and doing different kinds of work."

Mr Parton said traditionally, studying urbanisation in the Pacific has been tricky due to challenges collecting data, but new technology has changed that.

"We were able to combine high-tech mapping and archaeological fieldwork to understand what was happening in Tongatapu," he said.

"Having this type of information really adds to our understanding of early Pacific societies.

"Urbanisation is not an area that had been investigated much until now. When people think of early cities they usually think of traditional old European cities with compact housing and windy cobblestone streets. This is a very different kind of city.

"But it shows the contribution of the Pacific to urban science. We can see clues that Tongatapu's influence spread across the southwest Pacific Ocean between the 13th and 19th centuries."

According to Mr Parton, the collapse of this kind of low-density urbanisation in Tonga was largely due to the arrival of Europeans.

"It didn't collapse because the system was flawed; it was more to do with the arrival of Europeans and introduced diseases," he said.

Read more at Science Daily

Does the time of day you move your body make a difference to your health?

Undertaking the majority of daily physical activity in the evening is linked to the greatest health benefits for people living with obesity, according to researchers from the University of Sydney, Australia who followed the trajectory of 30,000 people over almost 8 years.

Using wearable device data to categorise participants' physical activity by morning, afternoon or evening, the researchers uncovered that those who did the majority of their aerobic moderate to vigorous physical activity -- the kind that raises our heart rate and gets us out of breath -- between 6pm and midnight had the lowest risk of premature death and death from cardiovascular disease.

The frequency with which people undertook moderate to vigorous physical activity (MVPA) in the evening, measured in short bouts up to or exceeding three minutes, also appeared to be more important than their total amount of physical activity daily.

The study, led by researchers from the University's Charles Perkins Centre, is published in the journal Diabetes Care today.

"Due to a number of complex societal factors, around two in three Australians have excess weight or obesity which puts them at a much greater risk of major cardiovascular conditions such as heart attacks and stroke, and premature death," said Dr Angelo Sabag, Lecturer in Exercise Physiology at the University of Sydney.

"Exercise is by no means the only solution to the obesity crisis, but this research does suggest that people who can plan their activity into certain times of the day may best offset some of these health risks."

Smaller clinical trials have shown similar results; however, the large scale of participant data in this study, the use of objective measures of physical activity, and hard outcomes, such as premature death, make these findings significant.

Joint first author Dr Matthew Ahmadi also stressed that the study did not just track structured exercise. Rather, researchers focused on tracking continuous aerobic MVPA in bouts of 3 minutes or more, as previous research shows a strong association between this type of activity, glucose control and lowered cardiovascular disease risk compared with shorter (non-aerobic) bouts.

"We didn't discriminate on the kind of activity we tracked, it could be anything from power walking to climbing the stairs, but could also include structured exercise such as running, occupational labour or even vigorously cleaning the house," said Dr Ahmadi, National Heart Foundation postdoctoral research fellow at the Charles Perkins Centre, University of Sydney.

While observational, the findings of the study support the authors' original hypothesis, which is the idea -- based on previous research -- that people living with diabetes or obesity, who are already glucose intolerant in the late evening, may be able to offset some of that intolerance and associated complications by doing physical activity in the evening.

The researchers used data from UK Biobank and included 29,836 adults aged over 40 years of age living with obesity, of whom 2,995 participants were also diagnosed with Type 2 diabetes.

Participants were categorised into morning, afternoon or evening MVPA groups based on when they undertook the majority of their aerobic MVPA, as measured by a wrist accelerometer worn continuously for 24 hours a day over 7 days at study onset.
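
The grouping logic can be pictured with a small sketch. The evening window (6pm to midnight) is stated in the article; the other window boundaries and the data layout are hypothetical:

```python
# Assign a participant to the time-of-day group in which most of their
# aerobic MVPA minutes fall. Bouts are (hour_of_day, minutes) pairs.
def mvpa_group(bouts):
    totals = {"morning": 0.0, "afternoon": 0.0, "evening": 0.0}
    for hour, minutes in bouts:
        if 6 <= hour < 12:        # assumed morning window
            totals["morning"] += minutes
        elif 12 <= hour < 18:     # assumed afternoon window
            totals["afternoon"] += minutes
        elif 18 <= hour < 24:     # evening window per the article
            totals["evening"] += minutes
    return max(totals, key=totals.get)

print(mvpa_group([(7, 10), (19, 25), (21, 15)]))  # -> "evening"
```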

The team then linked health data (from the National Health Services and National Records of Scotland) to follow participants' health trajectories for 7.9 years. Over this period they recorded 1,425 deaths, 3,980 cardiovascular events and 2,162 microvascular dysfunction events.

To limit bias, the researchers accounted for differences such as age, sex, smoking, alcohol intake, fruit and vegetable consumption, sedentary time, total MVPA, education, medication use and sleep duration. They also excluded participants with pre-existing cardiovascular disease and cancer.

The researchers say the length of the study follow-up and additional sensitivity analyses bolster the strength of their findings; however, due to the observational design, they cannot completely rule out potential reverse causation -- the possibility that some participants had lower aerobic MVPA levels due to underlying or undiagnosed disease.

Professor Emmanuel Stamatakis, Director of the Mackenzie Wearables Research Hub at the Charles Perkins Centre and senior author on the paper, said the sophistication of studies in the wearables field is providing huge insights into the patterns of activity that are most beneficial for health.

"It is a really exciting time for researchers in this field and practitioners alike, as wearable device-captured data allow us to examine physical activity patterns at a very high resolution and accurately translate findings into advice that could play an important role in health care," said Professor Stamatakis.

Read more at Science Daily

Apr 10, 2024

Study shedding new light on Earth's global carbon cycle could help assess liveability of other planets

Research has uncovered important new insights into the evolution of oxygen, carbon, and other vital elements over the entire history of Earth -- and it could help assess which other planets can develop life, ranging from plants to animals and humans.

The study, published today in Nature Geoscience and led by a researcher at the University of Bristol, reveals for the first time how the build-up of carbon-rich rocks has accelerated oxygen production and its release into the atmosphere.

Until now, the exact nature of how the atmosphere became oxygen-rich had long eluded scientists and generated conflicting explanations.

As carbon dioxide is steadily emitted by volcanoes, it ends up entering the ocean and forming rocks like limestone.

As global stocks of these rocks build up they can then release their carbon during tectonic processes, including mountain building and metamorphism.

Using this knowledge, the scientists built a uniquely sophisticated computer model to more accurately chart key changes in the carbon, nutrient and oxygen cycles deep into Earth’s history, over 4 billion years of the planet’s lifetime.

Lead author and biogeochemist Dr Lewis Alcott, Lecturer in Earth Sciences at the University of Bristol, said: “This breakthrough is important and exciting because it may help us understand how planets, other than Earth, have the potential to support intelligent, oxygen-breathing life.

“Previously we didn’t have a clear idea of why oxygen rose from very low concentrations to present-day concentrations, as computer models haven’t previously been able to accurately simulate all the possible feedbacks together. This has puzzled scientists for decades and created different theories.”

The discovery indicates that older planets that, like Earth, originated billions of years ago may have better prospects of accumulating enough carbon-rich deposits in their crust, which could facilitate rapid recycling of carbon and nutrients for life.

The findings showed this gradual carbon enrichment of the crust results in ever-increasing recycling rates of carbon and various minerals, including the nutrients needed for photosynthesis, the process by which green plants use sunlight to make sugars from carbon dioxide and water.

This cycle therefore steadily speeds up oxygen production over the passage of Earth’s history.

The research, which started whilst Dr Alcott was a Hutchinson Postdoctoral Fellow at Yale University in the United States, paves the way for future work to further unravel the complex interrelationships between planetary temperature, oxygen, and nutrients.

Read more at Science Daily

Microplastic 'hotspots' identified in Long Island Sound

Forensic and environmental experts have teamed up to develop a new scientific method to pinpoint microplastic pollution 'hotspots' in open waters.

A study by Staffordshire University, The Rozalia Project for a Clean Ocean and Central Wyoming College trialled the technique in New York's Long Island Sound.

Professor Claire Gwinnett from Staffordshire University explained: "Long Island Sound was a location of interest because it has lots of factors that can cause pollution.

"It is an estuary that has high populations of wildlife, it is a busy transport route frequented by cargo ships and is a popular fishing area. Located adjacent to New York City, it is also highly populated and a major tourist destination."

Funded, in part, by the National Geographic Society, the study saw samples collected from the deck of the 60′ oceanographic sailing research vessel, American Promise. The team took 1 litre 'grab samples' of surface water every 3 miles from the East River along the middle of Long Island Sound to The Race, where it meets Rhode Island Sound.

Grab sampling allows analysis of specific locations, with the researchers applying a statistical approach to identify hotspots where microplastics were most in evidence.

"People often use the term 'hotspot' but it is not scientifically defined. Previous studies have used largely subjective methods, without the use of any rules or thresholds that differentiate hotspots from non-hotspots," Professor Gwinnett commented.

"Our study proposed a simple yet objective method for determining hotspots using standard deviation values. This is the first time that this has been done."

Two primary and two secondary hotspots were observed, near either end of the sampling area. There is potentially a "bottleneck" effect in the narrower zones or, conversely, a dilution effect in the wider section of Long Island Sound. Similarly, hotspots were observed close to or in line with river mouths, specifically those of the Thames and Connecticut Rivers.

Overlaying heat maps of various types of shipping and vessel traffic with the microparticle heat map from this study shows potential similarities, in particular between areas of high recreational and passenger vessel traffic and areas of higher microplastic concentration.

Professor Gwinnett said: "We need to consider factors that might influence these results, such as population, geography and human use. The identified hotspots, however, were found in both densely populated areas and adjacent to some of the least densely populated land areas surrounding Long Island Sound.

"The first step in combatting this type of pollution is by characterizing microparticle samples so that we can begin to understand where they might have come from."

Overall, 97% of samples contained human-made particulates. Microparticles were classified as 76.14% fibres and 23.86% fragments. Of the fibres, 47.76% were synthetic and 52.24% were non-synthetic.

Forensic science approaches developed by Staffordshire University were used to analyse the microparticles -- including type, colour, shape, material, presence of delusterant and width -- which identified 30 unique categories of potential sources of pollution.

Rachael Miller, Expedition lead and Rozalia Project Founder, explained: "Unlike larger fragments of plastic, which may exhibit clear features that easily identify their original source, such as bottle cap ridges or a partial logo, this is generally very difficult for microparticles unless an analysis approach that fully characterizes the particle is used.

"Identifying a specific type of item from which a microparticle came from e.g. pair of jeans, carpet, tyre or personal hygiene product increases the likelihood of discovering the mechanism for transport to the environment. That, in turn, increases opportunities to prevent a subset of microplastic pollution."

The authors are now calling for reference databases of potential pollutants of waterways. PhD researcher Amy Osbourne specialises in forensic fibre analysis at Staffordshire University after progressing from the undergraduate degree in Forensic Investigation.

She said: "We cannot confidently identify the sources of pollution without being able to cross reference samples against large, easily searched known provenance databases. Such databases are already used in forensic science when identifying sources of evidence found at crime scenes.

"For example, we might begin with a database of all the different types of fishing nets or tarpaulins that we know are commonly used in areas like Long Island Sound."

Read more at Science Daily

Researchers discover how we perceive bitter taste

Humans can sense five different tastes: sour, sweet, umami, bitter, and salty, using specialized sensors on our tongues called taste receptors. Other than allowing us to enjoy delicious foods, the sensation of taste allows us to determine the chemical makeup of food and prevents us from consuming toxic substances.

Researchers at the UNC School of Medicine, including Bryan Roth, MD, PhD, the Michael Hooker Distinguished Professor of Pharmacology, and Yoojoong Kim, PhD, a postdoctoral researcher in the Roth Lab, recently set out to address one very basic question: "How exactly do we perceive bitter taste?"

A new study, published in Nature, reveals the detailed protein structure of the TAS2R14 bitter taste receptor. In addition to solving the structure of this taste receptor, the researchers were also able to determine where bitter-tasting substances bind to TAS2R14 and how they activate it, allowing us to taste bitter substances.

"Scientists know very little about the structural make up of sweet, bitter, and umami taste receptors," said Kim. "Using a combination of biochemical and computational methods, we now know the structure of the bitter taste receptor TAS2R14 and the mechanisms that initializes the sensation of bitter taste in our tongues."

This detailed information is important for discovering and designing drug candidates that can directly regulate taste receptors, with the potential to treat metabolic diseases such as obesity and diabetes.

From Chemicals to Electricity to Sensation

TAS2R14s are members of the G protein-coupled receptor (GPCR) family of bitter taste receptors. The receptors are attached to a protein known as a G protein. TAS2R14 stands out from the others in its family because it can identify more than 100 distinct substances known as bitter tastants.

Researchers found that when bitter tastants come into contact with TAS2R14 receptors, the chemicals wedge themselves into a specific spot on the receptor called an allosteric site. This causes the protein to change its shape, activating the attached G protein.

This triggers a series of biochemical reactions within the taste receptor cell, leading to activation of the receptor, which can then send signals to tiny nerve fibers -- through the cranial nerves in the face -- to an area of the brain called the gustatory cortex. It is here where the brain processes and perceives the signals as bitterness. And of course, this complex signaling system occurs almost instantaneously.

Cholesterol's Role in Bitter Taste Reception

While working to define its structure, researchers found another unique feature of TAS2R14 -- that cholesterol is giving it a helping hand in its activation.

"Cholesterol was residing in another binding site called the orthosteric pocket in TAS2R14, while the bitter tastant binds to the allosteric site," said Kim. "Through molecular dynamics simulations, we also found that the cholesterol puts the receptor in a semi-active state, so it can be easily activated by the bitter tastant."

Bile acids, which are created in the liver, have chemical structures similar to cholesterol. Previous studies have suggested that bile acids can bind and activate TAS2R14, but little is known about how and where they bind in the receptor.

Using their newfound structure, researchers found that bile acids might be binding to the same orthosteric pocket as cholesterol. While the exact role of bile acid or cholesterol in TAS2R14 remains unknown, it may play a role in the metabolism of these substances or in relation to metabolic disorders such as obesity or diabetes.

How This Can Help Drug Development

The allosteric binding site the team discovered for bitter-tasting substances is itself unusual.

The allosteric site sits at the interface between TAS2R14 and the subunit of its coupled G protein known as G-protein alpha. This region is critical for forming the signaling complex that passes the signal from the taste receptor to the G protein and onward through the taste receptor cell.

"In the future, this structure will be key to discovering and designing drug candidates that can directly regulate G proteins through the allosteric sites," said Kim. "We also have the ability to affect specific G-protein subtypes, like G-protein alpha or G-protein beta, rather than other G-protein pathways that we don't want to cause any other side effects."

Roth and Kim have made a number of new discoveries, but some leave more questions than answers. While running a genomics study, they found that the TAS2R14 protein in complex with the Gi protein is expressed outside the tongue, particularly in the cerebellum of the brain, the thyroid, and the pancreas. The researchers are planning future studies to elucidate the functions these proteins may have outside the mouth.

Read more at Science Daily

Revolutionary molecular device unleashes potential for targeted drug delivery and self-healing materials

In a new breakthrough that could revolutionise medical and material engineering, scientists have developed a first-of-its-kind molecular device that controls the release of multiple small molecules using force.

The researchers from The University of Manchester describe a force-controlled release system that harnesses natural forces to trigger targeted release of molecules, which could significantly advance medical treatment and smart materials.

The discovery, published today in the journal Nature, introduces a novel technique based on a type of interlocked molecule known as a rotaxane.

Under the influence of mechanical force -- such as that found at an injured or damaged site -- this component triggers the release of functional molecules, like medicines or healing agents, that precisely target the area in need: the site of a tumour, for example.

It also holds promise for self-healing materials that repair themselves in situ when damaged -- a scratch on a phone screen, for example -- prolonging the lifespan of these materials.

Guillaume De Bo, Professor of Organic Chemistry at The University of Manchester, said: "Forces are ubiquitous in nature and play pivotal roles in various processes. Our aim was to exploit these forces for transformative applications, particularly in material durability and drug delivery.

"Although this is only a proof-of-concept design, we believe that our rotaxane-based approach holds immense potential with far reaching applications -- we're on the brink of some truly remarkable advancements in healthcare and technology."

Traditionally, force-controlled release systems have struggled to release more than one molecule at a time, typically operating through a molecular "tug of war" in which two polymers pull on either side to free a single molecule.

The new approach instead attaches two polymer chains to a central ring-like structure that slides along a cargo-bearing axle, releasing multiple cargo molecules in response to an applied force.

The scientists demonstrated the release of up to five molecules simultaneously with the possibility of releasing more, overcoming previous limitations.

The breakthrough marks the first time scientists have demonstrated the release of more than one component from a single mechanical trigger, making it one of the most efficient force-controlled release systems to date.

The researchers also demonstrated the versatility of the system by loading it with different types of molecules, including drug compounds, fluorescent markers, catalysts, and monomers, revealing the potential for a wealth of future applications.

Looking ahead, the researchers aim to delve deeper into self-healing applications, exploring whether two different types of molecules can be released at the same time.

For example, the integration of monomers and catalysts could enable polymerization at the site of damage, creating an integrated self-healing system within materials.

They will also look to expand the sort of molecules that can be released.

Read more at Science Daily

Apr 9, 2024

Telescope detects unprecedented behavior from nearby magnetar

Researchers using Murriyang, CSIRO's Parkes radio telescope, have detected unusual radio pulses from a previously dormant star with a powerful magnetic field.

New results published today in Nature Astronomy describe radio signals from magnetar XTE J1810-197 behaving in complex ways.

Magnetars are a type of neutron star and the strongest magnets in the Universe.

At roughly 8,000 light years away, this magnetar is also the closest known to Earth.

Most magnetars are known to emit polarised light; the light from this one, however, is circularly polarised, appearing to spiral as it travels through space.

Dr Marcus Lower, a postdoctoral fellow at Australia's national science agency, CSIRO, led the latest research and said the results are unexpected and totally unprecedented.

"Unlike the radio signals we've seen from other magnetars, this one is emitting enormous amounts of rapidly changing circular polarisation. We had never seen anything like this before," Dr Lower said.

Dr Manisha Caleb from the University of Sydney and co-author on the study said studying magnetars offers insights into the physics of intense magnetic fields and the environments these create.

"The signals emitted from this magnetar imply that interactions at the surface of the star are more complex than previous theoretical explanations."

Detecting radio pulses from magnetars is already extremely rare: XTE J1810-197 is one of only a handful known to produce them.

While it's not certain why this magnetar is behaving so differently, the team has an idea.

"Our results suggest there is a superheated plasma above the magnetar's magnetic pole, which is acting like a polarising filter," Dr Lower said.

"How exactly the plasma is doing this is still to be determined."

XTE J1810-197 was first observed to emit radio signals in 2003.

Then it went silent for well over a decade. The signals were again detected by the University of Manchester's 76-m Lovell telescope at the Jodrell Bank Observatory in 2018 and quickly followed up by Murriyang, which has been crucial to observing the magnetar's radio emissions ever since.

The 64-m diameter telescope on Wiradjuri Country is equipped with a cutting-edge ultra-wide bandwidth receiver.

The receiver was designed by CSIRO engineers who are world leaders in developing technologies for radio astronomy applications.

The receiver allows for more precise measurements of celestial objects, especially magnetars, as it is highly sensitive to changes in brightness and polarisation across a broad range of radio frequencies.

Read more at Science Daily

Climate change threatens Antarctic meteorites

Using artificial intelligence, satellite observations, and climate model projections, a team of researchers from Switzerland and Belgium calculate that for every tenth of a degree of increase in global air temperature, an average of nearly 9,000 meteorites disappear from the surface of the ice sheet. This loss has major implications, as meteorites are unique samples of extraterrestrial bodies that provide insights into the origin of life on Earth and the formation of the Moon.

Disappearing at an alarming rate

By 2050, about a quarter of the estimated 300,000 to 800,000 meteorites in Antarctica will be lost to glacial melt.

By the end of the century, researchers anticipate that loss could grow to roughly three-quarters of the meteorites on the continent under a high-warming scenario.

Published in the journal Nature Climate Change, Harry Zekollari co-led the study while working under Professor Daniel Farinotti in the Laboratory of Hydraulics, Hydrology and Glaciology at the Department of Civil, Environmental and Geomatic Engineering at ETH Zurich.

Zekollari and co-lead Veronica Tollenaar, of the Université Libre de Bruxelles, reveal in the study that ongoing warming results in the loss of about 5,000 meteorites a year, outpacing the collection of Antarctic meteorites by a factor of five.
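As a rough illustration of the arithmetic behind these figures, the Python sketch below linearly extrapolates the study's headline rate of about 9,000 meteorites lost per 0.1 degrees Celsius of warming across the estimated inventory of 300,000 to 800,000 meteorites. The warming levels fed in are hypothetical, not the study's climate scenarios.

    # Back-of-the-envelope projection of Antarctic meteorite loss, using only
    # the figures quoted in the article. Warming levels are placeholders.
    LOSS_PER_TENTH_DEGREE = 9_000        # meteorites lost per 0.1 degC of warming
    INVENTORY = (300_000, 800_000)       # estimated meteorites on the ice sheet

    def projected_loss(warming_deg_c: float) -> int:
        """Linear extrapolation of meteorites lost for a given warming."""
        return round(warming_deg_c / 0.1 * LOSS_PER_TENTH_DEGREE)

    for warming in (1.5, 2.5, 4.0):      # hypothetical global warming, degC
        lost = projected_loss(warming)
        low, high = (min(1.0, lost / n) for n in reversed(INVENTORY))
        print(f"+{warming} degC: ~{lost:,} lost ({low:.0%}-{high:.0%} of inventory)")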

Meteorites -- time capsules of the universe

Zekollari, now an Associate Professor of Glaciology at Vrije Universiteit Brussel, calls for a major international effort to preserve the scientific value of meteorites: "We need to accelerate and intensify efforts to recover Antarctic meteorites. The loss of Antarctic meteorites is much like the loss of data that scientists glean from ice cores collected from vanishing glaciers -- once they disappear, so do some of the secrets of the universe."

Meteorites are fragments from space that provide unique information about our solar system.

Antarctica is the most prolific place to find meteorites, and to date, about 60 percent of all meteorites ever found on Earth have been collected from the surface of the Antarctic ice sheet.

The flow of the ice sheet concentrates meteorites in so-called "meteorite stranding zones," where their dark crust allows them to be easily detected.

In addition to intensifying recovery operations, there is potential to increase the efficiency of meteorite recovery missions in the short term.

This potential relies mainly on data-driven analysis to identify unexplored meteorite stranding zones and mapping areas exposing blue ice where meteorites are often found.

Extraterrestrial heritage slipping away

Due to their dark colour, meteorites heat up more readily than the surrounding ice.

As this heat transfers from the meteorite to the ice, it can locally melt the ice, causing the meteorite to sink beneath the surface of the ice sheet.

Once the meteorites enter the ice sheet, even at shallow depths, they cannot be detected anymore, and they are thus lost for science.

As atmospheric temperatures increase, so does the surface temperature of the ice, intensifying the loss.

"Even when temperatures of the ice are well below zero, the dark meteorites warm-up so much in the sun that they can melt the ice directly beneath the meteorite. Through this process, the warm meteorite creates a local depression in the ice and over time fully disappears under the surface," says Tollenaar.

Read more at Science Daily

Toothed whale echolocation organs evolved from jaw muscles

Genetic analysis finds evidence suggesting that acoustic fat bodies in the heads of toothed whales were once the muscles and bone marrow of the jaw.

Dolphins and whales use sound to communicate, navigate and hunt.

New research suggests that the collections of fatty tissue that enable toothed whales to do so may have evolved from their skull muscles and bone marrow.

Scientists at Hokkaido University sequenced the genes expressed in acoustic fat bodies -- the collections of fat around the head that toothed whales use for echolocation.

They measured gene expression in the harbor porpoise (Phocoena phocoena) and Pacific white-sided dolphin (Lagenorhynchus obliquidens). Their findings were published in the journal Gene.

The evolution of acoustic fat bodies in the head -- the melon in the whale forehead, extramandibular fat bodies (EMFB) alongside the jawbone, and intramandibular fat bodies (IMFB) within the jawbone -- was essential for sound use such as echolocation.

However, little is known about the genetic origins of those fatty tissues.

"Toothed whales have undergone significant degenerations and adaptations to their aquatic lifestyle," said Hayate Takeuchi, a PhD student at Hokkaido University's Hayakawa Lab and first author of the study.

One adaptation was the partial loss of their senses of smell and taste, along with the gain of echolocation, which enables them to navigate underwater.

The researchers found that genes which are normally associated with muscle function and development were active in the melon and EMFBs.

There was also evidence of an evolutionary connection between the extramandibular fat and the masseter muscle, which in humans connects the lower jawbone to the cheekbones and is a key muscle involved in chewing.

"This study has revealed that the evolutionary tradeoff of masticatory muscles for the EMFB -- between auditory and feeding ecology -- was crucial in the aquatic adaptation of toothed whales," said Assistant Professor Takashi Hayakawa of the Faculty of Environmental Earth Science, who led the study.

"It was part of the evolutionary shift away from chewing to simply swallowing food, which meant the chewing muscles were no longer needed."

Analysis of gene expression in the intramandibular fat detected activity of genes related to immune functions, such as the activation of some elements of the immune response and regulation of T cell formation.

Read more at Science Daily

Engineers design soft and flexible 'skeletons' for muscle-powered robots

Our muscles are nature's perfect actuators -- devices that turn energy into motion. For their size, muscle fibers are more powerful and precise than most synthetic actuators. They can even heal from damage and grow stronger with exercise.

For these reasons, engineers are exploring ways to power robots with natural muscles. They've demonstrated a handful of "biohybrid" robots that use muscle-based actuators to power artificial skeletons that walk, swim, pump, and grip. But for every bot, there's a very different build, and no general blueprint for how to get the most out of muscles for any given robot design.

Now, MIT engineers have developed a spring-like device that could be used as a basic skeleton-like module for almost any muscle-bound bot. The new spring, or "flexure," is designed to get the most work out of any attached muscle tissues. Like a leg press that's fit with just the right amount of weight, the device maximizes the amount of movement that a muscle can naturally produce.

The researchers found that when they fit a ring of muscle tissue onto the device, much like a rubber band stretched around two posts, the muscle pulled on the spring reliably and repeatedly, stretching it five times farther than in previous device designs.

The team sees the flexure design as a new building block that can be combined with other flexures to build any configuration of artificial skeletons. Engineers can then fit the skeletons with muscle tissues to power their movements.

"These flexures are like a skeleton that people can now use to turn muscle actuation into multiple degrees of freedom of motion in a very predictable way," says Ritu Raman, the Brit and Alex d'Arbeloff Career Development Professor in Engineering Design at MIT. "We are giving roboticists a new set of rules to make powerful and precise muscle-powered robots that do interesting things."

Raman and her colleagues report the details of the new flexure design in a paper appearing in the journal Advanced Intelligent Systems. The study's MIT co-authors include Naomi Lynch '12, SM '23; undergraduate Tara Sheehan; graduate students Nicolas Castro, Laura Rosado, and Brandon Rios; and professor of mechanical engineering Martin Culpepper.

Muscle pull

When left alone in a petri dish in favorable conditions, muscle tissue will contract on its own but in directions that are not entirely predictable or of much use.

"If muscle is not attached to anything, it will move a lot, but with huge variability, where it's just flailing around in liquid," Raman says.

To get a muscle to work like a mechanical actuator, engineers typically attach a band of muscle tissue between two small, flexible posts. As the muscle band naturally contracts, it can bend the posts and pull them together, producing some movement that would ideally power part of a robotic skeleton. But in these designs, muscles have produced limited movement, mainly because the tissues are so variable in how they contact the posts. Depending on where the muscles are placed on the posts, and how much of the muscle surface is touching the post, the muscles may succeed in pulling the posts together but at other times may wobble around in uncontrollable ways.

Raman's group looked to design a skeleton that focuses and maximizes a muscle's contractions regardless of exactly where and how it is placed on a skeleton, to generate the most movement in a predictable, reliable way.

"The question is: How do we design a skeleton that most efficiently uses the force the muscle is generating?" Raman says.

The researchers first considered the multiple directions that a muscle can naturally move. They reasoned that if a muscle is to pull two posts together along a specific direction, the posts should be connected to a spring that only allows them to move in that direction when pulled.

"We need a device that is very soft and flexible in one direction, and very stiff in all other directions, so that when a muscle contracts, all that force gets efficiently converted into motion in one direction," Raman says.

Soft flex

As it turns out, Raman found many such devices in Professor Martin Culpepper's lab. Culpepper's group at MIT specializes in the design and fabrication of machine elements such as miniature actuators, bearings, and other mechanisms that can be built into machines and systems to enable ultraprecise movement, measurement, and control for a wide variety of applications. Among the group's precision-machined elements are flexures -- spring-like devices, often made from parallel beams, that can flex and stretch with nanometer precision.

"Depending on how thin and far apart the beams are, you can change how stiff the spring appears to be," Raman says.

She and Culpepper teamed up to design a flexure specifically tailored with a configuration and stiffness to enable muscle tissue to naturally contract and maximally stretch the spring. The team designed the device's configuration and dimensions based on numerous calculations they carried out to relate a muscle's natural forces with a flexure's stiffness and degree of movement.

The flexure they ultimately designed is 1/100 the stiffness of muscle tissue itself. The device resembles a miniature, accordion-like structure, the corners of which are pinned to an underlying base by a small post, which sits near a neighboring post that is fit directly onto the base. Raman then wrapped a band of muscle around the two corner posts (the team molded the bands from live muscle fibers that they grew from mouse cells), and measured how close the posts were pulled together as the muscle band contracted.
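A minimal spring model makes the stiffness intuition concrete: treat the muscle as a force generator whose pull drops as it shortens, and the flexure as a Hooke's-law spring; the posts settle where the two forces balance. The parameter values below are illustrative assumptions, not numbers from the study, which reports only the roughly 1/100 stiffness ratio.

    # Spring-model sketch of why a soft flexure extracts more motion from a
    # muscle. All values are illustrative assumptions, not study data.
    F0 = 0.5    # N, assumed muscle force at zero shortening
    K_M = 1.0   # N/mm, assumed muscle stiffness (pull falls as it shortens)

    def post_displacement(k_flexure: float) -> float:
        """Equilibrium displacement where muscle pull balances the flexure.

        Muscle:  F = F0 - K_M * x   (force drops with shortening)
        Flexure: F = k_flexure * x  (Hooke's law)
        Setting them equal gives x = F0 / (K_M + k_flexure).
        """
        return F0 / (K_M + k_flexure)

    print(post_displacement(K_M))        # flexure as stiff as muscle: 0.25 mm
    print(post_displacement(K_M / 100))  # 1/100 as stiff: ~0.50 mm, twice the motion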

The team found that the flexure's configuration enabled the muscle band to contract mostly along the direction between the two posts. This focused contraction allowed the muscle to pull the posts much closer together -- five times closer -- compared with previous muscle actuator designs.

"The flexure is a skeleton that we designed to be very soft and flexible in one direction, and very stiff in all other directions," Raman says. "When the muscle contracts, all the force is converted into movement in that direction. It's a huge magnification."

The team found they could use the device to precisely measure muscle performance and endurance. When they varied the frequency of muscle contractions (for instance, stimulating the bands to contract once versus four times per second), they observed that the muscles "grew tired" at higher frequencies, and didn't generate as much pull.

"Looking at how quickly our muscles get tired, and how we can exercise them to have high-endurance responses -- this is what we can uncover with this platform," Raman says.

The researchers are now adapting and combining flexures to build precise, articulated, and reliable robots, powered by natural muscles.

Read more at Science Daily

Apr 8, 2024

First results from DESI make the most precise measurement of our expanding universe

With 5,000 tiny robots in a mountaintop telescope, researchers can look 11 billion years into the past. The light from far-flung objects in space is just now reaching the Dark Energy Spectroscopic Instrument (DESI), enabling us to map our cosmos as it was in its youth and trace its growth to what we see today. Understanding how our universe has evolved is tied to how it ends, and to one of the biggest mysteries in physics: dark energy, the unknown ingredient causing our universe to expand faster and faster.

To study dark energy's effects over the past 11 billion years, DESI has created the largest 3D map of our cosmos ever constructed, with the most precise measurements to date. This is the first time scientists have measured the expansion history of the young universe with a precision better than 1%, giving us our best view yet of how the universe evolved. Researchers shared the analysis of their first year of collected data in multiple papers that will be posted today on the arXiv and in talks at the American Physical Society meeting in the United States and the Rencontres de Moriond in Italy.

"We're incredibly proud of the data, which have produced world-leading cosmology results and are the first to come out of the new generation of dark energy experiments," said Michael Levi, DESI director and a scientist at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), which manages the project. "So far, we're seeing basic agreement with our best model of the universe, but we're also seeing some potentially interesting differences that could indicate that dark energy is evolving with time. Those may or may not go away with more data, so we're excited to start analyzing our three-year dataset soon."

Our leading model of the universe is known as Lambda CDM. It includes both a weakly interacting type of matter (cold dark matter, or CDM) and dark energy (Lambda). Both matter and dark energy shape how the universe expands -- but in opposing ways. Matter and dark matter slow the expansion down, while dark energy speeds it up. The amount of each influences how our universe evolves. This model does a good job of describing results from previous experiments and how the universe looks throughout time.

However, when DESI's first-year results are combined with data from other studies, there are some subtle differences with what Lambda CDM would predict. As DESI gathers more information during its five-year survey, these early results will become more precise, shedding light on whether the data are pointing to different explanations for the results we observe or the need to update our model. More data will also improve DESI's other early results, which weigh in on the Hubble constant (a measure of how fast the universe is expanding today) and the mass of particles called neutrinos.

"No spectroscopic experiment has had this much data before, and we're continuing to gather data from more than a million galaxies every month," said Nathalie Palanque-Delabrouille, a Berkeley Lab scientist and co-spokesperson for the experiment. "It's astonishing that with only our first year of data, we can already measure the expansion history of our universe at seven different slices of cosmic time, each with a precision of 1 to 3%. The team put in a tremendous amount of work to account for instrumental and theoretical modeling intricacies, which gives us confidence in the robustness of our first results."

DESI's overall precision on the expansion history across all 11 billion years is 0.5%, and the most distant epoch, covering 8-11 billion years in the past, has a record-setting precision of 0.82%. That measurement of our young universe is incredibly difficult to make. Yet within one year, DESI has become twice as powerful at measuring the expansion history at these early times as its predecessor (the Sloan Digital Sky Survey's BOSS/eBOSS), which took more than a decade.

"We are delighted to see cosmology results from DESI's first year of operations," said Gina Rameika, associate director for High Energy Physics at DOE. "DESI continues to amaze us with its stellar performance and is already shaping our understanding of the universe."

Traveling back in time

DESI is an international collaboration of more than 900 researchers from over 70 institutions around the world. The instrument was constructed and is operated with funding from the DOE Office of Science, and sits atop the U.S. National Science Foundation's Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory, a program of NSF's NOIRLab.

Looking at DESI's map, it's easy to see the underlying structure of the universe: strands of galaxies clustered together, separated by voids with fewer objects. Our very early universe, well beyond DESI's view, was quite different: a hot, dense soup of subatomic particles moving too fast to form stable matter like the atoms we know today. Among those particles were hydrogen and helium nuclei, collectively called baryons.

Tiny fluctuations in this early ionized plasma caused pressure waves, moving the baryons into a pattern of ripples that is similar to what you'd see if you tossed a handful of gravel into a pond. As the universe expanded and cooled, neutral atoms formed and the pressure waves stopped, freezing the ripples in three dimensions and increasing clustering of future galaxies in the dense areas. Billions of years later, we can still see this faint pattern of 3D ripples, or bubbles, in the characteristic separation of galaxies -- a feature called Baryon Acoustic Oscillations (BAOs).

Researchers use the BAO measurements as a cosmic ruler. By measuring the apparent size of these bubbles, they can determine distances to the matter responsible for this extremely faint pattern on the sky. Mapping the BAO bubbles both near and far lets researchers slice the data into chunks, measuring how fast the universe was expanding at each time in its past and modeling how dark energy affects that expansion.
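To make the cosmic-ruler idea concrete, the sketch below computes the angle the BAO scale subtends at a few redshifts in a flat Lambda-CDM cosmology, using theta(z) = r_d / D_M(z), where D_M is the comoving distance. The parameter values are standard textbook numbers, not DESI's fitted results.

    # BAO "standard ruler" sketch in flat Lambda-CDM. Parameters are textbook
    # values (assumed), not DESI's measurements.
    import numpy as np
    from scipy.integrate import quad

    C = 299_792.458   # speed of light, km/s
    H0 = 67.5         # Hubble constant, km/s/Mpc (assumed)
    OMEGA_M = 0.31    # matter density (assumed); flatness gives OL = 1 - OMEGA_M
    R_D = 147.0       # sound horizon at the drag epoch, Mpc (assumed)

    def hubble(z: float) -> float:
        """Expansion rate H(z), km/s/Mpc."""
        return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M))

    def comoving_distance(z: float) -> float:
        """Line-of-sight comoving distance D_M(z), Mpc."""
        integral, _ = quad(lambda zp: 1.0 / hubble(zp), 0.0, z)
        return C * integral

    for z in (0.5, 1.0, 2.0):
        theta = R_D / comoving_distance(z)   # radians, small-angle limit
        print(f"z={z}: BAO bubbles span ~{np.degrees(theta):.1f} degrees on the sky")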

"We've measured the expansion history over this huge range of cosmic time with a precision that surpasses all of the previous BAO surveys combined," said Hee-Jong Seo, a professor at Ohio University and the co-leader of DESI's BAO analysis. "We're very excited to learn how these new measurements will improve and alter our understanding of the cosmos. Humans have a timeless fascination with our universe, wanting to know both what it is made of and what will happen to it."

Using galaxies to measure the expansion history and better understand dark energy is one technique, but it can only reach so far. At a certain point, light from typical galaxies is too faint, so researchers turn to quasars, extremely distant, bright galactic cores with black holes at their centers. Light from quasars is absorbed as it passes through intergalactic clouds of gas, enabling researchers to map the pockets of dense matter and use them the same way they use galaxies -- a technique known as using the "Lyman-alpha forest."

"We use quasars as a backlight to basically see the shadow of the intervening gas between the quasars and us," said Andreu Font-Ribera, a scientist at the Institute for High Energy Physics (IFAE) in Spain who co-leads DESI's Lyman-alpha forest analysis. "It lets us look out further to when the universe was very young. It's a really hard measurement to do, and very cool to see it succeed."

Researchers used 450,000 quasars, the largest set ever collected for these Lyman-alpha forest measurements, to extend their BAO measurements all the way out to 11 billion years in the past. By the end of the survey, DESI plans to map 3 million quasars and 37 million galaxies.

State-of-the-art science

DESI is the first spectroscopic experiment to perform a fully "blinded analysis," which conceals the true result from the scientists to avoid any subconscious confirmation bias. Researchers work in the dark with modified data, writing the code to analyze their findings. Once everything is finalized, they apply their analysis to the original data to reveal the actual answer.
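The logic of a blinded analysis fits in a few lines of code. In this toy sketch (not DESI's actual blinding scheme), the entire pipeline is developed on data shifted by a hidden offset, which is removed exactly once after the analysis is frozen.

    # Toy blinded-analysis workflow (illustrative only, not DESI's scheme).
    import numpy as np

    rng = np.random.default_rng(seed=42)       # seed held by a "blinding czar"
    _HIDDEN_OFFSET = rng.uniform(-0.05, 0.05)  # unknown to the analysts

    def blind(values: np.ndarray) -> np.ndarray:
        """Shifted data on which all pipeline development happens."""
        return values + _HIDDEN_OFFSET

    def unblind(values: np.ndarray) -> np.ndarray:
        """Remove the offset only once the analysis choices are frozen."""
        return values - _HIDDEN_OFFSET

    true_data = rng.normal(1.0, 0.01, size=1_000)
    blinded = blind(true_data)
    # ... all cuts, fits, and validation are written against `blinded` ...
    final = unblind(blinded)                   # the reveal, performed exactly once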

"The way we did the analysis gives us confidence in our results, and particularly in showing that the Lyman-alpha forest is a powerful tool for measuring the universe's expansion," said Julien Guy, a scientist at Berkeley Lab and the co-lead for processing information from DESI's spectrographs. "The dataset we are collecting is exceptional, as is the rate at which we are gathering it. This is the most precise measurement I have ever done in my life."

DESI's data will be used to complement future sky surveys such as the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope, and to prepare for a potential upgrade to DESI (DESI-II) that was recommended in a recent report by the U.S. Particle Physics Project Prioritization Panel.

"We are in the golden era of cosmology, with large-scale surveys ongoing and about to be started, and new techniques being developed to make the best use of these datasets," said Arnaud de Mattia, a researcher with the French Alternative Energies and Atomic Energy Commission (CEA) and co-leader of DESI's group interpreting the cosmological data. "We're all really motivated to see whether new data will confirm the features we saw in our first-year sample and build a better understanding of the dynamics of our universe."

Read more at Science Daily

Experiencing extreme weather predicts support for policies to mitigate effects of climate change

Most Americans report having personally experienced the effects of extreme weather, according to new survey data from the Annenberg Public Policy Center that finds support for pro-environmental government policies meant to lessen the effects of climate change.

More than 6 in 10 people favor increased investment in energy-efficient public transit and an equal number support providing tax credits to families who install rooftop solar or battery storage, according to the nationally representative panel survey, fielded in November 2023 with over 1,500 U.S. adults.

Two-thirds of U.S. adults say that in the past year their typical daily activities were affected either sometimes, often, or frequently by extreme outdoor heat, and half say that their typical daily activities were affected sometimes, often, or frequently by poor air quality resulting from wildfire smoke.

Importantly, an analysis finds a connection between these reported experiences and policy support: exposure to extreme weather is associated with support for a half-dozen policies intended to mitigate the effects of climate change, policies that are contained in the Inflation Reduction Act of 2022.

Annenberg opens new Climate Communication division

The findings were released at an opening session of the Society of Environmental Journalists' (SEJ) 33rd annual conference, #SEJ2024, which was held at the University of Pennsylvania. Penn's Annenberg Public Policy Center (APPC) hosted the group in celebration of the Penn Center for Science, Sustainability, and the Media. APPC director Kathleen Hall Jamieson released the findings at the SEJ conference on April 3, 2024.

"We've traditionally assumed that experiencing a threat will affect policy preferences," Jamieson said. "In this polarized time, on this polarized topic, that assumption holds true. People who report exposure to extreme weather are more supportive of measures to help address climate change."

Jamieson also announced that APPC, now celebrating its 30th anniversary, is marking the occasion with the creation of a Climate Communication division, led by Annenberg School for Communication vice dean and professor Emily Falk, who heads a communication neuroscience lab at Penn. The new climate division joins APPC's Communication Science and Institutions of Democracy divisions, which are headed, respectively, by Penn Integrates Knowledge Professor Dolores Albarracín and political science Professor Matt Levendusky.

"This moves the policy center into an important new area in which communication plays a crucial role," Jamieson said.

Experiencing extreme weather

APPC's survey, the 17th wave of a nationally representative panel of 1,538 U.S. adults, finds that millions of Americans report that extreme weather affected their daily lives over the past year (subtotals may not sum exactly due to rounding):

  •     Temperature: Over 4 in 10 (45%) say temperatures in their local area were warmer than usual last summer.
  •     Heat: Two-thirds (68%) say extreme outdoor heat either sometimes (34%), often (19%), or frequently (16%) affected their typical daily activities.
  •     Smoke: Half (50%) say poor air quality resulting from wildfire smoke either sometimes (31%), often (12%), or frequently (7%) affected their typical daily activities.
  •     Flooding: 29% say flooding produced by unusual levels of rain either sometimes (20%), often (6%), or frequently (3%) affected their typical daily activities.
  •     Tornado/hurricane: 19% say a tornado or hurricane either sometimes (13%), often (4%), or frequently (1%) affected their typical daily activities.

Support for pro-environment measures

More than half of Americans strongly or somewhat favor a series of government steps designed to mitigate the effects of climate change. Although these steps were not identified as such in the survey, these measures are contained in the Inflation Reduction Act of 2022, which was passed by the 117th Congress and signed into law by President Joe Biden on Aug. 16, 2022.

Support for these government initiatives varied widely by party affiliation and was driven by Democrats, who expressed strong support for all. Support by Republicans was much weaker.

In these findings, "favor" includes strongly favor and somewhat favor. The survey found that:

  •     62% favor increased investment in energy-efficient public transit.
  •         86% Democrats, 44% independents, 42% Republicans
  •     62% favor tax credits for rooftop solar or battery storage.
  •         80% Democrats, 52% independents, 46% Republicans
  •     60% favor community grants to protect against impacts of climate change.
  •         85% Democrats, 50% independents, 36% Republicans
  •     57% favor forgivable loans for rural communities improving energy efficiency.
  •         78% Democrats, 43% independents, 38% Republicans
  •     56% favor taxing corporations based on carbon emissions to reduce climate change.
  •         81% Democrats, 41% independents, 33% Republicans
  •     46% favor tax credits for electric cars.
  •         71% Democrats, 29% independents, 26% Republicans

The initiative that garnered the most support ("strongly favor") was community grants to protect against impacts of climate change (27%). The initiative that had the greatest opposition ("strongly oppose") was tax credits for electric cars (18%). The policy with the strongest Democratic support was energy-efficient public transit (86%), while the one with the strongest Republican support was tax credits for rooftop solar or battery storage (46%).

Extreme weather exposure associated with policy support

A regression analysis of the survey data by APPC research analyst Shawn Patterson Jr. finds that reported exposure to extreme weather is associated with greater support for policies that address the effects of climate change. This support extends to both parties -- Republicans who report experiencing extreme weather are more supportive of these policies than those who do not, and the same holds true for Democrats.
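For readers curious about the method, the sketch below shows the general shape of such an analysis on synthetic data: policy support is regressed on weather exposure and party, and a positive exposure coefficient mirrors the association APPC reports. The variable names, codings, and coefficients are invented, not APPC's actual survey fields.

    # Illustrative regression on synthetic survey-like data (not APPC's data).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1_500
    exposure = rng.integers(0, 4, n)    # 0 = never ... 3 = frequently (invented coding)
    republican = rng.integers(0, 2, n)  # crude party indicator (invented)

    # Simulate support rising with exposure within both parties.
    support = (0.4 + 0.08 * exposure - 0.25 * republican
               + rng.normal(0, 0.15, n)).clip(0, 1)

    X = sm.add_constant(np.column_stack([exposure, republican]))
    fit = sm.OLS(support, X).fit()
    print(fit.params)  # positive exposure coefficient ~ the association reported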

Read more at Science Daily

Fans are not a magic bullet for beating the heat!

A new study by researchers at the University of Ottawa throws cold water on the idea that fans can effectively cool you down during extremely hot weather events.

With severe heat waves becoming more frequent due to climate change, there's a growing need for safe and accessible ways to keep people cool, especially vulnerable populations like older adults.

Fans are often recommended as cheap and easy solutions, but this study suggests they might not be as helpful as previously thought.

The research was led by post-doctoral fellow Robert Meade and was conducted at the Human and Environmental Physiology Research Unit at the University of Ottawa, a unit led by Dr. Glen Kenny, professor of physiology at the Faculty of Health Sciences.

"Fans do improve sweat evaporation, but this effect is not strong enough to significantly lower your body's internal temperature when it's already really hot (above 33-35°C). In older adults, who may have a reduced ability to sweat, fans provide even less cooling benefits," explains Meade.

"In fact, even in younger adults, fans only provide a small fraction of the cooling power of air conditioning."

The study recommends that health organizations continue to advise against relying on fans during extreme heat events, especially for older adults and other groups at higher risk of heat stroke and other adverse health outcomes during heat waves.

Instead, the emphasis should be on providing access to alternative cooling solutions, such as air conditioning, and on exploring ways to make these options more accessible and environmentally friendly.

The research was conducted using "human heat balance" modeling techniques developed in 2015.

By extending these models to estimate core temperature under a range of conditions and modeling assumptions, the authors were able to compare the expected effects of fan use across a wide variety of scenarios.

"Results from the 116,640 alternative models we produced in sensitivity analyses indicated that fans likely do not significantly reduce core temperature in high heat, or match air conditioning cooling. Comparisons with more advanced modeling techniques and laboratory heat wave simulations supported this conclusion," adds Meade.

Fans are good at providing air circulation and may work in moderate temperatures but are not as effective in extreme heat.

Read more at Science Daily