Apr 1, 2023

AI algorithm unblurs the cosmos

The cosmos would look a lot better if Earth's atmosphere weren't photobombing it all the time.

Even images obtained by the world's best ground-based telescopes are blurry due to the atmosphere's shifting pockets of air. While seemingly harmless, this blur obscures the shapes of objects in astronomical images, sometimes leading to error-filled physical measurements that are essential for understanding the nature of our universe.

Now researchers at Northwestern University and Tsinghua University in Beijing have unveiled a new strategy to fix this issue. The team adapted a well-known computer-vision algorithm used for sharpening photos and, for the first time, applied it to astronomical images from ground-based telescopes. The researchers also trained the artificial intelligence (AI) algorithm on data simulated to match the Vera C. Rubin Observatory's imaging parameters, so, when the observatory opens next year, the tool will be instantly compatible.

While astrophysicists already use technologies to remove blur, the adapted AI-driven algorithm works faster and produces more realistic images than current technologies. The resulting images are blur-free and truer to life. They also are beautiful -- although that's not the technology's purpose.

"Photography's goal is often to get a pretty, nice-looking image," said Northwestern's Emma Alexander, the study's senior author. "But astronomical images are used for science. By cleaning up images in the right way, we can get more accurate data. The algorithm removes the atmosphere computationally, enabling physicists to obtain better scientific measurements. At the end of the day, the images do look better as well."

The research will be published March 30 in the Monthly Notices of the Royal Astronomical Society.

Alexander is an assistant professor of computer science at Northwestern's McCormick School of Engineering, where she runs the Bio Inspired Vision Lab. She co-led the new study with Tianao Li, an undergraduate in electrical engineering at Tsinghua University and a research intern in Alexander's lab.

When light emanates from distant stars, planets and galaxies, it travels through Earth's atmosphere before it hits our eyes. Not only does our atmosphere block out certain wavelengths of light, it also distorts the light that reaches Earth. Even clear night skies still contain moving air that affects light passing through it. That's why stars twinkle and why the best ground-based telescopes are located at high altitudes where the atmosphere is thinnest.

"It's a bit like looking up from the bottom of a swimming pool," Alexander said. "The water pushes light around and distorts it. The atmosphere is, of course, much less dense, but it's a similar concept."

The blur becomes an issue when astrophysicists analyze images to extract cosmological data. By studying the apparent shapes of galaxies, scientists can detect the gravitational effects of large-scale cosmological structures, which bend light on its way to our planet. This can cause an elliptical galaxy to appear rounder or more stretched than it really is. But atmospheric blur smears the image in a way that warps the galaxy shape. Removing the blur enables scientists to collect accurate shape data.

"Slight differences in shape can tell us about gravity in the universe," Alexander said. "These differences are already difficult to detect. If you look at an image from a ground-based telescope, a shape might be warped. It's hard to know if that's because of a gravitational effect or the atmosphere."

To tackle this challenge, Alexander and Li combined an optimization algorithm with a deep-learning network trained on astronomical images. Among the training images, the team included simulated data that matches the Rubin Observatory's expected imaging parameters. The resulting tool produced images with 38.6% less error compared to classic methods for removing blur and 7.4% less error compared to modern methods.
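To make the general idea concrete, here is a minimal sketch of deblurring by optimization: treat the observed image as a sharp image convolved with an atmospheric point-spread function (PSF) and iteratively undo that blur. This is not the authors' code -- their tool pairs an optimizer of this kind with a trained neural network and Rubin-matched simulations -- and everything below (the Gaussian PSF, step size, iteration count, toy "galaxy") is an illustrative assumption.

```python
# Illustrative sketch only: classic iterative deconvolution of atmospheric blur.
# The study's actual tool combines an optimizer like this with a learned
# neural-network prior; all parameters below are assumed for the example.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, fwhm=4.0):
    """Toy point-spread function standing in for atmospheric blur."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def deconvolve(blurred, psf, n_iter=50, step=1.0):
    """Gradient descent on ||psf * x - blurred||^2; a learned prior would slot in here."""
    x = blurred.copy()
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        residual = fftconvolve(x, psf, mode="same") - blurred
        grad = fftconvolve(residual, psf_flipped, mode="same")
        x = np.clip(x - step * grad, 0, None)  # keep flux non-negative
    return x

# Usage: blur a synthetic "galaxy", then try to recover the sharp image.
truth = np.zeros((64, 64))
truth[28:36, 30:34] = 1.0          # toy elongated source
psf = gaussian_psf()
blurred = fftconvolve(truth, psf, mode="same")
restored = deconvolve(blurred, psf)
```

In approaches like the one the study describes, a learned network typically augments or replaces the plain gradient step, encoding what realistic galaxy shapes look like; the sketch above omits that component.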

When the Rubin Observatory officially opens next year, its telescopes will begin a decade-long deep survey across an enormous portion of the night sky. Because the researchers trained the new tool on data specifically designed to simulate Rubin's upcoming images, it will be able to help analyze the survey's highly anticipated data.

For astronomers interested in using the tool, the open-source, user-friendly code and accompanying tutorials are available online.

"Now we pass off this tool, putting it into the hands of astronomy experts," Alexander said. "We think this could be a valuable resource for sky surveys to obtain the most realistic data possible."

Read more at Science Daily

Generating power with blood sugar

In type 1 diabetes, the body does not produce insulin. This means that patients have to obtain the hormone externally to regulate their blood sugar levels. Nowadays, this is mostly done via insulin pumps that are attached directly to the body. These devices, as well as other medical applications such as pacemakers, require a reliable energy supply, which at present is met primarily by power from either single-use or rechargeable batteries.

Now, a team of researchers led by Martin Fussenegger from the Department of Biosystems Science and Engineering at ETH Zurich in Basel has put a seemingly futuristic idea into practice. They have developed an implantable fuel cell that uses excess blood sugar (glucose) from tissue to generate electrical energy. The researchers have combined the fuel cell with artificial beta cells developed by their group several years ago. These produce insulin at the touch of a button and effectively lower blood glucose levels, much like their natural counterparts in the pancreas.

"Many people, especially in the Western industrialised nations, consume more carbohydrates than they need in everyday life," Fussenegger explains. This, he adds, leads to obesity, diabetes and cardiovascular disease. "This gave us the idea of using this excess metabolic energy to produce electricity to power biomedical devices," he says.

Fuel cell in tea bag format

At the heart of the fuel cell is an anode (electrode) made of copper-based nanoparticles, which Fussenegger's team created specifically for this application. The anode splits glucose into gluconic acid and protons to generate electricity, setting an electric circuit in motion.

Wrapped in a nonwoven fabric and coated with alginate, an algae product approved for medical use, the fuel cell resembles a small tea bag that can be implanted under the skin. The alginate soaks up body fluid and allows glucose to pass from the tissue into the fuel cell within.

A diabetes network with its own power supply

In a second step, the researchers coupled the fuel cell with a capsule containing artificial beta cells. These can be stimulated to produce and secrete insulin using electric current or blue LED light. Fussenegger and his colleagues already tested such designer cells some time ago (see ETH News, 8 December 2016).

The system combines sustained power generation and controlled insulin delivery. As soon as the fuel cell registers excess glucose, it starts to generate power. This electrical energy is then used to stimulate the cells to produce and release insulin into the blood. As a result, blood sugar dips to a normal level. Once it falls below a certain threshold value, the production of electricity and insulin stops.

The electrical energy provided by the fuel cell is sufficient not only to stimulate the designer cells but also to enable the implanted system to communicate with external devices such as a smartphone. This allows potential users to adjust the system via a corresponding app. A doctor could also access it remotely and make adjustments. "The new system autonomously regulates insulin and blood glucose levels and could be used to treat diabetes in the future," Fussenegger says.

Read more at Science Daily

Mar 31, 2023

'Taffy galaxies' collide, leave behind bridge of star-forming material

Galaxy collisions are transformative events, largely responsible for driving the evolution of the Universe. The mixing and mingling of stellar material is an incredibly dynamic process that can lead to the formation of molecular clouds populated with newly forming stars. But a head-on collision between the two galaxies UGC 12914 and UGC 12915 some 25-30 million years ago appears to have resulted in a different kind of structure -- a bridge of highly turbulent material spanning the two galaxies. Though this intergalactic bridge is teeming with star-forming material, its turbulent nature is suppressing star formation.

This pair of galaxies, nicknamed the Taffy Galaxies, is located about 180 million light-years away in the direction of the constellation Pegasus.

This new image, captured with Gemini North, one half of the International Gemini Observatory, operated by NSF's NOIRLab, showcases the fascinating feature that gave them their name. A tenuous bridge composed of narrow molecular filaments, shown in brown, and clumps of hydrogen gas, shown in red, can be seen between the two galaxies. Its complex web structure resembles taffy being stretched as the pair slowly separates.

Galaxy collisions can unfold in a variety of scenarios, often involving a larger galaxy and a smaller satellite galaxy. As they drift near one another, the satellite galaxy can attract one of the larger galaxy's primary spiral arms, pulling it out of its orbit. Or the satellite galaxy can intersect with the larger galaxy, causing significant distortions to its own structure. In other cases, a collision can lead to a merger if neither member has enough momentum to continue on after colliding. In all these scenarios, stellar material from both galaxies mixes through a gradual combining and redistribution of gas, like two puddles of liquid slowly bleeding into each other. The resulting collection and compression of the gas can then trigger star formation.

A head-on collision, however, would be more like pouring liquid from two separate cups into a shared bowl. When the Taffy Galaxies collided, their galactic disks and gaseous components smashed right into each other. This resulted in a massive injection of energy into the gas, causing it to become highly turbulent. As the pair emerged from their collision, high-velocity gas was pulled from each galaxy, creating a massive gas bridge between them. The turbulence of the stellar material throughout the bridge is now prohibiting the collection and compression of gas that are required to form new stars.

Read more at Science Daily

A reconstruction of prehistoric temperatures for some of the oldest archaeological sites in North America

Scientists often look to the past for clues about how Earth's landscapes might shift under a changing climate, and for insight into the migrations of human communities through time. A new study offers both by providing, for the first time, a reconstruction of prehistoric temperatures for some of the first known North American settlements.

The study, published in Quaternary Science Reviews, uses new techniques to examine the past climate of Alaska's Tanana Valley. With a temperature record that reaches back 14,000 years, researchers now have a glimpse into the environment that supported humans living at some of the continent's oldest archaeological sites, where mammoth bones are preserved alongside evidence of human occupation. Reconstructing the past environment can help scientists understand the importance of the region for human migration into the Americas.

"When you think about what was happening in the Last Glacial Maximum, all these regions on Earth were super cold, with massive ice sheets, but this area was never fully glaciated," says Jennifer Kielhofer, Ph.D., a paleoclimatologist at DRI and lead author of the study. "We're hypothesizing that if this area was comparatively warm, maybe that would have been an attractive reason to come there and settle."

Kielhofer conducted the research during her doctoral studies at the University of Arizona, and was attracted to the Alaska location because of the wealth of research expertise being focused on the area. She also saw an opportunity to contribute to scientific understanding of a part of the world that is particularly sensitive to global climate change.

"We have to look to the past to try to better constrain how these areas have responded previously," she said, "and how they might respond in the future under climate scenarios that we predict."

Earlier research had relied on coarse temperature records by examining changes in vegetation and pollen. However, this information can only provide a general sense of whether a region was warming or cooling over time. To obtain a more precise history of temperatures, Kielhofer examined soil samples from the archeological sites. Using a technique known as brGDGT paleothermometry, she examined temperature records stored in bacteria to obtain a record of mean annual air temperature above freezing with a precision within about 2.8 degrees Celsius.

"Bacteria are everywhere," she said. "That's great because in areas where you might not have other means of recording or assessing past temperature, you have bacteria. They can preserve for millions of years, so it's a great opportunity to look at pretty much anywhere on Earth."

The results were surprising, she said, because many scientists had previously believed that the region experienced large swings in temperature, which may have contributed to the movement of early humans. But Kielhofer's data showed that temperatures in the Tanana Valley remained fairly stable over time.

"The region wasn't really responding to these global scale climate changes as we might expect," she said. "Because temperatures are really stable through this record, we can't necessarily use temperature as a way to explain changes in human occupation or adaptation through time, as scientists have previously tried to do."

Read more at Science Daily

New, exhaustive study probes hidden history of horses in the American West

A team of international researchers has dug into archaeological records, DNA evidence and Indigenous oral traditions to paint what might be the most exhaustive history of early horses in North America to date. The group's findings show that these beasts of burden may have spread throughout the American West much faster and earlier than many European accounts have suggested.

The researchers, including several scientists from the University of Colorado Boulder, published their findings today in the journal Science.

To tell the stories of horses in the West, the team closely examined about two dozen sets of animal remains found at sites ranging from New Mexico to Kansas and Idaho. The researchers come from 15 countries and multiple Native American groups, including the Lakota, Comanche and Pawnee nations.

"What unites everyone is the shared vision of telling a different kind of story about horses," said William Taylor, a corresponding author of the study and curator of archaeology at the CU Museum of Natural History. "Focusing only on the historical record has underestimated the antiquity and the complexity of Indigenous relationships with horses across a huge swath of the American West."

For many of the scientists involved, the research holds deep personal significance, added Taylor, who grew up in Montana where his grandfather was a rancher.

"We're looking at parts of the country that are extraordinarily important to the people on this project," he said.

The researchers drew on archaeozoology, radiocarbon dating, DNA sequencing and other tools to unearth how and when horses first arrived in various regions of today's United States. Based on the team's calculations, Indigenous communities were likely riding and raising horses as far north as Idaho and Wyoming by at least the first half of the 17th Century -- as much as a century before records from Europeans had suggested.

Groups like the Comanche, in other words, may have begun to form deep bonds with horses mere decades after the animals arrived in the Americas on Spanish boats.

The results line up with a wide range of Indigenous oral histories.

"All this information has come together to tell a bigger, broader, deeper story, a story that natives have always been aware of but has never been acknowledged," said Jimmy Arterberry, co-author of the new study and tribal historian of the Comanche Nation in Oklahoma.

Study co-author Carlton Shield Chief Gover agreed, noting that the love of horses may be one thing that extends across societies and borders.

"People are fascinated by horses. They've grown up with horses," said Shield Chief Gover, a citizen of the Pawnee Nation of Oklahoma and curator for public anthropology at the Indiana University Museum of Archaeology and Anthropology. "We can talk to one another through our shared love of an animal."

Mud Pony

For many Native American communities that shared love goes a long way back.

The Pawnee, for example, tell the story of "Mud Pony," a boy who began seeing visions of strange creatures in his sleep.

"He makes these little mud figurines of these animals he sees in his dreams, and, overnight, they become alive," Shield Chief Gover said. "That's how you get horses."

European historical records from the colonial period, however, have tended to favor a more recent origin story for horses in the West. Many scholars have suggested that Native American communities didn't begin caring for horses until after the Pueblo Revolt of 1680. During this event, Pueblo people in what is today New Mexico temporarily overthrew Spanish rule, releasing European livestock in the process.

Taylor, also an assistant professor of anthropology at CU Boulder, and his colleagues didn't think it fit as an origin story for the relationships between humans and horses in the West: "We thought: There's something fishy about this story."

Clues in bone


With funding from the U.S. National Science Foundation (NSF), they formed an equine dream team that includes archaeologists from the University of Oklahoma and University of New Mexico. Geneticist Ludovic Orlando and Lakota scholar Yvette Running Horse Collin took part from the University of Toulouse.

"This research demonstrates how multiple different types of data can be integrated to address the fascinating historical question of how and when horses spread across the West," said NSF Archaeology program director John Yellen.

The researchers began collecting as much data as they could on horse remains from the West. DNA evidence, for example, suggests that most Indigenous horses had descended from Spanish and Iberian horses, with British horses becoming more common in the 18th and 19th Centuries.

"Our analyses show it was born and raised locally," Taylor said. "It was cared for, and when that animal passed, there was extraordinary significance to that event."

The remains of this horse, along with several others from the study, also seemed to date back to around the turn of the 17th Century, decades before the start of the Pueblo Revolt.

How animals like it arrived in Wyoming isn't clear, but it's likely that Europeans weren't involved in their initial transport.

Shield Chief Gover explained that few Indigenous people will be surprised by the results of the study. But the team's findings may help to illustrate for academic scientists just how important these animals were to the history of Indigenous peoples. The Pawnee, who lived in Nebraska, for example, rode horses on twice-yearly buffalo hunts, traveling farther and faster into the "sea of grass" of the Great Plains. The Comanche also hunted buffalo on horseback, and for them owning many horses was a sign of wealth.

"I don't want to diminish the reverence and the respect we have for horses," Arterberry said. "We see them as gifts the Creator gave us, and, because of that, we survived and thrived and became who we are today."

Respecting horses

Study co-author Chance Ward, a master's student in Museum and Field Studies at CU Boulder, would like to see the archaeology community begin to treat those relationships with more respect. He was born and raised on the Cheyenne River Reservation in South Dakota, which is home to four bands of the Lakota Nation. Ward grew up listening to his mother's childhood stories about riding ponies in the Bear Creek community. His father's parents started a ranch on the reservation where members of the family practice rodeo today.

He explained that many researchers don't handle animal remains with the same care they reserve for cultural objects and human remains.

Read more at Science Daily

Earth prefers to serve life in XXS and XXL sizes

Life comes in all shapes and sizes, but some sizes are more popular than others, new research from the University of British Columbia has found.

In the first study of its kind, published today in PLOS ONE, Dr. Eden Tekwa, who conducted the study as a postdoctoral fellow at UBC's department of zoology, surveyed the body sizes of all Earth's living organisms and uncovered an unexpected pattern. Contrary to what current theories can explain, our planet's biomass -- the material that makes up all living organisms -- is concentrated in organisms at either end of the size spectrum.

"The smallest and largest organisms significantly outweigh all other organisms," said Dr. Tekwa, lead author of "The size of life," and now a research associate with McGill University's department of biology. "This seems like a new and emerging pattern that needs to be explained, and we don't have theories for how to explain it right now. Current theories predict that biomass would be spread evenly across all body sizes."

In addition to challenging our understanding of how life is distributed, these results have important implications for predicting the effects and impacts of climate change. "Body size governs a lot of global processes as well as local processes, including the rate at which carbon gets sequestered, and how the function and stability of ecosystems might be affected by the composition of living things," said Dr. Tekwa. "We need to think about how body size biomass distribution will change under environmental pressures."

"Life constantly amazes us, including the incredible range of sizes that it comes in," says senior author Dr. Malin Pinsky, associate professor in the department of ecology, evolution, and natural resources at Rutgers University. "If the tiniest microbe was the size of the period at the end of this sentence, the largest living organism, a sequoia tree, would be the size of the Panama Canal."

To obtain their results, Dr. Tekwa spent five years compiling and analyzing data about the size and biomass of every type of living organism on the planet -- from tiny one-celled organisms like soil archaea and bacteria to large organisms like blue whales and sequoia trees. They found that the pattern favouring large and small organisms held across all types of species and was more pronounced in land-based organisms than in marine ones. Interestingly, maximum body size seemed to reach the same upper limits across multiple species and environments.

"The largest body sizes appear across multiple species groups, and their maximum body sizes are all within a relatively narrow range," Dr. Tekwa noted. "Trees, grasses, underground fungi, mangroves, corals, fish and marine mammals all have similar maximum body sizes. This might suggest that there is a universal upper size limit due to ecological, evolutionary or biophysical limitations."

Dr. Tekwa was also able to uncover some intriguing details about the distribution of life in various ecosystems. "Even though corals occur in only a small fraction of the ocean, it turns out that they have about the same biomass as all the fish in the ocean," said Dr. Tekwa. "This illustrates how important the balance of biomass is in the oceans. Corals support a lot of fish diversity, so it's really interesting that those two organisms have almost the same biomass."

As for humans, we already know we comprise a relatively small biomass, but our size among all living things reveals our place in the global biome. "We belong to the size range that comprises the highest biomass, which is a relatively large body size," said Dr. Tekwa.

Read more at Science Daily

Mar 30, 2023

Astronomers witness the birth of a very distant cluster of galaxies from the early Universe

Using the Atacama Large Millimeter/submillimeter Array (ALMA), of which ESO is a partner, astronomers have discovered a large reservoir of hot gas in the still-forming galaxy cluster around the Spiderweb galaxy -- the most distant detection of such hot gas yet. Galaxy clusters are some of the largest objects known in the Universe and this result, published today in Nature, further reveals just how early these structures begin to form.

Galaxy clusters, as the name suggests, host a large number of galaxies -- sometimes even thousands. They also contain a vast "intracluster medium" (ICM) of gas that permeates the space between the galaxies in the cluster. This gas in fact considerably outweighs the galaxies themselves. Much of the physics of galaxy clusters is well understood; however, observations of the earliest phases of formation of the ICM remain scarce.

Previously, the ICM had only been studied in fully-formed nearby galaxy clusters. Detecting the ICM in distant protoclusters -- that is, still-forming galaxy clusters -- would allow astronomers to catch these clusters in the early stages of formation. A team led by Luca Di Mascolo, first author of the study and a researcher at the University of Trieste, Italy, was keen to detect the ICM in a protocluster from the early stages of the Universe.

Galaxy clusters are so massive that they can bring together gas that heats up as it falls towards the cluster. "Cosmological simulations have predicted the presence of hot gas in protoclusters for over a decade, but observational confirmation has been missing," explains Elena Rasia, researcher at the Italian National Institute for Astrophysics (INAF) in Trieste, Italy, and co-author of the study. "Pursuing such key observational confirmation led us to carefully select one of the most promising candidate protoclusters." That was the Spiderweb protocluster, located at an epoch when the Universe was only 3 billion years old. Despite being the most intensively studied protocluster, the Spiderweb had until now yielded no clear sign of the ICM. Finding a large reservoir of hot gas in the Spiderweb protocluster would indicate that the system is on its way to becoming a proper, long-lasting galaxy cluster rather than dispersing.

Di Mascolo's team detected the ICM of the Spiderweb protocluster through what's known as the thermal Sunyaev-Zeldovich (SZ) effect. This effect happens when light from the cosmic microwave background -- the relic radiation from the Big Bang -- passes through the ICM. When this light interacts with the fast-moving electrons in the hot gas it gains a bit of energy and its colour, or wavelength, changes slightly. "At the right wavelengths, the SZ effect thus appears as a shadowing effect of a galaxy cluster on the cosmic microwave background," explains Di Mascolo.

By measuring these shadows on the cosmic microwave background, astronomers can therefore infer the existence of the hot gas, estimate its mass and map its shape. "Thanks to its unparalleled resolution and sensitivity, ALMA is the only facility currently capable of performing such a measurement for the distant progenitors of massive clusters," says Di Mascolo.

They determined that the Spiderweb protocluster contains a vast reservoir of hot gas at a temperature of a few tens of millions of degrees Celsius. Previously, cold gas had been detected in this protocluster, but the mass of the hot gas found in this new study outweighs it by thousands of times. This finding shows that the Spiderweb protocluster is indeed expected to turn into a massive galaxy cluster in around 10 billion years, growing its mass by at least a factor of ten.

Tony Mroczkowski, co-author of the paper and researcher at ESO, explains that "this system exhibits huge contrasts. The hot thermal component will destroy much of the cold component as the system evolves, and we are witnessing a delicate transition." He concludes that "it provides observational confirmation of long-standing theoretical predictions about the formation of the largest gravitationally bound objects in the Universe."

Read more at Science Daily

How dogs are used impacts how they are treated

Research into the unique cognitive abilities of dogs often leads to surprises, such as the finding that dogs form mental representations of things they smell, or that they know when their owners do something by accident. However, dog cognition research suffers from the same biases as general psychology: in both fields, studies are usually done in WEIRD (Western, educated, industrialized, rich, and democratic) societies.

Although nearly everything we know about dog-human bonds, dog behavior, and dog cognition comes from WEIRD societies, the majority of dogs in the world live outside of these conditions. To address this bias and form a better understanding of dog-human relationships in societies around the world, a team of researchers from the MPI of Geoanthropology and the MPI for Evolutionary Anthropology assessed data on the functions and treatment of dogs in 124 globally distributed societies.

The researchers found that, across all societies, dogs' functions are a good predictor of how they are treated by their owners. Analysis showed that the more functions dogs have in a society, such as guarding, herding, or hunting, the closer the dog-human relationship is likely to be.

To conduct the study, the researchers investigated ethnographic data from the eHRAF cross-cultural database and identified societies in which dogs serve any of five main functions: hunting, defense, guarding herds, herding, and carrying or transporting supplies. They then collected data on how dogs are treated in those societies and coded it into three dimensions: positive care (e.g. dogs are allowed indoors, dogs receive healthcare, puppies are raised), negative treatment (e.g. dogs are not fed, dogs are physically abused, dogs are regularly culled), and personhood (e.g. dogs are named, dogs are buried and/or mourned, dogs are perceived as family members).

By analyzing the relationship between dog functions and treatment, the researchers showed that the number of functions is positively associated with positive care and personhood and negatively associated with negative treatment. However, they also found that not all of a dog's jobs influence treatment equally. For example, herding is particularly likely to increase positive care, whereas hunting has no impact on positive care or negative treatment, but does increase the odds of personhood. Thus, in societies where dogs are kept for hunting, humans are more likely to name their dogs and perceive them as family members.

Additionally, the study showed that negative treatment and positive care are not mutually exclusive. In fact, of the 77 societies with data for all three dimensions of dog treatment, 32 showed the presence of both positive care and negative treatment. This suggests that the dog-human relationship is not as simple or straightforward as "man's best friend," but involves a complex balance between offering care and minimizing costs.

"Our study adds a systematic test for explaining the cultural drivers that shape the variety of dog-human bonds around the world," says Juliane Bräuer of the Max Planck Institute of Geoanthropology. "This represents a first step into understanding whether the cognitive and social skills associated with dogs are universal or are influenced by the cultural environment the dogs live in."

Read more at Science Daily

New research shows how cultural transmission shapes the evolution of music

New research has found that constraints in the way our brains work can shape the way people interact when creating music, influencing its evolution. The results are published this week [22 March] in the journal Current Biology.

The research team, made up of scientists from the University of Oxford, the University of Cambridge, and the Max Planck Institute for Empirical Aesthetics, used singing experiments to perform the largest ever cultural transmission study on the evolution of music.

Dr Manuel Anglada-Tort, Lecturer at the University of Oxford said: 'Singing is a universal mode of musical communication, practiced by all cultures and ages, even in infants. For most of our history, oral transmission was the main mechanism by which songs were passed down human generations.

'We believe that cross-cultural commonalities and diversities in human song emerged from this transmission process, but thus far it has been difficult to test how oral transmission shapes music evolution.'

The research team developed a novel method to simulate the evolution of music with singing experiments, where sung melodies are passed from one singer to the next. Over time, participants make errors in their efforts to replicate the melodies that they hear, gradually shaping the evolution of music in systematic ways. This approach allowed the researchers to study music evolution in unprecedented detail, quantifying the evolution of 3,424 melodies transmitted across 1,797 participants in the USA and India.
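To illustrate the transmission-chain idea in the simplest possible terms, here is a toy simulation. It is not the study's actual paradigm, which used real singers and recorded audio: each simulated "singer" copies the previous melody with random error plus a small bias that compresses hard-to-sing large pitch intervals, and over generations the melodies drift toward easier structures. All parameters are invented for illustration.

```python
# Toy simulation of a melodic transmission chain (illustrative only, not the
# study's experiment): each generation reproduces the melody with random error
# plus a small bias that shrinks hard-to-sing large pitch intervals.
import random

def reproduce(melody, error_sd=0.5, interval_shrink=0.1):
    """One 'singer' copies a melody (list of pitches in semitones) imperfectly."""
    copy = [melody[0] + random.gauss(0, error_sd)]
    for prev, note in zip(melody, melody[1:]):
        interval = (note - prev) * (1 - interval_shrink)  # bias: large leaps get compressed
        copy.append(copy[-1] + interval + random.gauss(0, error_sd))
    return copy

def transmission_chain(seed_melody, generations=10):
    """Pass a melody down a chain of simulated singers."""
    melodies = [seed_melody]
    for _ in range(generations):
        melodies.append(reproduce(melodies[-1]))
    return melodies

# A random 5-note seed melody drifts toward smaller, easier-to-sing intervals.
seed = [random.uniform(-12, 12) for _ in range(5)]
chain = transmission_chain(seed)
print("seed intervals: ", [round(b - a, 1) for a, b in zip(seed, seed[1:])])
print("final intervals:", [round(b - a, 1) for a, b in zip(chain[-1], chain[-1][1:])])
```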

They found that oral transmission has profound effects on music evolution, revealing the emergence of musical structures that are consistent with widespread musical features observed across world cultures. In several controlled experiments, the researchers found that this happens because, as humans, we are limited in our capacity to produce and process music. For example, musical elements that are difficult to sing, such as large pitch intervals, or difficult to remember, such as unfamiliar melodies, are consistently less likely to survive the transmission process.

Despite the infinite patterns into which music could be combined, the researchers found that, in practice, "human transmission biases" shape vocal music towards those structures that are easier to learn and transmit. They found that individual participant biases, including biological and cognitive factors, are an important bottleneck for evolution by oral transmission.

Dr Nori Jacoby, Research Group Leader at the Max Planck Institute for Empirical Aesthetics, who supervised the study, said: 'This work demonstrates the benefits of combining large-scale online data collection with innovative psychological paradigms to explore cultural transmission processes in unprecedented detail.'

Read more at Science Daily

Deep ocean currents around Antarctica headed for collapse

The deep ocean circulation that forms around Antarctica could be headed for collapse, say scientists.

A decline of this ocean circulation would stagnate the bottom of the oceans and generate further impacts on climate and marine ecosystems for centuries to come.

The results are detailed in a new study coordinated by Scientia Professor Matthew England, Deputy Director of the ARC Centre for Excellence in Antarctic Science (ACEAS) at UNSW Sydney. The work, published today in Nature, includes lead author Dr. Qian Li -- formerly from UNSW and now at the Massachusetts Institute of Technology (MIT) -- as well as co-authors from the Australian National University (ANU) and CSIRO.

Cold water that sinks near Antarctica drives the deepest flow of the overturning circulation -- a network of currents that spans the world's oceans. The overturning carries heat, carbon, oxygen and nutrients around the globe. This influences climate, sea level and the productivity of marine ecosystems.

"Our modelling shows that if global carbon emissions continue at the current rate, then the Antarctic overturning will slow by more than 40 per cent in the next 30 years -- and on a trajectory that looks headed towards collapse," says Prof England.

Modelling the deep ocean

About 250 trillion tonnes of cold, salty, oxygen-rich water sinks near Antarctica each year. This water then spreads northwards and carries oxygen into the deep Indian, Pacific and Atlantic Oceans.

"If the oceans had lungs, this would be one of them," Prof England says.

The international team of scientists modelled the amount of Antarctic deep water produced under the IPCC 'high emissions' scenario until 2050.

The model captures detail of the ocean processes that previous models haven't been able to, including how predictions for meltwater from ice might influence the circulation.

This deep ocean current has remained in a relatively stable state for thousands of years, but with increasing greenhouse gas emissions, Antarctic overturning is predicted to slow down significantly over the next few decades.

Impacts of reduced Antarctic overturning

With a collapse of this deep ocean current, the oceans below 4000 metres would stagnate.

"This would trap nutrients in the deep ocean, reducing the nutrients available to support marine life near the ocean surface," says Prof England.

Co-author Dr Steve Rintoul of CSIRO and the Australian Antarctic Program Partnership says the model simulations show a slowing of the overturning, which then leads to rapid warming of the deep ocean.

"Direct measurements confirm that warming of the deep ocean is indeed already underway," says Dr Rintoul. The study found melting ice around Antarctica makes the nearby ocean waters less dense, which slows the Antarctic overturning circulation. The melt of the Antarctic and Greenland ice sheets is expected to continue to accelerate as the planet warms.

"Our study shows that the melting of the ice sheets has a dramatic impact on the overturning circulation that regulates Earth's climate," says Dr Adele Morrison, also from ACEAS and the ANU Research School of Earth Sciences.

"We are talking about the possible long-term extinction of an iconic water mass," says Prof England.

Read more at Science Daily

Mar 29, 2023

Brightest gamma-ray burst ever observed reveals new mysteries of cosmic explosions

On October 9, 2022, an intense pulse of gamma-ray radiation swept through our solar system, overwhelming gamma-ray detectors on numerous orbiting satellites, and sending astronomers on a chase to study the event using the most powerful telescopes in the world.

The new source, dubbed GRB 221009A for its discovery date, turned out to be the brightest gamma-ray burst (GRB) ever recorded.

In a new study that appears today in the Astrophysical Journal Letters, observations of GRB 221009A spanning from radio waves to gamma-rays, including critical millimeter-wave observations with the Center for Astrophysics | Harvard & Smithsonian's Submillimeter Array (SMA) in Hawaii, shed new light on the decades-long quest to understand the origin of these extreme cosmic explosions.

The gamma-ray emission from GRB 221009A lasted over 300 seconds. Astronomers think that such "long-duration" GRBs are the birth cry of a black hole, formed as the core of a massive and rapidly spinning star collapses under its own weight. The newborn black hole launches powerful jets of plasma at near the speed of light, which pierce through the collapsing star and shine in gamma-rays.

With GRB 221009A being the brightest burst ever recorded, a real mystery lay in what would come after the initial burst of gamma-rays. "As the jets slam into gas surrounding the dying star, they produce a bright 'afterglow' of light across the entire spectrum," says Tanmoy Laskar, assistant professor of physics and astronomy at the University of Utah, and lead author of the study. "The afterglow fades quite rapidly, which means we have to be quick and nimble in capturing the light before it disappears, taking its secrets with it."

As part of a campaign to use the world's best radio and millimeter telescopes to study the afterglow of GRB 221009A, astronomers Edo Berger and Yvette Cendes of the Center for Astrophysics (CfA) rapidly gathered data with the SMA.

"This burst, being so bright, provided a unique opportunity to explore the detailed behavior and evolution of an afterglow with unprecedented detail -- we did not want to miss it!" says Edo Berger, professor of astronomy at Harvard University and the CfA. "I have been studying these events for more than twenty years, and this one was as exciting as the first GRB I ever observed."

"Thanks to its rapid-response capability, we were able to quickly turn the SMA to the location of GRB 221009A," says SMA project scientist and CfA researcher Garrett Keating. "The team was excited to see just how bright the afterglow of this GRB was, which we were able to continue to monitor for more than 10 days as it faded."

After analyzing and combining the data from the SMA and other telescopes all over the world, the astronomers were flummoxed: the millimeter and radio wave measurements were much brighter than expected based on the visible and X-ray light.

"This is one of the most detailed datasets we have ever collected, and it is clear that the millimeter and radio data just don't behave as expected," says CfA research associate Yvette Cendes. "A few GRBs in the past have shown a brief excess of millimeter and radio emission that is thought to be the signature of a shockwave in the jet itself, but in GRB 221009A the excess emission behaves quite differently than in these past cases."

She adds, "It is likely that we have discovered a completely new mechanism to produce excess millimeter and radio waves."

One possibility, says Cendes, is that the powerful jet produced by GRB 221009A is more complex than in most GRBs. "It is possible that the visible and X-ray light are produced by one portion of the jet, while the early millimeter and radio waves are produced by a different component."

"Luckily, this afterglow is so bright that we will continue to study its radio emission for months and maybe years to come," adds Berger. "With this much longer time span we hope to decipher the mysterious origin of the early excess emission."

Independent of the exact details of this particular GRB, the ability to respond rapidly to GRBs and similar events with millimeter-wave telescopes is an essential new capability for astronomers.

Read more at Science Daily

Temperature of a rocky exoplanet measured

An international team of researchers has used NASA's James Webb Space Telescope to measure the temperature of the rocky exoplanet TRAPPIST-1 b. The measurement is based on the planet's thermal emission: heat energy given off in the form of infrared light detected by Webb's Mid-Infrared Instrument (MIRI). The result indicates that the planet's dayside has a temperature of about 500 kelvins (roughly 450 degrees Fahrenheit) and suggests that it has no significant atmosphere.

This is the first detection of any form of light emitted by an exoplanet as small and as cool as the rocky planets in our own solar system. The result marks an important step in determining whether planets orbiting small active stars like TRAPPIST-1 can sustain atmospheres needed to support life. It also bodes well for Webb's ability to characterize temperate, Earth-sized exoplanets using MIRI.

"These observations really take advantage of Webb's mid-infrared capability," said Thomas Greene, an astrophysicist at NASA's Ames Research Center and lead author on the study published today in the journal Nature. "No previous telescopes have had the sensitivity to measure such dim mid-infrared light."

Rocky Planets Orbiting Ultracool Red Dwarfs

In early 2017, astronomers reported the discovery of seven rocky planets orbiting an ultracool red dwarf star (or M dwarf) 40 light-years from Earth. What is remarkable about the planets is their similarity in size and mass to the inner, rocky planets of our own solar system. Although they all orbit much closer to their star than any of our planets orbit the Sun -- all could fit comfortably within the orbit of Mercury -- they receive comparable amounts of energy from their tiny star.

TRAPPIST-1 b, the innermost planet, has an orbital distance about one hundredth that of Earth's and receives about four times the amount of energy that Earth gets from the Sun. Although it is not within the system's habitable zone, observations of the planet can provide important information about its sibling planets, as well as those of other M-dwarf systems.

"There are ten times as many of these stars in the Milky Way as there are stars like the Sun, and they are twice as likely to have rocky planets as stars like the Sun," explained Greene. "But they are also very active - they are very bright when they're young, and they give off flares and X-rays that can wipe out an atmosphere."

Co-author Elsa Ducrot from the French Alternative Energies and Atomic Energy Commission (CEA) in France, who was on the team that conducted earlier studies of the TRAPPIST-1 system, added, "It's easier to characterize terrestrial planets around smaller, cooler stars. If we want to understand habitability around M stars, the TRAPPIST-1 system is a great laboratory. These are the best targets we have for looking at the atmospheres of rocky planets."

Detecting an Atmosphere (or Not)

Previous observations of TRAPPIST-1 b with the Hubble and Spitzer space telescopes found no evidence for a puffy atmosphere, but were not able to rule out a dense one.

One way to reduce the uncertainty is to measure the planet's temperature. "This planet is tidally locked, with one side facing the star at all times and the other in permanent darkness," said Pierre-Olivier Lagage from CEA, a co-author on the paper. "If it has an atmosphere to circulate and redistribute the heat, the dayside will be cooler than if there is no atmosphere."

The team used a technique called secondary eclipse photometry, in which MIRI measured the change in brightness from the system as the planet moved behind the star. Although TRAPPIST-1 b is not hot enough to give off its own visible light, it does have an infrared glow. By subtracting the brightness of the star on its own (during the secondary eclipse) from the brightness of the star and planet combined, they were able to successfully calculate how much infrared light is being given off by the planet.
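A minimal numerical sketch of that subtraction, using made-up brightness values rather than real MIRI data: subtracting the in-eclipse brightness (star alone) from the out-of-eclipse brightness (star plus planet) yields the planet's infrared flux, and dividing by the stellar flux gives the eclipse depth.

```python
# Minimal sketch of secondary-eclipse photometry: the planet's infrared flux is the
# difference between the system's brightness outside eclipse (star + planet) and
# during eclipse (star alone). All numbers below are invented for illustration.
import numpy as np

out_of_eclipse = np.array([1000.85, 1000.92, 1000.78, 1000.88])  # star + planet (arbitrary units)
in_eclipse     = np.array([1000.05,  999.97, 1000.02, 1000.01])  # star only, planet hidden

star_flux   = in_eclipse.mean()
planet_flux = out_of_eclipse.mean() - star_flux
depth       = planet_flux / star_flux

print(f"planet flux  ~ {planet_flux:.2f} (same arbitrary units)")
print(f"eclipse depth ~ {depth * 100:.3f}% of the star's brightness")
# For TRAPPIST-1 b the measured depth is below 0.1%, which is why the detection
# required Webb's mid-infrared sensitivity.
```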

Measuring Minuscule Changes in Brightness

Webb's detection of a secondary eclipse is itself a major milestone. With the star more than 1,000 times brighter than the planet, the change in brightness is less than 0.1%.

"There was also some fear that we'd miss the eclipse. The planets all tug on each other, so the orbits are not perfect," said Taylor Bell, the post-doctoral researcher at the Bay Area Environmental Research Institute who analyzed the data. "But it was just amazing: The time of the eclipse that we saw in the data matched the predicted time within a couple of minutes."

The team analyzed data from five separate secondary eclipse observations. "We compared the results to computer models showing what the temperature should be in different scenarios," explained Ducrot. "The results are almost perfectly consistent with a blackbody made of bare rock and no atmosphere to circulate the heat. We also didn't see any signs of light being absorbed by carbon dioxide, which would be apparent in these measurements."

This research was conducted as part of Webb Guaranteed Time Observation (GTO) program 1177, which is one of eight programs from Webb's first year of science designed to help fully characterize the TRAPPIST-1 system. Additional secondary eclipse observations of TRAPPIST-1 b are currently in progress, and now that they know how good the data can be, the team hopes to eventually capture a full phase curve showing the change in brightness over the entire orbit. This will allow them to see how the temperature changes from the day to the nightside and confirm if the planet has an atmosphere or not.

Read more at Science Daily

Early morning university classes correlate with poor sleep and academic performance

Digital data from university students in Singapore suggest they could be getting better grades if their classes started later. The findings, from tens of thousands of students, were published by Duke-NUS Medical School researchers and colleagues in the journal Nature Human Behaviour.

Research in recent years has shown that postponing the start time of high schools improves the amount of sleep that students get and reduces their sleepiness during school hours. But findings are mixed about whether this has a positive impact on grades.

To determine the impact specifically on university students, Associate Professor Joshua Gooley, from Duke-NUS’ Neuroscience & Behavioural Disorders Programme and colleagues used student Wi-Fi connection data, log-ins to university digital learning platforms, and activity data from special sensing watches to conduct large-scale monitoring of class attendance and sleep behaviour of tens of thousands of university students.

“We implemented new methods that allow large-scale monitoring of class attendance and sleep behaviour by analysing students’ classroom Wi-Fi connection data and their interactions with digital learning platforms,” said Dr Yeo Sing Chen, first author of the study and a Duke-NUS PhD graduate.

From the data, the researchers found that early class start times were associated with lower attendance, with many students regularly sleeping past the start of such classes. When students did attend an early class, they lost about an hour of sleep. Morning classes on more days of the week were also associated with a lower grade point average.

“If the goal of formal education is to position our students to succeed in the classroom and workforce, why are we forcing many university students into the bad decision of either skipping morning class to sleep more or attending class while sleep-deprived?” asked Assoc Prof Gooley. “The take-home message from our study is that universities should reconsider mandatory early morning classes.”

The researchers drew insights using the Wi-Fi connection logs of 23,391 students to find out if early morning classes were associated with lower attendance. They then compared the data with six weeks of watch-derived activity data from a subset of 181 students to determine if the students were sleeping instead of attending early morning classes.

They also analysed activity data with the day and night patterns of digital learning platform logins of 39,458 students to determine if early morning classes were associated with waking up earlier and getting less sleep. Finally, they studied the grades of 33,818 students and the number of morning classes these students were taking to determine if it impacted their grade point average.

Read more at Science Daily

The Greenland Ice Sheet is close to a melting point of no return

The Greenland Ice Sheet covers 1.7 million square kilometers (660,200 square miles) in the Arctic. If it were to melt entirely, global sea level would rise about 7 meters (23 feet), but scientists aren't sure how quickly the ice sheet could melt. Modeling tipping points, which are critical thresholds where a system's behavior irreversibly changes, helps researchers find out when that melt might occur.

Based in part on carbon emissions, a new study using simulations identified two tipping points for the Greenland Ice Sheet: releasing 1000 gigatons of carbon into the atmosphere will cause the southern portion of the ice sheet to melt; about 2500 gigatons of carbon means permanent loss of nearly the entire ice sheet.

Having emitted about 500 gigatons of carbon, we're about halfway to the first tipping point.

"The first tipping point is not far from today's climate conditions, so we're in danger of crossing it," said Dennis Höning, a climate scientist at the Potsdam Institute for Climate Impact Research who led the study. "Once we start sliding, we will fall off this cliff and cannot climb back up."

The study was published in AGU's journal Geophysical Research Letters, which publishes short-format, high-impact research spanning the Earth and space sciences.

The Greenland Ice Sheet is already melting; between 2003 and 2016, it lost about 255 gigatons (billions of tons) of ice each year. Much of the melt to date has been in the southern part of the ice sheet. Air and water temperature, ocean currents, precipitation and other factors all determine how quickly the ice sheet melts and where it loses ice.

The complexity of how those factors influence each other, along with the long timescales scientists need to consider for melting an ice sheet of this size, make it difficult to predict how the ice sheet will respond to different climate and carbon emissions scenarios.

Previous research identified global warming of between 1 and 3 degrees Celsius (1.8 to 5.4 degrees Fahrenheit) as the threshold beyond which the Greenland Ice Sheet will melt irreversibly.

To more comprehensively model how the ice sheet's response to climate could evolve over time, Höning's new study for the first time used a complex model of the whole Earth system, which includes all the key climate feedback processes, paired with a model of ice sheet behavior. They first used simulations with constant temperatures to find equilibrium states of the ice sheet, or points where ice loss equaled ice gain. Then they ran a set of 20,000-year-long simulations with carbon emissions ranging from 0 to 4000 gigatons of carbon.

From among those simulations, the researchers derived the 1000-gigaton carbon tipping point for the melting of the southern portion of the ice sheet and the even more perilous 2,500-gigaton carbon tipping point for the disappearance of nearly the entire ice sheet.

As the ice sheet melts, its surface will sit at ever-lower elevations, exposed to warmer air temperatures. Warmer air temperatures accelerate melt, causing the surface to drop and warm further. Global air temperatures have to remain elevated for hundreds of years or even longer for this feedback loop to become effective; a quick blip of 2 degrees Celsius (3.6 degrees Fahrenheit) wouldn't trigger it, Höning said. But once the ice crosses the threshold, it would inevitably continue to melt. Even if atmospheric carbon dioxide were reduced to pre-industrial levels, it wouldn't be enough to allow the ice sheet to regrow substantially.
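The melt-elevation feedback described above can be captured in a toy threshold model. The sketch below is purely illustrative -- the lapse rate is the standard-atmosphere value, while the snowfall rate, melt sensitivity, and size cap are invented numbers, not values from the study: once the sea-level air temperature crosses a threshold, melt outpaces snowfall at every elevation the shrinking surface can retreat to, and the toy ice sheet collapses rather than settling into a new equilibrium.

```python
# Toy model of the melt-elevation feedback (all parameter values are assumed for
# illustration, not taken from the study): a lower ice surface sits in warmer air,
# melts faster, and lowers further, giving threshold (tipping-point) behavior.
def simulate_ice(sea_level_temp, height=3.0, years=20000, dt=1.0):
    """Final ice surface height (km) for a fixed sea-level air temperature (degrees C)."""
    lapse_rate = 6.5          # degrees C cooler per km of elevation (standard atmosphere)
    accumulation = 0.0005     # km of ice gained per year from snowfall (assumed)
    melt_per_degree = 0.0004  # km of ice lost per year per degree above freezing (assumed)
    max_height = 3.0          # crude cap: ice flow removes any growth beyond today's size
    for _ in range(int(years / dt)):
        surface_temp = sea_level_temp - lapse_rate * height
        melt = max(surface_temp, 0.0) * melt_per_degree
        height = min(max(height + (accumulation - melt) * dt, 0.0), max_height)
    return height

# Below the threshold the sheet holds its size; above it, melt runs away to zero.
for t in (18, 20, 22, 24):
    print(f"sea-level temperature {t} C -> ice height after 20,000 years: {simulate_ice(t):.2f} km")
```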

Read more at Science Daily

Mar 28, 2023

Two meteorites are providing a detailed look into outer space

If you've ever seen a shooting star, you might have actually seen a meteor on its way to Earth. Those that land here are called meteorites and can be used to peek back in time, into the far corners of outer space or at the earliest building blocks of life. Today, scientists report some of the most detailed analyses yet of the organic material of two meteorites. They've identified tens of thousands of molecular "puzzle pieces," including more oxygen atoms than they had expected.

The researchers will present their results at the spring meeting of the American Chemical Society (ACS).

Previously, the team led by Alan Marshall, Ph.D., investigated complex mixtures of organic materials found on Earth, including petroleum. But now, they are turning their attention toward the skies -- or the things that have fallen from them. Their ultra-high resolution mass spectrometry (MS) technique is starting to reveal new information about the universe and could ultimately provide a window into the origin of life itself.

"This analysis gives us an idea of what's out there, what we're going to run into as we move forward as a 'spacefaring' species," says Joseph Frye-Jones, a graduate student who is presenting the work at the meeting. Both Marshall and Frye-Jones are at Florida State University and the National High Magnetic Field Laboratory.

Thousands of meteorites fall to Earth every year, but only a rare few are "carbonaceous chondrites," the category of space rock that contains the most organic, or carbon-containing, material. One of the most famous is the "Murchison" meteorite, which fell in Australia in 1969 and has been studied extensively since. A newer entry is the relatively unexplored "Aguas Zarcas," which fell in Costa Rica in 2019, bursting through back porches and even a doghouse as its pieces fell to the ground. By understanding the organic makeup of these meteorites, researchers can obtain information about where and when the rocks formed, and what they ran into on their journey through space.

To make sense of the complicated jumble of molecules on the meteorites, the scientists turned to MS. This technique blasts a sample apart into tiny particles, then basically reports the mass of each one, represented as a peak. By analyzing the collection of peaks, or the spectrum, scientists can learn what was in the original sample. But in many cases, the resolution of the spectrum is only good enough to confirm the presence of a compound that was already presumed to be there, rather than providing information about unknown components.

This is where Fourier-transform ion cyclotron resonance (FT-ICR) MS comes in, which is also known as "ultra-high resolution" MS. It can analyze incredibly complex mixtures with very high levels of resolution and accuracy. It's especially well suited for analyzing mixtures, like petroleum, or the complex organic material extracted from a meteorite. "With this instrument, we really have the resolution to look at everything in many kinds of samples," says Frye-Jones.

The researchers extracted the organic material from samples of both the Murchison and Aguas Zarcas meteorites, then analyzed it with ultra-high resolution MS. Rather than analyzing only one specific class of molecules at a time, such as amino acids, they chose to look at all soluble organic material at once. This provided the team with more than 30,000 peaks for each meteorite to analyze, and over 60% of them could be given a unique molecular formula. Frye-Jones says these results represent the first analysis of this type on the Aguas Zarcas meteorite, and the highest-resolution analysis on the Murchison one. In fact, this team identified nearly twice as many molecular formulas as previously reported for the older meteorite.
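
The step from a measured peak to a molecular formula can be pictured as a constrained search over element counts. The sketch below is a simplified illustration with made-up element limits and tolerance; real FT-ICR MS workflows also handle ionization, adducts and isotope patterns. It finds CHNOS formulas whose exact monoisotopic mass matches a peak within a tight tolerance.

    import itertools

    # Exact monoisotopic masses of the neutral atoms considered here.
    MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915, "S": 31.972071}

    def assign_formulas(peak_mass, tol_ppm=1.0, limits=(40, 80, 4, 20, 2)):
        """Brute-force CHNOS formulas matching a neutral mass within tol_ppm.

        Element limits and tolerance are illustrative assumptions only.
        """
        c_max, h_max, n_max, o_max, s_max = limits
        hits = []
        for c, n, o, s in itertools.product(range(1, c_max + 1), range(n_max + 1),
                                            range(o_max + 1), range(s_max + 1)):
            skeleton = c * MASS["C"] + n * MASS["N"] + o * MASS["O"] + s * MASS["S"]
            h = round((peak_mass - skeleton) / MASS["H"])
            if 0 <= h <= h_max:
                error_ppm = abs(skeleton + h * MASS["H"] - peak_mass) / peak_mass * 1e6
                if error_ppm <= tol_ppm:
                    hits.append("".join(f"{el}{cnt}" for el, cnt in
                                        zip("CHNOS", (c, h, n, o, s)) if cnt))
        return hits

    print(assign_formulas(180.063388))  # the matches include C6H12O6, a simple sugar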

Once the molecular formulas were determined, the data were sorted into groups based on various characteristics, such as whether they included oxygen or sulfur, or whether they potentially contained a ring structure or double bonds. The researchers were surprised to find a high oxygen content among the compounds. "You don't think of oxygen-containing organics as being a big part of meteorites," explained Marshall.
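
That grouping works directly from the assigned formulas. In the toy sketch below (the formulas are stand-ins, not assignments from the meteorite spectra), each formula is binned by which heteroatoms it contains, and its ring-plus-double-bond count comes from the standard DBE formula.

    import re
    from collections import Counter

    def parse(formula):
        """Element counts for a formula written like 'C6H12O6'."""
        counts = dict(re.findall(r"([A-Z])(\d+)", formula))
        return {el: int(counts.get(el, 0)) for el in "CHNOS"}

    def dbe(f):
        """Rings-plus-double-bond equivalents of a CHNOS formula."""
        return f["C"] - f["H"] / 2 + f["N"] / 2 + 1

    def heteroatom_class(f):
        return "".join(el for el in "CHNOS" if f[el] > 0)

    # Stand-in formulas for illustration only.
    formulas = ["C6H12O6", "C9H11N1O2", "C12H26", "C10H8O1S1"]
    print(Counter(heteroatom_class(parse(x)) for x in formulas))
    print({x: dbe(parse(x)) for x in formulas})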

The researchers will next turn their attention to two far more precious samples: a few grams of lunar dust from the Apollo 12 and 14 missions of 1969 and 1971, respectively. These samples predate Marshall's invention of FT-ICR MS in the early 1970s. Instrumentation has come a long way in the decades since and is now perfectly poised to analyze these powders. The team will soon compare their results from the meteorite analyses to the data they obtain from the lunar samples, hoping to learn more information about where the moon's surface came from. "Was it from meteorites? Solar radiation? We should be able to soon shed some light on that," says Marshall.

Read more at Science Daily

Human cells help researchers understand squid camouflage

Squids and octopuses are masters of camouflage, blending into their environment to evade predators or surprise prey. Some aspects of how these cephalopods become reversibly transparent are still "unclear," largely because researchers can't culture cephalopod skin cells in the lab. Today, however, researchers report that they have replicated the tunable transparency of some squid skin cells in mammalian cells, which can be cultured. The work could not only shed light on basic squid biology, but also lead to better ways to image many cell types.

The researchers will present their results at the spring meeting of the American Chemical Society (ACS).

For many years, Alon Gorodetsky, Ph.D., and his research group have been working on materials inspired by squid. In past work, they developed "invisibility stickers," which consisted of bacterially produced squid reflectin proteins that were adhered onto sticky tape. "So then, we had this crazy idea to see whether we could capture some aspect of the ability of squid skin tissues to change transparency within human cell cultures," says Gorodetsky, who is the principal investigator on the project.

The team at the University of California, Irvine focused their efforts on cephalopod cells called leucophores, which have particulate-like nanostructures composed of reflectin proteins that scatter light. Typically, reflectins clump together and form the nanoparticles, so light isn't absorbed or directly transmitted; instead, the light scatters or bounces off of them, making the leucophores appear bright white.

"We wanted to engineer mammalian cells to stably, instead of temporarily, form reflectin nanostructures for which we could better control the scattering of light," says Gorodetsky. That's because if cells allow light through with little scattering, they'll seem more transparent. Alternatively, by scattering a lot more light, cells will become opaque and more apparent. "Then, at a cellular level, or even the culture level, we thought that we could predictably alter the cells' transparency relative to the surroundings or background," he says.

To change how light interacts with cultured cells, Georgii Bogdanov, a graduate student in Gorodetsky's lab who is presenting the results, introduced squid-derived genes that encode reflectin into human cells, which then used the DNA to produce the protein. "A key advance in our experiments was getting the cells to stably produce reflectin and form light-scattering nanostructures with relatively high refractive indices, which also allowed us to better image the cells in three dimensions," says Bogdanov.

In experiments, the team added salt to the cells' culture media and observed the reflectin proteins clumping together into nanostructures. By systematically increasing the salt concentration, Bogdanov got detailed, time-lapse 3D images of the nanostructures' properties. As the nanoparticles became larger, the amount of light that bounced off the cells increased, consequently tuning their opacity.

Then, the COVID-19 pandemic hit, leaving the researchers to wonder what they could do to advance their investigation without being physically in the lab. So, Bogdanov spent his time at home developing computational models that could predict a cell's expected light scattering and transparency before an experiment was even run. "It's a beautiful loop between theory and experiments, where you feed in design parameters for the reflectin nanostructures, get out specific predicted optical properties and then engineer the cells more efficiently -- for whatever light-scattering properties you might be interested in," explains Gorodetsky.
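
A much-simplified version of that design loop can be written down with textbook scattering formulas. The sketch below is not the group's actual model: it uses the Rayleigh approximation with assumed refractive indices, particle densities and path length, purely to show how predicted transparency falls as the reflectin nanoparticles grow.

    import numpy as np

    def rayleigh_cross_section(d_nm, wavelength_nm, n_particle, n_medium):
        """Rayleigh scattering cross-section (nm^2) of a small sphere of diameter d_nm."""
        m = n_particle / n_medium  # relative refractive index of particle vs. surrounding medium
        return ((2 * np.pi**5 / 3) * d_nm**6 / wavelength_nm**4
                * ((m**2 - 1) / (m**2 + 2))**2)

    def transmission(d_nm, particles_per_um3, path_um,
                     wavelength_nm=550, n_particle=1.59, n_medium=1.37):
        """Beer-Lambert estimate of the unscattered fraction of light through a cell."""
        sigma_um2 = rayleigh_cross_section(d_nm, wavelength_nm, n_particle, n_medium) * 1e-6
        return np.exp(-particles_per_um3 * sigma_um2 * path_um)

    # All numbers below are illustrative assumptions; transmission drops steeply as the
    # nanoparticles grow, because the scattering cross-section scales with diameter^6.
    for d in (20, 40, 60, 80, 100):
        print(d, "nm:", round(float(transmission(d, particles_per_um3=2000, path_um=20)), 3))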

On a basic level, Gorodetsky suggests that these results will help scientists better understand squid skin cells, which haven't been successfully cultured in a laboratory setting. For example, previous researchers postulated that reflectin nanoparticles disassemble and reassemble to change the transparency of tunable squid leucophores. And now Gorodetsky's team has shown that similar rearrangements occurred in their stable engineered mammalian cells with simple changes in salt concentration, a mechanism that appears analogous to what has been observed in the tunable squid cells.

Read more at Science Daily

Earth's first plants likely to have been branched

A new discovery by scientists at the University of Bristol changes ideas about the origin of branching in plants.

By studying the mechanisms responsible for branching, the team have determined what the first land plants are likely to have looked like millions of years ago.

Despite fundamentally different patterns in growth, their research has identified a common mechanism for branching in vascular plants.

Dr Jill Harrison from Bristol's School of Biological Sciences explained: "Diverse shapes abound in the dominant flowering plant group, and gardeners will be familiar with 'pinching out' plants' shoot tips to stimulate side branch growth, leading to a bushier overall form.

"However, unlike flowering plants, other vascular plants branch by splitting the shoot apex into two during growth, a process known as 'dichotomy'.

As an ancient vascular plant lineage that formed coal seams during the Carboniferous period, lycophytes preserve the ancestral pattern of dichotomous branching.

Using surgical experiments in a lycophyte, researchers at the University of Bristol have discovered that dichotomy is regulated by short range auxin transport and co-ordinated in different parts of the plant by long range auxin transport.

Published in Development, the findings that both flowering plant and lycophyte branching are regulated by auxin transport imply that similar mechanisms were present in the earliest vascular plants around 420 million years ago.

By combining these findings with discoveries made in the non-vascular, non-branching moss group, the researchers can infer what the first land plants looked like around 480 million years ago.

Previously, Dr Harrison's lab disrupted auxin transport in a moss, leading it to branch in a manner similar to the earliest branching fossils.

Together these studies imply that the earliest land plants were branched, and that branching was lost during the evolution of non-vascular mosses.

Dr Jill Harrison explained: "The greening of the land by plants paved the way for all terrestrial life to evolve as it provided food for animals and oxygen to breathe, and branching was a key innovation in the radiation of land plants.

"Our work implies that branching evolved earlier than thought, which is an important evolutionary conclusion."

Read more at Science Daily

Colorful films could help buildings, cars keep their cool

The cold blast of an air conditioner can be a welcome relief as temperatures soar, but "A/C" units require large amounts of energy and can leak potent greenhouse gases. Today, scientists report an eco-friendly alternative -- a plant-based film that gets cooler when exposed to sunlight and comes in a variety of textures and bright, iridescent colors. The material could someday keep buildings, cars and other structures cool without requiring external power.

The researchers will present their results at the spring meeting of the American Chemical Society (ACS).

"To make materials that remain cooler than the air around them during the day, you need something that reflects a lot of solar light and doesn't absorb it, which would transform energy from the light into heat," says Silvia Vignolini, Ph.D., the project's principal investigator. "There are only a few materials that have this property, and adding color pigments would typically undo their cooling effects," Vignolini adds.

Passive daytime radiative cooling (PDRC) is the ability of a surface to emit its own heat into space without it being absorbed by the air or atmosphere. The result is a surface that, without using any electrical power, can become several degrees colder than the air around it. When used on buildings or other structures, materials that promote this effect can help limit the use of air conditioning and other power-intensive cooling methods.
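
The underlying energy balance is simple to sketch. In the back-of-the-envelope Python calculation below, every coefficient is an assumption and the wavelength dependence of the atmospheric window is ignored; it only shows that a surface cools passively whenever it radiates more heat than it absorbs from the sky and the sun.

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def net_cooling_power(t_surface_k, t_air_k, emissivity=0.9, sky_emissivity=0.75,
                          solar_absorptance=0.04, solar_irradiance=800.0):
        """Net heat shed by the surface, in watts per square meter (assumed coefficients)."""
        emitted = emissivity * SIGMA * t_surface_k**4
        absorbed_from_sky = emissivity * sky_emissivity * SIGMA * t_air_k**4
        absorbed_from_sun = solar_absorptance * solar_irradiance
        return emitted - absorbed_from_sky - absorbed_from_sun

    # A positive value means the film keeps shedding heat even while sitting below the
    # air temperature; the numbers here are illustrative only.
    print(round(net_cooling_power(t_surface_k=298.0, t_air_k=303.0), 1))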

Some paints and films currently in development can achieve PDRC, but most of them are white or have a mirrored finish, says Qingchen Shen, Ph.D., who is presenting the work at the meeting. Both Vignolini and Shen are at Cambridge University (U.K.). But a building owner who wanted to use a blue-colored PDRC paint would be out of luck -- colored pigments, by definition, absorb specific wavelengths of sunlight and only reflect the colors we see, causing undesirable warming effects in the process.

But there's a way to achieve color without the use of pigments. Soap bubbles, for example, show a prism of different colors on their surfaces. These colors result from the way light interacts with differing thicknesses of the bubble's film, a phenomenon called structural color. Part of Vignolini's research focuses on identifying the causes behind different types of structural colors in nature. In one case, her group found that cellulose nanocrystals (CNCs), which are derived from the cellulose found in plants, could be made into iridescent, colorful films without any added pigment.

As it turns out, cellulose is also one of the few naturally occurring materials that can promote PDRC. Vignolini learned this after hearing a talk from the first researchers to have created a cooling film material. "I thought wow, this is really amazing, and I never really thought cellulose could do this."

In recent work, Shen and Vignolini layered colorful CNC materials with a white-colored material made from ethyl cellulose, producing a colorful bi-layered PDRC film. They made films with vibrant blue, green and red colors that, when placed under sunlight, were an average of nearly 40 degrees Fahrenheit cooler than the surrounding air. A square meter of the film generated over 120 watts of cooling power, rivaling many types of residential air conditioners. The most challenging aspect of this research, Shen says, was finding a way to make the two layers stick together -- on their own, the CNC films were brittle, and the ethyl cellulose layer had to be plasma-treated to get good adhesion. The result, however, was films that were robust and could be prepared several meters at a time in a standard manufacturing line.

Since creating these first films, the researchers have been improving their aesthetic appearance. Using a method modified from approaches previously explored by the group, they're making cellulose-based cooling films that are glittery and colorful. They've also adjusted the ethyl cellulose film to have different textures, like the differences between types of wood finishes used in architecture and interior design, says Shen. These changes would give people more options when incorporating PDRC effects in their homes, businesses, cars and other structures.

The researchers now plan to find ways they can make their films even more functional. According to Shen, CNC materials can be used as sensors to detect environmental pollutants or weather changes, which could be useful if combined with the cooling power of their CNC-ethyl cellulose films. For example, a cobalt-colored PDRC film on a building façade in a car-dense urban area could someday keep the building cool and incorporate detectors that would alert officials to higher levels of smog-causing molecules in the air.

Read more at Science Daily

Mar 27, 2023

Neutrinos made by a particle collider detected

In a scientific first, a team led by physicists at the University of California, Irvine has detected neutrinos created by a particle collider. The discovery promises to deepen scientists' understanding of the subatomic particles, which were first spotted in 1956 and play a key role in the process that makes stars burn.

The work could also shed light on cosmic neutrinos that travel large distances and collide with the Earth, providing a window on distant parts of the universe.

It's the latest result from the Forward Search Experiment, or FASER, a particle detector designed and built by an international group of physicists and installed at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland. There, FASER detects particles produced by CERN's Large Hadron Collider.

"We've discovered neutrinos from a brand-new source - particle colliders - where you have two beams of particles smash together at extremely high energy," said UC Irvine particle physicist and FASER Collaboration Co-Spokesman Jonathan Feng, who initiated the project, which involves over 80 researchers at UCI and 21 partner institutions.

Brian Petersen, a particle physicist at CERN, announced the results Sunday on behalf of FASER at the 57th Rencontres de Moriond Electroweak Interactions and Unified Theories conference in Italy.

Neutrinos, which were co-discovered nearly 70 years ago by the late UCI physicist and Nobel laureate Frederick Reines, are among the most abundant particles in the cosmos and "were very important for establishing the standard model of particle physics," said Jamie Boyd, a particle physicist at CERN and co-spokesman for FASER. "But no neutrino produced at a collider had ever been detected by an experiment."

Since the groundbreaking work of Reines and others like Hank Sobel, UCI professor of physics & astronomy, the majority of neutrinos studied by physicists have been low-energy neutrinos. But the neutrinos detected by FASER are the highest energy ever produced in a lab and are similar to the neutrinos found when deep-space particles trigger dramatic particle showers in our atmosphere.

"They can tell us about deep space in ways we can't learn otherwise," said Boyd. "These very high-energy neutrinos in the LHC are important for understanding really exciting observations in particle astrophysics."

FASER itself is new and unique among particle-detecting experiments. In contrast to other detectors at CERN, such as ATLAS, which stands several stories tall and weighs thousands of tons, FASER is about one ton and fits neatly inside a small side tunnel at CERN. And it took only a few years to design and construct using spare parts from other experiments.

"Neutrinos are the only known particles that the much larger experiments at the Large Hadron Collider are unable to directly detect, so FASER's successful observation means the collider's full physics potential is finally being exploited," said UCI experimental physicist Dave Casper.

Beyond neutrinos, one of FASER's other chief objectives is to help identify the particles that make up dark matter, which physicists think comprises most of the matter in the universe, but which they've never directly observed.

Read more at Science Daily

Copper artifacts unearth new cultural connections in southern Africa

Chemical and isotopic analysis of copper artifacts from southern Africa reveals new cultural connections among people living in the region between the 5th and 20th centuries according to a University of Missouri researcher and colleagues.

People in the area between northern South Africa and the Copperbelt region in central Africa were more connected to one another than scholars previously thought, said Jay Stephens, a post-doctoral fellow in the MU Research Reactor (MURR) Archaeometry Lab.

"Over the past 20 to 30 years, most archaeologists have framed the archaeological record of southern Africa in a global way with a major focus on its connection to imports coming from the Indian Ocean," he said. "But it's also important to recognize the interconnected relationships that existed among the many groups of people living in southern Africa. The data shows the interaction between these groups not only involved the movement of goods, but also flows of information and the sharing of technological practices that come with that exchange."

Mining copper ore

For years, scholars debated whether these artifacts, called rectangular, fishtail and croisette copper ingots, were made exclusively from copper ore mined in the Copperbelt region or from Zimbabwe's Magondi Belt. As it turns out, both theories are correct, Stephens said.

"We now have tangible linkages to reconstruct connectivity at various points in time in the archeological record," he said. "There is a massive history of interconnectivity found throughout the region in areas now known as the countries of Zambia, Zimbabwe and the Democratic Republic of the Congo. This also includes people from the contemporary Ingombe Ilede, Harare, and Musengezi traditions of northern Zimbabwe between at least the 14th and 18th centuries A.D."

For the analysis, the researchers took small samples from 33 copper ingots and studied them at the University of Arizona. All of the samples were carefully selected from archaeological collections held at the Museum of Human Sciences in Harare, Zimbabwe, and the Livingstone Museum in Livingstone, Zambia.

"We didn't want to impact the display of an object, so we tried to be aware of how museums and institutions would want to interact with the data we collected and share it with the general public," Stephens said. "We also want our knowledge to be accessible for the individuals in these communities who continue to interact with these objects. Hopefully, some of the skills linked with these analyses can be used by whomever wants to ask similar questions in the future."

Stephens said copper ingots are excellent objects for this type of analysis because they often have emblematic shapes that allow archaeologists to identify specific markings and follow changes over different time periods.

"By looking at their changes in shape and morphology over time, we can pair those changes with how technology changed over time," he said. "This often comes from observing the decorative features produced from the cast object or mold, or other surface attributes found on these objects."

Gathering scientific evidence

Once the samples arrived at the University of Arizona lab, researchers took a small amount of each sample -- less than one gram -- and dissolved it with specific acids to leave behind a liquid mixture of chemical ions. Then the samples were analyzed for lead isotopes and other chemical elements. One challenge the team encountered was a lack of existing data to match their samples with.

"One part of the project included analyzing hundreds of ore samples from different geological deposits in southern Africa -- especially ones mined before the arrival of European colonial forces -- to create a robust data set," Stephens said. "The data can provide a scientific foundation to help back up the inferences and conclusions we make in the study."

Historical connections

Stephens said the data they collect are among the few remaining tangible links to these precolonial mines in Africa.

"Unfortunately, large open pit mines have destroyed a lot of the archaeological sites and broader cultural landscapes around these geological deposits," he said. "This makes it a challenge to reconstruct the history related to these mines. It's a concerning development, especially with the global push toward more electric vehicles which use minerals like copper and cobalt found in the Copperbelt."

Read more at Science Daily

Drought, heat waves worsen West Coast air pollution inequality

A new study led by North Carolina State University researchers found drought and heat waves could make air pollution worse for communities that already have a high pollution burden in California, and deepen pollution inequalities along racial and ethnic lines.

Published in Nature Communications, the study also found financial penalties for power plants can significantly reduce people's pollution exposure, except during severe heat waves.

"We have known that air pollution disproportionally impacts communities of color, the poor and communities that are already more likely to be impacted by other sources of environmental pollution," said the study's lead author Jordan Kern, assistant professor of forestry and environmental resources at NC State. "What we know now is that drought and heat waves makes things worse."

For the study, researchers estimated emissions of sulfur dioxide, nitrogen oxides and fine particulate matter from power plants in California across 500 different scenarios for what the weather could look like in future years, which they called "synthetic weather years." These years simulated conditions that could occur based on historical wind, air temperature and solar radiation values on the West Coast between 1953 and 2008. Then, using information about the location of power plants in California and how much electricity they would be generating under different weather conditions, the researchers estimated air pollution within individual counties.

They saw the worst air pollution in the hottest, driest years, which Kern said is due to higher demand for air conditioning during hot years. In addition, drought can reduce the availability of hydropower. The resulting shortfall has to be made up somewhere else, which is where fossil fuel plants come in.

"One of the things we were interested in was teasing apart the relative roles of drought, which can be chronic, lasting for months or years, versus heat waves, which can happen like a flash in a pan," Kern said. "We found drought is a driver of chronic pollution exposure, but heat waves are responsible for these incredible spikes in emissions in a short period of time."

They also saw that counties with a higher existing pollution burden were disproportionately impacted by pollution during drought and heat waves. Counties that were more diverse by race and ethnicity were also far more likely to be impacted by increased emissions from power plants during droughts and heat waves.

"The more diverse your county is by race and ethnicity, the more likely you are to be impacted by air pollution on an annual basis," Kern said. "During a drought, the relationship is more pronounced."

When they simulated the impact of three different policies that taxed power generators for emitting air pollution locally, overall, or both, they found that penalties helped reduce pollution health damages in more than 99% of days. However, during extreme heat waves, penalties failed to reduce emissions.

"Penalties make the more damaging power plants more expensive to operate, while it makes clean power plants comparatively less expensive," Kern said. "It incentivizes the system to switch to rely on more clean power plants, but that stops happening during really massive heat waves. The power operators have no choice but to turn on every power plant. They can't switch from the dirty power plants to the clean ones."

Read more at Science Daily

Robotic system offers hidden window into collective bee behavior

Honeybees are famously finicky when it comes to being studied. Research instruments, unfamiliar conditions and even unusual smells can disrupt a colony's behavior. Now, a joint research team from the Mobile Robotic Systems Group in EPFL's School of Engineering and School of Computer and Communication Sciences and the Hiveopolis project at Austria's University of Graz has developed a robotic system that can be unobtrusively built into the frame of a standard honeybee hive.

Composed of an array of thermal sensors and actuators, the system measures and modulates honeybee behavior through localized temperature variations.

"Many rules of bee society -- from collective and individual interactions to raising a healthy brood -- are regulated by temperature, so we leveraged that for this study," explains EPFL PhD student Rafael Barmak, first author on a paper on the system recently published in Science Robotics. "The thermal sensors create a snapshot of the bees' collective behavior, while the actuators allow us to influence their movement by modulating thermal fields."

"Previous studies on the thermal behavior of honeybees in winter have relied on observing the bees or manipulating the outside temperature," adds Martin Stefanec of the University of Graz. "Our robotic system enables us to change the temperature from within the cluster, emulating the heating behavior of core bees there, and allowing us to study how the winter cluster actively regulates its temperature."

A 'biohybrid superorganism' to mitigate colony collapse

Bee colonies are challenging to study in winter since they are sensitive to cold, and opening their hives risks harming them in addition to influencing their behavior. But thanks to the researchers' biocompatible robotic system, they were able to study three experimental hives, located at the Artificial Life Lab at the University of Graz, during winter and to control them remotely from EPFL. Inside the device, a central processor coordinated the sensors, sent commands to the actuators, and transmitted data to the scientists, demonstrating that the system could be used to study bees with no intrusion -- or even cameras -- required.
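
One can imagine the sense-and-steer loop in a few lines of code. The sketch below is a hypothetical illustration, not the published system's control logic: it reads a mock thermal snapshot from a sensor grid, locates the warm cluster, and switches on the actuator best placed to nudge the cluster toward a chosen target.

    import numpy as np

    def locate_cluster(temps):
        """Temperature-weighted centroid (row, column) of the sensor grid."""
        rows, cols = np.indices(temps.shape)
        w = temps - temps.min()
        return np.array([(w * rows).sum(), (w * cols).sum()]) / w.sum()

    def choose_actuator(temps, actuator_positions, target):
        """Index of the actuator best placed to pull the cluster toward `target`."""
        cluster = locate_cluster(temps)
        direction = np.asarray(target, float) - cluster
        direction = direction / np.linalg.norm(direction)
        waypoint = cluster + 2.0 * direction  # aim one small step ahead of the cluster
        distances = [np.linalg.norm(np.asarray(p, float) - waypoint) for p in actuator_positions]
        return int(np.argmin(distances))

    temps = np.full((8, 16), 10.0)           # mock thermal snapshot of the frame
    temps[3:6, 2:5] = 25.0                   # warm winter cluster sitting on the left
    actuators = [(4, c) for c in range(16)]  # a hypothetical row of heating elements
    print(choose_actuator(temps, actuators, target=(4, 14)))  # picks an actuator just right of the cluster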

Mobile Robotic Systems Group head Francesco Mondada explains that one of the most important aspects of the system -- which he calls a 'biohybrid superorganism' for its combination of robotics with a colony of individuals acting as a living entity -- is its ability to simultaneously observe and influence bee behavior.

"By gathering data on the bees' position and creating warmer areas in the hive, we were able to encourage them to move around in ways they would never normally do in nature during the winter, when they tend to huddle together to conserve energy. This gives us the possibility to act on behalf of a colony, for example by directing it toward a food source, or discouraging it from dividing into too-small groups, which can threaten its survival."

The scientists were able to prolong the survival of a colony following the death of its queen by distributing heat energy via the actuators. The system's ability to mitigate colony collapse could have implications for bee survivability, which has become a growing environmental and food security concern as the pollinators' global populations have declined.

Never-before-seen behaviors

In addition to its potential to support colonies, the system has shed light on honeybee behaviors that have never been observed, opening new avenues in biological research.

"The local thermal stimuli produced by our system revealed previously unreported dynamics that are generating exciting new questions and hypotheses," says EPFL postdoctoral researcher and corresponding author Rob Mills. "For example, currently, no model can explain why we were able to encourage the bees to cross some cold temperature 'valleys' within the hive."

The researchers now plan to use the system to study bees in summertime, which is a critical period for raising young. In parallel, the Mobile Robotic Systems Group is exploring systems using vibrational pathways to interact with honeybees.

"The biological acceptance aspect of this work is critical: the fact that the bees accepted the integration of electronics into the hive gives our device great potential for different scientific or agricultural applications," says Mondada.

Read more at Science Daily

Mar 26, 2023

Artificial intelligence discovers secret equation for 'weighing' galaxy clusters

Astrophysicists at the Institute for Advanced Study, the Flatiron Institute and their colleagues have leveraged artificial intelligence to uncover a better way to estimate the mass of colossal clusters of galaxies. The AI discovered that by just adding a simple term to an existing equation, scientists can produce far better mass estimates than they previously had.

The improved estimates will enable scientists to calculate the fundamental properties of the universe more accurately, the astrophysicists reported March 17, 2023, in the Proceedings of the National Academy of Sciences.

"It's such a simple thing; that's the beauty of this," says study co-author Francisco Villaescusa-Navarro, a research scientist at the Flatiron Institute's Center for Computational Astrophysics (CCA) in New York City. "Even though it's so simple, nobody before found this term. People have been working on this for decades, and still they were not able to find this."

The work was led by Digvijay Wadekar of the Institute for Advanced Study in Princeton, New Jersey, along with researchers from the CCA, Princeton University, Cornell University and the Center for Astrophysics | Harvard & Smithsonian.

Understanding the universe requires knowing where and how much stuff there is. Galaxy clusters are the most massive objects in the universe: A single cluster can contain anything from hundreds to thousands of galaxies, along with plasma, hot gas and dark matter. The cluster's gravity holds these components together. Understanding such galaxy clusters is crucial to pinning down the origin and continuing evolution of the universe.

Perhaps the most crucial quantity determining the properties of a galaxy cluster is its total mass. But measuring this quantity is difficult -- galaxies cannot be 'weighed' by placing them on a scale. The problem is further complicated because the dark matter that makes up much of a cluster's mass is invisible. Instead, scientists deduce the mass of a cluster from other observable quantities.

In the early 1970s, Rashid Sunyaev, current distinguished visiting professor at the Institute for Advanced Study's School of Natural Sciences, and his collaborator Yakov B. Zel'dovich developed a new way to estimate galaxy cluster masses. Their method relies on the fact that as gravity squashes matter together, the matter's electrons push back. That electron pressure alters how the electrons interact with particles of light called photons. As photons left over from the Big Bang's afterglow hit the squeezed material, the interaction creates new photons. The properties of those photons depend on how strongly gravity is compressing the material, which in turn depends on the galaxy cluster's heft. By measuring the photons, astrophysicists can estimate the cluster's mass.

However, this 'integrated electron pressure' is not a perfect proxy for mass, because the changes in the photon properties vary depending on the galaxy cluster. Wadekar and his colleagues thought an artificial intelligence tool called 'symbolic regression' might find a better approach. The tool essentially tries out different combinations of mathematical operators -- such as addition and subtraction -- with various variables, to see what equation best matches the data.
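
The idea can be demonstrated with a deliberately tiny version of the technique. The Python sketch below is a random search over small expression trees on synthetic data; the study's actual tool is far more sophisticated, and the variable names and hidden relation here are invented. It only shows how candidate equations are assembled from operators and variables and scored against the data.

    import random
    import numpy as np

    # Operators the search is allowed to combine.
    OPS = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / np.where(np.abs(b) < 1e-9, 1e-9, b),
    }

    def random_expr(variables, depth=0):
        """Build a small random expression tree over the given variable names."""
        if depth >= 2 or random.random() < 0.3:
            return random.choice(variables + [round(random.uniform(-2, 2), 2)])
        op = random.choice(list(OPS))
        return (op, random_expr(variables, depth + 1), random_expr(variables, depth + 1))

    def evaluate(expr, data):
        if isinstance(expr, tuple):
            op, left, right = expr
            return OPS[op](evaluate(left, data), evaluate(right, data))
        if isinstance(expr, str):
            return data[expr]
        return np.full(len(next(iter(data.values()))), expr)  # a numeric constant

    def search(data, target, n_trials=5000):
        """Keep the randomly generated expression with the lowest mean squared error."""
        best, best_err = None, np.inf
        names = list(data)
        for _ in range(n_trials):
            expr = random_expr(names)
            err = float(np.mean((evaluate(expr, data) - target) ** 2))
            if err < best_err:
                best, best_err = expr, err
        return best, best_err

    # Synthetic stand-ins for cluster observables, not real survey or simulation data.
    rng = np.random.default_rng(0)
    data = {"pressure": rng.uniform(1, 10, 500), "gas_conc": rng.uniform(0.1, 1, 500)}
    hidden_mass = data["pressure"] * (1 - 0.5 * data["gas_conc"])  # relation the search tries to approximate
    print(search(data, hidden_mass))  # prints the best expression tree found and its error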

Wadekar and his collaborators 'fed' their AI program a state-of-the-art universe simulation containing many galaxy clusters. Next, their program, written by CCA research fellow Miles Cranmer, searched for and identified additional variables that might make the mass estimates more accurate.

AI is useful for identifying new parameter combinations that human analysts might overlook. For example, while it is easy for human analysts to identify two significant parameters in a dataset, AI can better parse through high volumes, often revealing unexpected influencing factors.

"Right now, a lot of the machine-learning community focuses on deep neural networks," Wadekar explained. "These are very powerful, but the drawback is that they are almost like a black box. We cannot understand what goes on in them. In physics, if something is giving good results, we want to know why it is doing so. Symbolic regression is beneficial because it searches a given dataset and generates simple mathematical expressions in the form of simple equations that you can understand. It provides an easily interpretable model."

The researchers' symbolic regression program handed them a new equation, which was able to better predict the mass of the galaxy cluster by adding a single new term to the existing equation. Wadekar and his collaborators then worked backward from this AI-generated equation and found a physical explanation. They realized that gas concentration correlates with the regions of galaxy clusters where mass inferences are less reliable, such as the cores of galaxies where supermassive black holes lurk. Their new equation improved mass inferences by downplaying the importance of those complex cores in the calculations. In a sense, the galaxy cluster is like a spherical doughnut. The new equation extracts the jelly at the center of the doughnut that can introduce larger errors, and instead concentrates on the doughy outskirts for more reliable mass inferences.

The researchers tested the AI-discovered equation on thousands of simulated universes from the CCA's CAMELS suite. They found that the equation reduced the variability in galaxy cluster mass estimates by around 20 to 30 percent for large clusters compared with the currently used equation.

The new equation can provide observational astronomers engaged in upcoming galaxy cluster surveys with better insights into the mass of the objects they observe. "There are quite a few surveys targeting galaxy clusters [that] are planned in the near future," Wadekar noted. "Examples include the Simons Observatory, the Stage 4 CMB experiment and an X-ray survey called eROSITA. The new equations can help us in maximizing the scientific return from these surveys."

Read more at Science Daily

Being fit partially offsets negative impact of high blood pressure

High fitness levels may reduce the risk of death from cardiovascular disease in men with high blood pressure, according to a 29-year study published today in the European Journal of Preventive Cardiology, a journal of the ESC.

"This was the first study to evaluate the joint effects of fitness and blood pressure on the risk of dying from cardiovascular disease," said study author Professor Jari Laukkanen of the University of Eastern Finland, Kuopio, Finland. "The results suggest that being fit helps protect against some of the negative effects of high blood pressure."

Nearly 1.3 billion adults aged 30 to 79 years worldwide have high blood pressure (hypertension). Hypertension is a major risk factor for heart attack and stroke and a leading cause of premature death globally. Previous studies have shown that high cardiorespiratory fitness is linked with greater longevity. This study examined the interplay between blood pressure, fitness and risk of death from cardiovascular disease.

The study included 2,280 men aged 42 to 61 years living in eastern Finland and enrolled in the Kuopio Ischaemic Heart Disease Risk Factor Study. Baseline measurements were conducted between 1984 and 1989. These included blood pressure and cardiorespiratory fitness, which was assessed as maximal oxygen uptake while riding a stationary bicycle. Blood pressure was classified as normal or high, and fitness was classified as low, medium or high.

The average age at baseline was 53 years. Participants were followed up until 2018. During a median follow up of 29 years, there were 644 deaths due to cardiovascular disease. The risk of death from cardiovascular disease was analysed after adjusting for age, body mass index, cholesterol levels, smoking status, type 2 diabetes, coronary heart disease, use of antihypertensive medication, alcohol consumption, physical activity, socioeconomic status, and high sensitivity C-reactive protein (a marker of inflammation).

Considering blood pressure alone, compared to normal values, high blood pressure was associated with a 39% increased risk of cardiovascular mortality (hazard ratio [HR] 1.39; 95% confidence interval [CI] 1.17-1.63). Considering fitness alone, compared with high levels, low fitness was associated with a 74% elevated likelihood of cardiovascular death (HR 1.74; 95% CI 1.35-2.23).

To evaluate the joint associations of blood pressure and fitness with risk of cardiovascular death, participants were categorised into four groups: 1) normal blood pressure and high fitness (this was the reference group for comparison); 2) normal blood pressure and low fitness; 3) high blood pressure and high fitness; 4) high blood pressure and low fitness.
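
For readers curious how such group-wise hazard ratios are typically obtained, the sketch below fits a Cox proportional-hazards model to synthetic survival data with hypothetical dummy-coded groups. It is only an illustration of the general method, not the study's adjusted analysis, and none of the numbers are the cohort's data.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Synthetic cohort with hazards roughly shaped like the four groups described above.
    rng = np.random.default_rng(1)
    n = 2280
    group = rng.integers(0, 4, n)  # 0 = normal BP & high fitness (reference) ... 3 = high BP & low fitness
    relative_hazard = np.array([1.0, 1.3, 1.55, 2.35])[group]
    time_to_event = rng.exponential(60.0 / relative_hazard)  # riskier groups fail sooner, on average

    df = pd.DataFrame({
        "time": np.minimum(time_to_event, 29.0),        # administrative censoring at 29 years
        "event": (time_to_event < 29.0).astype(int),
        "normal_bp_low_fit": (group == 1).astype(int),   # hypothetical dummy coding
        "high_bp_high_fit": (group == 2).astype(int),
        "high_bp_low_fit": (group == 3).astype(int),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()  # the exp(coef) column gives hazard ratios versus the reference group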

Men with high blood pressure and low fitness had a more than doubled risk of cardiovascular death compared to those with normal blood pressure and high fitness (HR 2.35; 95% CI 1.81-3.04). When men with high blood pressure had high fitness levels, their elevated risk of cardiovascular death persisted but was weaker: it was 55% higher than in those with normal blood pressure and high fitness (HR 1.55; 95% CI 1.16-2.07).

Professor Laukkanen said: "Both high blood pressure and low fitness levels were each associated with an increased risk of cardiovascular death. High fitness levels attenuated, but did not eliminate, the increased risk of cardiovascular mortality in men with elevated blood pressure."

The paper states: "The inability of cardiorespiratory fitness to completely eliminate the risk of cardiovascular mortality in those with high blood pressure could partly be due to the strong, independent and causal relationship between blood pressure and cardiovascular disease."

Professor Laukkanen concluded: "Getting blood pressure under control should remain a goal in those with elevated levels. Our study indicates that men with high blood pressure should also aim to improve their fitness levels with regular physical activity. In addition to habitual exercise, avoiding excess body weight may enhance fitness."

Read more at Science Daily