Jan 14, 2023

How do rocky planets really form?

A new theory for how rocky planets form could explain the origin of so-called "super-Earths" -- a class of exoplanets a few times more massive than the Earth that are the most abundant type of planet in the galaxy.

Further, it could explain why super-Earths within a single planetary system often wind up looking strangely similar in size, as though each system were only capable of producing a single kind of planet.

"As our observations of exoplanets have grown over the past decade, it has become clear that the standard theory of planet formation needs to be revised, starting with the fundamentals. We need a theory that can simultaneously explain the formation of the terrestrial planets in our solar system as well as the origins of self-similar systems of super-Earths, many of which appear rocky in composition," says Caltech professor of planetary science Konstantin Batygin (MS '10, PhD '12), who collaborated with Alessandro Morbidelli of the Observatoire de la Côte d'Azur in France on the new theory. A paper explaining their work was published by Nature Astronomy on Jan. 12.

Planetary systems begin their lifecycles as large spinning disks of gas and dust that consolidate over the course of a few million years or so. Most of the gas accretes into the star at the center of the system, while solid material slowly coalesces into asteroids, comets, planets, and moons.

In our solar system, there are two distinct types of planets: the smaller rocky planets closest to the sun and the larger, water- and hydrogen-rich planets farther from the sun. In an earlier study published in Nature Astronomy at the end of 2021, this dichotomy led Morbidelli, Batygin, and colleagues to suggest that planet formation in our solar system occurred in two distinct rings in the protoplanetary disk: an inner one where the small rocky planets formed and an outer one for the more massive icy planets (two of which -- Jupiter and Saturn -- later grew into gas giants).

Super-Earths, as the name suggests, are more massive than the Earth. Some even have hydrogen atmospheres, which makes them appear almost gas giant-like. Moreover, they are often found orbiting close to their stars, suggesting that they migrated to their current location from more distant orbits.

"A few years ago we built a model where super-Earths formed in the icy part of the protoplanetary disk and migrated all the way to the inner edge of the disk, near the star," says Morbidelli. "The model could explain the masses and orbits of super-Earths but predicted that all are water-rich. Recent observations, however, have demonstrated that most super-Earths are rocky, like the Earth, even if surrounded by a hydrogen atmosphere. That was the death sentence for our old model."

Over the past five years, the story has gotten even weirder as scientists -- including a team led by Andrew Howard, professor of astronomy at Caltech; Lauren Weiss, assistant professor at the University of Notre Dame; and Erik Petigura, formerly a Sagan Postdoctoral Scholar in Astronomy at Caltech and now a professor at UCLA -- have studied these exoplanets and made an unusual discovery: while there exists a wide variety of types of super-Earths, all of the super-Earths within a single planetary system tend to be similar in terms of orbital spacing, size, mass, and other key features.

"Lauren discovered that, within a single planetary system, super-Earths are like 'peas in a pod,'" says Howard, who was not directly connected with the Batygin-Morbidelli paper but has reviewed it. "You basically have a planet factory that only knows how to make planets of one mass, and it just squirts them out one after the other."

So, what single process could have given rise to the rocky planets in our solar system but also to uniform systems of rocky super-Earths?

"The answer turns out to be related to something we figured out in 2020 but didn't realize applied to planetary formation more broadly," Batygin says.

In 2020, Batygin and Morbidelli proposed a new theory for the formation of Jupiter's four largest moons (Io, Europa, Ganymede, and Callisto). In essence, they demonstrated that, for a specific size range of dust grains, the force dragging the grains toward Jupiter and the force (or entrainment) carrying those grains in an outward flow of gas cancel each other perfectly. That balance in forces created a ring of material that constituted the solid building blocks for the subsequent formation of the moons. Further, the theory suggests that bodies would grow in the ring until they become large enough to exit the ring due to gas-driven migration. After that, they stop growing, which explains why the process produces bodies of similar sizes.

In their new paper, Batygin and Morbidelli suggest that the mechanism for forming planets around stars is largely the same. In the planetary case, the large-scale concentration of solid rocky material occurs at a narrow band in the disk called the silicate sublimation line -- a region where silicate vapors condense to form solid, rocky pebbles. "If you're a dust grain, you feel considerable headwind in the disk because the gas is orbiting a bit more slowly, and you spiral toward the star; but if you're in vapor form, you simply spiral outward, together with the gas in the expanding disk. So that place where you turn from vapor into solids is where material accumulates," Batygin says.

The new theory identifies this band as the likely site for a "planet factory" that, over time, can produce several similarly sized rocky planets. Moreover, as planets grow sufficiently massive, their interactions with the disk will tend to draw these worlds inward, closer to the star.

Batygin and Morbidelli's theory is backed up by extensive computer modeling but began with a simple question. "We looked at the existing model of planet formation, knowing that it does not reproduce what we see, and asked, 'What assertion are we taking for granted?'" Batygin says. "The trick is to look at something that everybody takes to be true but for no good reason."

In this case, the assumption was that solid material is dispersed throughout the protoplanetary disks. By jettisoning that assumption and instead supposing that the first solid bodies form in rings, the new theory can explain different types of planetary systems with a unified framework, Batygin says.

If the rocky ring contains a lot of mass, planets grow until they migrate away from the ring, resulting in a system of similar super-Earths. If the ring contains little mass, it produces a system that looks much more like our solar system's terrestrial planets.

Read more at Science Daily

Why chocolate feels so good -- it is all down to lubrication

Scientists have decoded the physical process that takes place in the mouth when a piece of chocolate is eaten, as it changes from a solid into a smooth emulsion that many people find totally irresistible.

By analysing each step in the process, the interdisciplinary research team at the University of Leeds hopes the findings will lead to the development of a new generation of luxury chocolates that have the same feel and texture but are healthier to consume.

During the moments it is in the mouth, the sensation of chocolate arises from the way it is lubricated, either by ingredients in the chocolate itself, by saliva, or by a combination of the two.

Fat plays a key role almost immediately when a piece of chocolate is in contact with the tongue. After that, solid cocoa particles are released and become important to the tactile sensation, so fat deeper inside the chocolate plays a rather limited role and could be reduced without affecting the feel or sensation of the chocolate.

Anwesha Sarkar, Professor of Colloids and Surfaces in the School of Food Science and Nutrition at Leeds, said: "Lubrication science gives mechanistic insights into how food actually feels in the mouth. You can use that knowledge to design food with better taste, texture or health benefits.

"If a chocolate has 5% fat or 50% fat it will still form droplets in the mouth and that gives you the chocolate sensation. However, it is the location of the fat in the make-up of the chocolate which matters in each stage of lubrication, and that has been rarely researched.

"We are showing that the fat layer needs to be on the outer layer of the chocolate, this matters the most, followed by effective coating of the cocoa particles by fat, these help to make chocolate feel so good."

The study -- published in the scientific journal ACS Applied Materials & Interfaces -- did not investigate the question of how chocolate tastes. Instead, the investigation focused on its feel and texture.

Tests were conducted using a luxury brand of dark chocolate on an artificial 3D tongue-like surface that was designed at the University of Leeds. The researchers used analytical techniques from a field of engineering called tribology to conduct the study, which included in situ imaging.

Tribology is about how surfaces and fluids interact, the levels of friction between them and the role of lubrication: in this case, saliva or liquids from the chocolate. Those mechanisms are all happening in the mouth when chocolate is eaten.

When chocolate is in contact with the tongue, it releases a fatty film that coats the tongue and other surfaces in the mouth. It is this fatty film that makes the chocolate feel smooth throughout the entire time it is in the mouth.

Dr Siavash Soltanahmadi, from the School of Food Science and Nutrition at Leeds and the lead researcher in the study, said: "With the understanding of the physical mechanisms that happen as people eat chocolate, we believe that a next generation of chocolate can be developed that offers the feel and sensation of high-fat chocolate yet is a healthier choice.

"Our research opens the possibility that manufacturers can intelligently design dark chocolate to reduce the overall fat content.

"We believe dark chocolate can be produced in a gradient-layered architecture with fat covering the surface of chocolates and particles to offer the sought after self-indulging experience without adding too much fat inside the body of the chocolate."

Revenue from chocolate sales in the UK is forecast to grow over the next five years, according to research from the business intelligence agency MINTEL. Sales are expected to grow 13% between 2022 and 2027 to reach £6.6 billion.

The researchers believe the physical techniques used in the study could be applied to the investigation of other foodstuffs that undergo a phase change, where a substance is transformed from a solid to a liquid, such as ice-cream, margarine or cheese.

Read more at Science Daily

Jan 13, 2023

Researchers measure size-luminosity relation of galaxies less than a billion years after Big Bang

An international team of researchers, including members of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), has studied the relation between galaxy size and luminosity in some of the earliest galaxies in the universe, observed by the brand-new James Webb Space Telescope (JWST) less than a billion years after the Big Bang, according to a new study in The Astrophysical Journal Letters.

The result is part of the Grism Lens-Amplified Survey from Space (GLASS) Early-Release Science Program, led by University of California, Los Angeles, Professor Tommaso Treu. It is aimed at studying the early universe, when the first stars and galaxies ignited, ionizing the neutral gas in the universe at the time and allowing light to shine through. This is called the epoch of reionization.

However, details of reionization have remained unknown because telescopes until today have not been capable of observing galaxies in this period of the universe's history in detail. Finding out more about the epoch of reionization would help researchers understand how stars and galaxies have evolved to create today's universe as we see it.

One study, led by Kavli IPMU JSPS Fellow Lilan Yang and including Project Researcher Xuheng Ding, used multiband NIRCam imaging data from the GLASS-JWST program to measure galaxy size and luminosity, in order to determine the morphology and the size-luminosity relation from rest-frame optical to UV.

"It's the first time that we can study the galaxy's properties in rest-frame optical at redshift larger than 7 with JWST, and the size-luminosity is important for determining the shape of luminosity function which indicates the primary sources responsible for the cosmic reionization, i.e., numerous faint galaxies or relatively less bright galaxies.

"The original wavelength of light will shift to longer wavelength when it travels from the early universe to us. Thus, the rest-frame wavelength is used to clarify their intrinsic wavelength, rather than observed wavelength.

"Previously, with Hubble Space Telescope, we know the properties of galaxies only in rest-frame UV band. Now, with JWST, we can measure longer wavelength than UV," said first author Yang.

The researchers found the first rest-frame optical size-luminosity relation of galaxies at redshift larger than 7, or roughly 800 million years after the Big Bang, allowing them to study size as a function of wavelength. They found the median size at the reference luminosity is roughly 450-600 parsecs, decreasing slightly from rest-frame optical to UV. But was this expected?

"The answer is we don't know what's to expect. Previous simulation studies give a range of predictions," said Yang.

The team also found the slope of the size-luminosity relationship was somewhat steeper in the shortest wavelength band when allowing the slope to vary.

"That would suggest higher surface brightness density at shorter wavelength, hence less observational incompleteness correction when estimating luminosity function, but the result is not conclusive. We don't want to over-interpret here," said Yang.

Read more at Science Daily

New studies suggest social isolation is a risk factor for dementia in older adults, point to ways to reduce risk

In two studies using nationally representative data from the National Health and Aging Trends Study gathered on thousands of Americans, researchers from the Johns Hopkins University School of Medicine and Bloomberg School of Public Health have significantly added to evidence that social isolation is a substantial risk factor for dementia in community-dwelling (noninstitutionalized) older adults, and identified technology as an effective way to intervene.

Collectively, the studies do not establish a direct cause and effect between dementia and social isolation, defined as lack of social contact and interactions with people on a regular basis. But, the researchers say, the studies strengthen observations that such isolation increases the risk of dementia, and suggest that relatively simple efforts to increase social support of older adults -- such as texting and use of email -- may reduce that risk. In the United States, an estimated 1 in 4 people over age 65 experience social isolation, according to the National Institute on Aging.

"Social connections matter for our cognitive health, and it is potentially easily modifiable for older adults without the use of medication," says Thomas Cudjoe, M.D., M.P.H., assistant professor of medicine at the Johns Hopkins University School of Medicine and senior author of both of the new studies.

The first study, described Jan. 11 in the Journal of the American Geriatrics Society, used data collected on a group of 5,022 Medicare beneficiaries for a long-term study known as the National Health and Aging Trends Study, which began in 2011. All participants were 65 or older and were asked to complete an annual two-hour, in-person interview to assess cognitive function, health status and overall well-being.

At the initial interview, 23% of the 5,022 participants were socially isolated and showed no signs of dementia. However, by the end of this nine-year study, 21% of the total sample of participants had developed dementia. The researchers concluded that risk of developing dementia over nine years was 27% higher among socially isolated older adults compared with older adults who were not socially isolated.

"Socially isolated older adults have smaller social networks, live alone and have limited participation in social activities," says Alison Huang, Ph.D., M.P.H., senior research associate at the Johns Hopkins Bloomberg School of Public Health. "One possible explanation is that having fewer opportunities to socialize with others decreases cognitive engagement as well, potentially contributing to increased risk of dementia."

Interventions to reduce that risk are possible, according to results of the second study, published Dec. 15 in the Journal of the American Geriatrics Society. Specifically, researchers found the use of communications technology such as telephone and email lowered the risk for social isolation.

Researchers for the second study used data from participants in the same National Health and Aging Trends study, and found that more than 70% of people age 65 and up who were not socially isolated at their initial appointment had a working cellphone and/or computer, and regularly used email or texting to initiate and respond to others. Over the four-year research period for this second study, older adults who had access to such technology consistently showed a 31% lower risk for social isolation than the rest of the cohort.

"Basic communications technology is a great tool to combat social isolation," says Mfon Umoh, M.D., Ph.D., postdoctoral fellow in geriatric medicine at the Johns Hopkins University School of Medicine. "This study shows that access and use of simple technologies are important factors that protect older adults against social isolation, which is associated with significant health risks. This is encouraging because it means simple interventions may be meaningful."

Social isolation has gained significant attention in the past decade, especially due to restrictions implemented for the COVID-19 pandemic, but more work needs to be done to identify at-risk populations and create tools for providers and caregivers to minimize risk, the researchers say. Future research in this area should focus on increased risks based on biological sex, physical limitations, race and income level.

Read more at Science Daily

How crocs can go hours without air: Crocodilian hemoglobin

It can pogo-stick along at 50-plus miles per hour, leaping 30-odd feet in a single bound. But that platinum-medal athleticism falls by the wayside at a sub-Saharan riverside, the source of life and death for the skittish impala stilling itself for a drink in 100-degree heat. A Nile crocodile has silently baptized itself in that same muddy river for the past hour. When the unseen apex predator lashes from the water to seize the impala, its infamous teeth latch onto a hindquarter, jaws clenching with 5,000 pounds of force. Yet it's the water itself that does the killing, with the deep-breathed reptile dragging its prey to the deep end to drown.

The success of the croc's ambush lies in the nanoscopic scuba tanks -- hemoglobins -- that course through its bloodstream, unloading oxygen from lungs to tissues at a slow but steady clip that allows it to go hours without air. The hyper-efficiency of that specialized hemoglobin has led some biologists to wonder why, of all the jawed vertebrates in all the world, crocodilians were the lone group to hit on such an optimal solution to making the most of a breath.

By statistically reconstructing and experimentally resurrecting the hemoglobin of an archosaur, the 240-million-year-old ancestor of all crocodilians and birds, the University of Nebraska-Lincoln's Jay Storz and colleagues have gleaned new insights into that why. Rather than requiring just a few key mutations, as earlier research suggested, the unique properties of crocodilian hemoglobin stemmed from 21 interconnected mutations that litter the intricate component of red blood cells.

That complexity, and the multiple knock-on effects that any one mutation can induce in hemoglobin, may have forged an evolutionary path so labyrinthine that nature failed to retrace it even over tens of millions of years, the researchers said.

"If it was such an easy trick -- if it was that easy to do, just making a few changes -- everyone would be doing it," said Storz, a senior author of the study and Willa Cather Professor of biological sciences at Nebraska.

All hemoglobin binds with oxygen in the lungs before swimming the bloodstream and eventually releasing that oxygen to the tissues that depend on it. In most vertebrates, hemoglobin's affinity for capturing and holding oxygen is dictated largely by molecules known as organic phosphates, which, by attaching themselves to the hemoglobin, can coax it into releasing its precious cargo.

But in crocodilians -- crocodiles, alligators and their kin -- the role of organic phosphates was supplanted by a molecule, bicarbonate, that is produced from the breakdown of carbon dioxide. Because hardworking tissues produce lots of carbon dioxide, they also indirectly generate lots of bicarbonate, which in turn encourages hemoglobin to dispense its oxygen to the tissues most in need of it.

"It's a super-efficient system that provides a kind of slow-release mechanism that allows crocodilians to efficiently exploit their onboard oxygen stores," Storz said. "It's part of the reason they're able to stay underwater for so long."

As postdoctoral researchers in Storz's lab, Chandrasekhar Natarajan, Tony Signore and Naim Bautista had already helped decipher the workings of the crocodilian hemoglobin. Alongside colleagues from Denmark, Canada, the United States and Japan, Storz's team decided to embark on a multidisciplinary study of how the oxygen-ferrying marvel came to be.

Prior efforts to understand its evolution involved incorporating known mutations into human hemoglobin and looking for any functional changes, which were usually scant. Recent findings from his own lab had convinced Storz that the approach was flawed. There were plenty of differences, after all, between human hemoglobin and that of the ancient reptilian creatures from which modern-day crocodilians evolved.

"What's important is to understand the effects of mutations on the genetic background in which they actually evolved, which means making vertical comparisons between ancestral and descendant proteins, rather than horizontal comparisons between proteins of contemporary species," Storz said. "By using that approach, you can figure out what actually happened."

So, with the help of biochemical principles and statistics, the team set out to reconstruct hemoglobin blueprints from three sources: the 240-million-year-old archosaur ancestor; the last common ancestor of all birds; and the 80-million-year-old shared ancestor of contemporary crocodilians. After putting all three of the resurrected hemoglobins through their paces in the lab, the team confirmed that only the hemoglobin of the direct crocodilian ancestor lacked phosphate binding and boasted bicarbonate sensitivity.

Comparing the hemoglobin blueprints of the archosaur and crocodilian ancestors also helped identify changes in amino acids -- essentially the joints of the hemoglobin skeleton -- that may have proved important. To test those mutations, Storz and his colleagues began introducing certain croc-specific mutations into the ancestral archosaur hemoglobin. By identifying the mutations that made archosaur hemoglobin behave more like that of a modern-day crocodilian, the team pieced together the changes responsible for those unique, croc-specific properties.

Counter to conventional wisdom, Storz and his colleagues discovered that evolved changes in hemoglobin's responsiveness to bicarbonate and phosphates were driven by different sets of mutations, so that the gain of one mechanism was not dependent on the loss of the other. Their comparison also revealed that, though a few mutations were enough to subtract the phosphate-binding sites, multiple others were needed to eliminate phosphate sensitivity altogether. In much the same way, two mutations seemed to directly drive the emergence of bicarbonate sensitivity -- but only when combined with or preceded by other, easy-to-miss mutations in remote regions of the hemoglobin.

Storz said the findings speak to the fact that a combination of mutations might yield functional changes that transcend the sum of their individual effects. A mutation that produces no functional effect on its own might, in any number of ways, open a path to other mutations with clear, direct consequences. In the same vein, he said, those later mutations might influence little without the proper stage-setting predecessors already in place. And all of those factors can be supercharged or waylaid by the environment in which they unfold.

"When you have these complex interactions, it suggests that certain evolutionary solutions are only accessible from certain ancestral starting points," Storz said. "With the ancestral archosaur hemoglobin, you have a genetic background that makes it possible to evolve the unique properties that we see in hemoglobins of modern-day crocodilians. By contrast, with the ancestor of mammals as a starting point, it may be that there's some way that you could evolve the same property, but it would have to be through a completely different molecular mechanism, because you're working within a completely different structural context."

For better or worse, Storz said, the study also helps explain the difficulty of engineering a human hemoglobin that can mimic and approach the performance of the crocodilian.

Read more at Science Daily

Computers that power self-driving cars could be a huge driver of global carbon emissions

In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario -- where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate -- they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

"If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn't seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start," says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears in the January-February issue of IEEE Micro.

Modeling emissions


The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don't require a back-up human driver.

The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.

"On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet," Sudhakar says.

For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn't exist yet.

To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms' workload added up.

For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook's data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
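
The arithmetic behind those figures is straightforward to reproduce. The short sketch below does so under one added assumption that is not stated above: each of the 10 networks processes frames from all 10 cameras at 60 frames per second.

```python
# Reproducing the inference counts quoted above.
# Assumption (not stated in the article): each of the 10 networks processes
# frames from all 10 cameras at 60 frames per second.

NUM_NETWORKS = 10              # deep neural networks per vehicle
NUM_CAMERAS = 10               # cameras on the vehicle
FRAMES_PER_SECOND = 60         # assumed camera frame rate
DRIVE_SECONDS_PER_DAY = 3600   # one hour of driving per day
FLEET_SIZE = 1_000_000_000     # vehicles in the global fleet

per_vehicle_per_day = NUM_NETWORKS * NUM_CAMERAS * FRAMES_PER_SECOND * DRIVE_SECONDS_PER_DAY
fleet_per_day = per_vehicle_per_day * FLEET_SIZE

print(f"Per vehicle: {per_vehicle_per_day:,} inferences/day")   # 21,600,000
print(f"Whole fleet: {fleet_per_day:.2e} inferences/day")       # 2.16e+16, i.e. 21.6 quadrillion
```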

"After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people's radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time," Karaman says.

Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing -- it doesn't take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

Keeping emissions in check

To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of energy for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to "future-proof" it so it can run new algorithms.

In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles -- the carbon emissions generated when a car is manufactured -- and emissions from a vehicle's sensors.

While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

Read more at Science Daily

Jan 12, 2023

NASA's Webb uncovers star formation in cluster's dusty ribbons

NGC 346, one of the most dynamic star-forming regions in nearby galaxies, is full of mystery. Now, it is less mysterious with new findings from NASA's James Webb Space Telescope.

NGC 346 is located in the Small Magellanic Cloud (SMC), a dwarf galaxy close to our Milky Way. The SMC contains lower concentrations of elements heavier than hydrogen or helium, which astronomers call metals, compared to the Milky Way. Since dust grains in space are composed mostly of metals, scientists expected there would be low amounts of dust, and that it would be hard to detect. New data from Webb reveals the opposite.

Astronomers probed this region because the conditions and amount of metals within the SMC resemble those seen in galaxies billions of years ago, during an era in the universe known as "cosmic noon," when star formation was at its peak. Some 2 to 3 billion years after the big bang, galaxies were forming stars at a furious rate. The fireworks of star formation happening then still shape the galaxies we see around us today.

"A galaxy during cosmic noon wouldn't have one NGC 346 like the Small Magellanic Cloud does; it would have thousands" of star-forming regions like this one, said Margaret Meixner, an astronomer at the Universities Space Research Association and principal investigator of the research team. "But even if NGC 346 is now the one and only massive cluster furiously forming stars in its galaxy, it offers us a great opportunity to probe conditions that were in place at cosmic noon."

By observing protostars still in the process of forming, researchers can learn if the star formation process in the SMC is different from what we observe in our own Milky Way. Previous infrared studies of NGC 346 have focused on protostars heavier than about 5 to 8 times the mass of our Sun. "With Webb, we can probe down to lighter-weight protostars, as small as one tenth of our Sun, to see if their formation process is affected by the lower metal content," said Olivia Jones of the United Kingdom Astronomy Technology Centre, Royal Observatory Edinburgh, a co-investigator on the program.

As stars form, they gather gas and dust, which can look like ribbons in Webb imagery, from the surrounding molecular cloud. The material collects into an accretion disk that feeds the central protostar. Astronomers have detected gas around protostars within NGC 346, but Webb's near-infrared observations mark the first time they have also detected dust in these disks.

"We're seeing the building blocks, not only of stars, but also potentially of planets," said Guido De Marchi of the European Space Agency, a co-investigator on the research team. "And since the Small Magellanic Cloud has a similar environment to galaxies during cosmic noon, it's possible that rocky planets could have formed earlier in the universe than we might have thought."

The team also has spectroscopic observations from Webb's NIRSpec instrument that they are continuing to analyze. These data are expected to provide new insights into the material accreting onto individual protostars, as well as the environment immediately surrounding the protostar.

Read more at Science Daily

Study offers most detailed glimpse yet of planet's last 11,000 summers and winters

By analyzing Antarctic ice cores, CU Boulder scientists and an international team of collaborators have revealed the most detailed look yet at the planet's recent climatic history, including summer and winter temperatures dating back 11,000 years to the beginning of what is known as the Holocene.

Published today in Nature, the study is the very first seasonal temperature record of its kind, from anywhere in the world.

"The goal of the research team was to push the boundaries of what is possible with past climate interpretations, and for us that meant trying to understand climate at the shortest timescales, in this case seasonally, from summer to winter, year-by-year, for many thousands of years," said Tyler Jones, lead author on the study, and assistant research professor and fellow at the Institute of Arctic and Alpine Research (INSTAAR).

The study also validates one aspect of a long-standing theory about Earth's climate that has not been previously proven: how seasonal temperatures in polar regions respond to Milankovitch cycles. Serbian scientist Milutin Milankovitch hypothesized a century ago that the collective effects of changes in Earth's position relative to the sun -- due to slow variations of its orbit and axis -- are a strong driver of Earth's long-term climate, including the start and end of ice ages (prior to any significant human influence on the climate).

"I am particularly excited that our result confirms a fundamental prediction of the theory used to explain Earth's ice-age climate cycles: that the intensity of sunlight controls summertime temperatures in the polar regions, and thus melt of ice, too," said Kurt Cuffey, a co-author on the study and professor at the University of California Berkeley.

These more highly detailed data on long-term climate patterns of the past also provide an important baseline for other scientists, who study the impacts of human-caused greenhouse gas emissions on our present and future climate. By knowing which planetary cycles occur naturally and why, researchers can better identify the human influence on climate change and its impacts on global temperatures.

"This research is something that humans can really relate to because we partly experience the world through the changing seasons -- documenting how summer and winter temperature varied through time translates to how we understand climate," said Jones.

Finer definition amidst diffusion

Scientists around the world have long studied Earth's past climate using ice cores gathered from the poles. These slender, cylindrical columns of ice, drilled from ancient ice sheets (mostly in Antarctica and Greenland), provide valuable long-term data trapped in time about everything from past atmospheric concentrations of greenhouse gases to past temperatures of the air and oceans.

The West Antarctic Ice Sheet (WAIS) Divide ice core, the longest ice core ever drilled by U.S. researchers, measures 11,171 feet (over 2 miles) long and 4.8 inches in diameter, and contains data from as far back as 68,000 years ago. Ice cores like this one are carefully cut into smaller sections that can be safely transported to and stored or analyzed in ice core labs around the country -- like the Stable Isotope Lab at CU Boulder.

For this study, researchers analyzed a continuous record of water-isotope ratios from the WAIS ice core. The ratios between the concentration of these isotopes (elements with the same number of protons but different numbers of neutrons) reveal data about past temperatures and atmospheric circulation, including transitions between ice ages and warm periods in Earth's past.

Measuring seasonal changes in our planet's history from ice cores is especially difficult, however, due to the fine detail required for their shorter timescales. A process within ice sheets known as diffusion, or natural smoothing, can blur this needed detail.

These water isotopes tend to not stay in one place in the upper ice sheet, but instead move around in interconnected pathways (similar to the air pockets in Styrofoam) as they change states between vapor and ice, over decades or centuries, before sufficiently solidifying. This process can "blur" the data researchers are trying to examine. But by using the high-quality ice cores from the West Antarctic Ice Sheet, extremely high-resolution measurements and advances in ice core analysis from the past 15 years, the team was able to correct for the diffusion present in the data and complete the study.

"Even beyond that, we had to develop new methods entirely to deal with this data, because no one's ever seen it before. We had to go above and beyond what anyone's done in the past," said Jones.

Studying stable isotopes


While the study details the history of Earth's climate, the work behind it has a history of its own.

For more than three decades, researchers at INSTAAR's Stable Isotope Lab have been studying a variety of stable isotopes -- nonradioactive forms of atoms with unique molecular signatures -- found everywhere from inside ice cores and the carbon in permafrost to the air in our atmosphere. Jones joined the lab in 2007 as a master's student and has never left.

"I have this distinct memory of walking into my advisor, Jim White's office in about 2013, and showing him that we would be able to pull out summer and winter values in this record for the last 11,000 years -- which is extremely rare. In our understanding, no one had ever done this before," said Jones. "We looked at each other and said, 'Wow, this is going to be a really big deal.'"

It then took almost a decade to figure out the proper way to interpret the data, from ice cores drilled many years before that meeting.

Bruce Vaughn, a co-author, chief scientist on the project and manager of the Stable Isotope Lab, and Bradley Markle, a co-author and assistant professor at INSTAAR and the Department of Geology, were there to collect the ice in West Antarctica that was shipped back and analyzed.

The team's next step is to attempt to interpret high-resolution ice cores in other places -- such as the South Pole and in northeast Greenland, where cores have already been drilled -- to better understand our planet's climate variability.

Read more at Science Daily

Placebo reduces feelings of guilt

People don't always behave impeccably toward others. When we notice that we have inadvertently caused harm, we often feel guilty. This is an uncomfortable feeling that motivates us to take remedial action, such as apologizing or owning up.

This is why guilt is considered an important moral emotion, as long as it is adaptive -- in other words, appropriate and in proportion to the situation. "It can improve interpersonal relationships and is therefore valuable for social cohesion," says Dilan Sezer, researcher at the Division of Clinical Psychology and Psychotherapy at the University of Basel.

Whether feelings of guilt can be reduced by taking placebos is something that researchers at the Faculty of Psychology at the University of Basel have been exploring. Their findings have now been published in the journal Scientific Reports.

Open-label placebos work

In order to arouse feelings of guilt, test subjects in the study were asked to write about a time when they had disregarded important rules of conduct, or treated someone close to them unfairly, hurt or even harmed them. The idea was that the study participants should still feel bad about the chosen situation.

Participants were then randomized to one of three conditions: one group was given placebo pills and deceptively told that they were a real medication, while another group was told openly that they were receiving a placebo. Both groups were told that what they had been given would be effective against feelings of guilt. The control group was given no treatment at all. The results showed that feelings of guilt were significantly reduced in both placebo groups compared with the untreated group.

This was also the case when the subjects knew they had been given a placebo. "Our study therefore supports the intriguing finding that placebos work even when they are administered openly, and that explanation of the treatment is key to its effectiveness," states the study's lead author, Dilan Sezer. Participants in this study were all healthy, had no psychiatric disorders and were not being treated with psychotropics.

Clinical applicability not yet proven

Where feelings of guilt are irrational and continue for longer periods of time, they are considered maladaptive -- in other words, disproportionate. These emotions can affect people's health and are also, among other things, a common symptom of depression.

Scientific studies have shown that placebo effects can be powerful in treating depression. But the finding that open-label placebos can also be useful for such strong emotions as guilt is new. It stands to reason, says Dilan Sezer, that we should try to harness these effects to help those affected. "The administering of open-label placebos, in particular, is a promising approach, as it preserves patient autonomy by allowing patients to be fully aware of how the intervention works." The results of the study are an initial promising step in the direction of symptom-specific and more ethical treatments for psychological complaints using open-label placebos, Sezer continues.

Read more at Science Daily

Fall rate nearly 50% among older Americans with dementia

Falls cause millions of injuries in older adults each year and are an increasingly important public health concern. Older adults living with dementia have twice the risk of falling and three times the risk of incurring serious fall-related injuries, such as fractures, compared to those without dementia. For older adults with dementia, even minor fall-related injuries can lead to hospitalization and nursing home admission. A new study from researchers in Drexel University's College of Nursing and Health Professions has shed light on the many and varied fall-risk factors facing older adults in community-living environments.

Recently published in Alzheimer's & Dementia: The Journal of the Alzheimer's Association, the research led by Safiyyah Okoye, PhD, an assistant professor at Drexel, and Jennifer L. Wolff, PhD, a professor at Johns Hopkins Bloomberg School of Public Health, examined a comprehensive set of potential fall-risk factors -- including environmental factors, in addition to health and function -- in older community-living adults in the United States, both with and without dementia.

"Examining the multiple factors, including environmental ones like a person's home or neighborhood, is necessary to inform fall-risk screening, caregiver education and support, and prevention strategies for this high-risk population of older adults," said Okoye.

Despite awareness of this elevated risk, very few studies have examined fall-risk factors among people with dementia living in a community setting (not nursing homes or other residential facilities). The studies that do exist overwhelmingly focus on health and function factors. According to the authors, this is the first nationally representative study to compare a comprehensive set of potential risk factors for falls in older Americans living with dementia to those without dementia.

The research team examined data from the 2015 and 2016 National Health and Aging Trends Study (NHATS), a population-based survey of health and disability trends and trajectories of adults 65 and older in the U.S. They were able to obtain potential sociodemographic, health and function predictors of falls, as well as potential social and physical environmental predictors.

Data from NHATS showed that nearly half (45.5%) of older adults with dementia had experienced one or more falls in 2016, compared to less than one third (30.9%) of older adults without dementia.

Among older adults living with dementia, three characteristics stood out as significantly associated with a greater likelihood of falls: a history of falling the previous year; impaired vision; and living with others (versus alone). For older adults without dementia, financial hardship, a history of falling, fear of falling, poor lower extremity performance, depressive symptoms and home disrepair were strongly associated with increased risk of falls.

Prior history of falling and vision impairment are well-known risk factors for falls among older adults in general, and the researchers' findings indicate that they are strong risk factors for falls among people living with dementia as well. According to the team, this suggests that people living with dementia should be assessed for these characteristics. If they are present, the individuals should receive further assessment and treatment, including examination of their feet and footwear and assessment of their environment and ability to carry out daily living activities, among other items.

The finding that older adults living with dementia who lived with a spouse or with non-spousal others had higher odds of experiencing a fall, compared to those who lived alone, highlights that caregiver support and education are understudied components of fall prevention programs for older adults with dementia who live with family caregivers, and deserve greater attention from clinicians, researchers and policy makers.

"Overall, our findings demonstrate the importance of understanding and addressing fall-risk among older adults living with dementia," said Okoye. "It confirms that fall-risk is multidimensional and influenced by environmental context in addition to health and function factors."

The results of the study indicate the need to further investigate and design fall-prevention interventions, specifically for people living with dementia.

Read more at Science Daily

Jan 11, 2023

Planetary system's second Earth-size world discovered

Using data from NASA's Transiting Exoplanet Survey Satellite, scientists have identified an Earth-size world, called TOI 700 e, orbiting within the habitable zone of its star -- the range of distances where liquid water could occur on a planet's surface. The world is 95% Earth's size and likely rocky.

Astronomers previously discovered three planets in this system, called TOI 700 b, c, and d. Planet d also orbits in the habitable zone. But scientists needed an additional year of TESS observations to discover TOI 700 e.

"This is one of only a few systems with multiple, small, habitable-zone planets that we know of," said Emily Gilbert, a postdoctoral fellow at NASA's Jet Propulsion Laboratory in Southern California who led the work. "That makes the TOI 700 system an exciting prospect for additional follow up. Planet e is about 10% smaller than planet d, so the system also shows how additional TESS observations help us find smaller and smaller worlds."

Gilbert presented the result on behalf of her team at the 241st meeting of the American Astronomical Society in Seattle. A paper about the newly discovered planet was accepted by The Astrophysical Journal Letters.

TOI 700 is a small, cool M dwarf star located around 100 light-years away in the southern constellation Dorado. In 2020, Gilbert and others announced the discovery of the Earth-size, habitable-zone planet d, which is on a 37-day orbit, along with two other worlds.

The innermost planet, TOI 700 b, is about 90% Earth's size and orbits the star every 10 days. TOI 700 c is over 2.5 times bigger than Earth and completes an orbit every 16 days. The planets are probably tidally locked, which means they spin only once per orbit such that one side always faces the star, just as one side of the Moon is always turned toward Earth.

TESS monitors large swaths of the sky, called sectors, for approximately 27 days at a time. These long stares allow the satellite to track changes in stellar brightness caused by a planet crossing in front of its star from our perspective, an event called a transit. The mission used this strategy to observe the southern sky starting in 2018, before turning to the northern sky. In 2020, it returned to the southern sky for additional observations. The extra year of data allowed the team to refine the original planet sizes, which are about 10% smaller than initial calculations.
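
For readers unfamiliar with the transit method, the fractional dip a planet carves out of its star's light is set, to first order, by the ratio of the planet's area to the star's area. The sketch below is a generic illustration of that relationship, not TESS pipeline code; the stellar radius is an assumed placeholder for a small M dwarf rather than a measured value for TOI 700.

```python
# Illustrative transit-depth estimate (not TESS pipeline code).
# The fractional dip in brightness during a transit is roughly (R_planet / R_star)^2.

EARTH_RADIUS_KM = 6_371
SUN_RADIUS_KM = 696_000

planet_radius_km = 0.95 * EARTH_RADIUS_KM   # TOI 700 e is about 95% of Earth's size
star_radius_km = 0.40 * SUN_RADIUS_KM       # assumed placeholder radius for a small M dwarf

transit_depth = (planet_radius_km / star_radius_km) ** 2
print(f"Fractional dip in starlight: {transit_depth * 1e6:.0f} parts per million")
```

A dip of only a few hundred parts per million, recurring once per orbit, illustrates why such a faint signal can require more than a year of monitoring to confirm.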

"If the star was a little closer or the planet a little bigger, we might have been able to spot TOI 700 e in the first year of TESS data," said Ben Hord, a doctoral candidate at the University of Maryland, College Park and a graduate researcher at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "But the signal was so faint that we needed the additional year of transit observations to identify it."

TOI 700 e, which may also be tidally locked, takes 28 days to orbit its star, placing planet e between planets c and d in the so-called optimistic habitable zone.

Scientists define the optimistic habitable zone as the range of distances from a star where liquid surface water could be present at some point in a planet's history. This area extends to either side of the conservative habitable zone, the range where researchers hypothesize liquid water could exist over most of the planet's lifetime. TOI 700 d orbits in this region.

Finding other systems with Earth-size worlds in this region helps planetary scientists learn more about the history of our own solar system.

Follow-up study of the TOI 700 system with space- and ground-based observatories is ongoing, Gilbert said, and may yield further insights into this rare system.

Read more at Science Daily

Rice breeding breakthrough to feed billions

An international team has succeeded in propagating a commercial hybrid rice strain as a clone through seeds with 95 percent efficiency. This could lower the cost of hybrid rice seed, making high-yielding, disease resistant rice strains available to low-income farmers worldwide. The work was published Dec. 27 in Nature Communications.

First-generation hybrids of crop plants often show higher performance than their parent strains, a phenomenon called hybrid vigor. But this does not persist if the hybrids are bred together for a second generation. So when farmers want to use high-performing hybrid plant varieties, they need to purchase new seed each season.

Rice, the staple crop for half the world's population, is relatively costly to breed as a hybrid for a yield improvement of about 10 percent. This means that the benefits of rice hybrids have yet to reach many of the world's farmers, said Gurdev Khush, adjunct professor emeritus in the Department of Plant Sciences at the University of California, Davis. Working at the International Rice Research Institute from 1967 until retiring to UC Davis in 2002, Khush led efforts to create new high-yield rice varieties, work for which he received the World Food Prize in 1996.

One solution to this would be to propagate hybrids as clones that would remain identical from generation to generation without further breeding. Many wild plants can produce seeds that are clones of themselves, a process called apomixis.

"Once you have the hybrid, if you can induce apomixis, then you can plant it every year," Khush said.

However, transferring apomixis to a major crop plant has proved difficult to achieve.

One Step to Cloned Hybrid Seeds

In 2019, a team led by Professor Venkatesan Sundaresan and Assistant Professor Imtiyaz Khanday at the UC Davis Departments of Plant Biology and Plant Sciences achieved apomixis in rice plants, with about 30 percent of seeds being clones.

Sundaresan, Khanday and colleagues in France, Germany and Ghana have now achieved a clonal efficiency of 95 percent, using a commercial hybrid rice strain, and shown that the process could be sustained for at least three generations.

The single-step process involves modifying three genes called MiMe, which cause the plant to switch from meiosis, the process that plants use to form egg cells, to mitosis, in which a cell divides into two copies of itself. Another gene modification induces apomixis. The result is a seed that can grow into a plant genetically identical to its parent.

The method would allow seed companies to produce hybrid seeds more rapidly and at larger scale, as well as providing seed that farmers could save and replant from season to season, Khush said.

"Apomixis in crop plants has been the target of worldwide research for over 30 years, because it can make hybrid seed production can become accessible to everyone," Sundaresan said. "The resulting increase in yields can help meet global needs of an increasing population without having to increase use of land, water and fertilizers to unsustainable levels."

The results could be applied to other food crops, Sundaresan said. In particular, rice is a genetic model for other cereal crops, including maize and wheat, which together constitute major food staples for the world.

Read more at Science Daily

It would take 23 million years for evolution to replace Madagascar's endangered mammals

In many ways, Madagascar is a biologist's dream, a real-life experiment in how isolation on an island can spark evolution. About 90% of the plants and animals there are found nowhere else on Earth. But these plants and animals are in major trouble, thanks to habitat loss, over-hunting, and climate change. Of the 219 known mammal species on the island, including 109 species of lemurs, more than 120 are endangered. A new study in Nature Communications examined how long it took Madagascar's unique modern mammal species to emerge and estimated how long it would take for a similarly complex set of new mammal species to evolve in their place if the endangered ones went extinct: 23 million years, far longer than scientists have found for any other island.

That is, simply put, really bad news. "It's abundantly clear that there are whole lineages of unique mammals that only occur on Madagascar that have either gone extinct or are on the verge of extinction, and if immediate action isn't taken, Madagascar is going to lose 23 million years of evolutionary history of mammals, which means whole lineages unique to the face of the Earth will never exist again," says Steve Goodman, MacArthur Field Biologist at Chicago's Field Museum and Scientific Officer at Association Vahatra in Antananarivo, Madagascar, and one of the paper's authors.

Madagascar is the world's fifth-largest island, about the size of France, but "in terms of all the different ecosystems present on Madagascar, it's less like an island and more like a mini-continent," says Goodman. In the 150 million years since Madagascar split from the African mainland and the 80 million since it parted ways with India, the plants and animals there have gone down their own evolutionary paths, cut off from the rest of the world. This smaller gene pool, coupled with Madagascar's wealth of different habitat types, from mountainous rainforests to lowland deserts, allowed mammals there to split into different species far more quickly than their continental relatives.

But this incredible biodiversity comes at a cost: evolution happens faster on islands, but so does extinction. Smaller populations that are specially adapted to smaller, unique patches of habitat are more vulnerable to being wiped out, and once they're gone, they're gone. More than half of the mammals on Madagascar are included on the International Union for Conservation of Nature Red List of Threatened Species, aka the IUCN Red List. These animals are endangered primarily because of human actions over the past two hundred years, especially habitat destruction and over-hunting.

An international team of Malagasy, European, and American scientists, including Goodman, collaborated to study the looming extinction of Madagascar's endangered mammals. They built a dataset of every known mammal species to coexist with humans on Madagascar for the last 2,500 years. (Humans have lived on the island, perhaps intermittently, for the past 10,000 years, but have had a continuous presence there for the last 2,500.) The scientists came up with the 219 known mammal species alive today, plus 30 more that have gone extinct over the past two millennia, including a gorilla-sized lemur that went extinct between 500 and 2,000 years ago.

Armed with this dataset of all the known Malagasy mammals that interacted with humans, the researchers built genetic family trees to establish how all these species are related to each other and how long it took them to evolve from their various common ancestors. Then, the scientists were able to extrapolate how long it took this amount of biodiversity to evolve, and thus, estimate how long it would take for evolution to "replace" all of the endangered mammals if they go extinct.

To rebuild the diversity of land-dwelling mammals that have already gone extinct over the past 2,500 years, it would take around 3 million years. But more alarmingly, the models suggested that if all the mammals that are currently endangered were to go extinct, it would take 23 million years to rebuild that level of diversity. That doesn't mean that if we let all of the lemurs and tenrecs and fossas and other unique Malagasy mammals go extinct, evolution will recreate them if we just wait around 23 million more years. "It would be simply impossible to recover them," says Goodman. Instead, the model means that achieving a similar level of evolutionary complexity, whatever those new species might look like, would take 23 million years.

Luis Valente, the study's corresponding author, says he was surprised by this finding. "It is much longer than what previous studies have found on other islands, such as New Zealand or the Caribbean," says Valente, a biologist at the Naturalis Biodiversity Center and the University of Groningen in the Netherlands. "It was already known that Madagascar was a hotspot of biodiversity, but this new research puts into context just how valuable this diversity is. These findings underline the potential gains of the conservation of nature on Madagascar from a novel evolutionary perspective."

According to Goodman, Madagascar is at a tipping point for protecting its biodiversity. "There is still a chance to fix things, but basically, we have about five years to really advance the conservation of Madagascar's forests and the organisms that those forests hold," he says.

Read more at Science Daily

Origins of the building blocks of life

A new study led by Southwest Research Institute Research Scientist Dr. Danna Qasim posits that interstellar cloud conditions may have played a significant role in the presence of key building blocks of life in the solar system.

"Carbonaceous chondrites, some of the oldest objects in the universe, are meteorites that are thought to have contributed to the origins of life. They contain several different molecules and organic substances, including amines and amino acids, which are key building blocks of life that were critical to creating life on Earth. These substances are necessary to create proteins and muscle tissue," Qasim said.

Most meteorites are fragments of asteroids that broke apart long ago in the asteroid belt, located between Mars and Jupiter. Such fragments orbit the Sun -- sometimes for millions of years -- before colliding with Earth.

One of the questions Qasim and others are trying to answer is how amino acids got into the carbonaceous chondrites in the first place. Because most meteorites come from asteroids, scientists have attempted to reproduce amino acids by simulating asteroid conditions in a laboratory setting, a process called "aqueous alteration."

"That method hasn't been 100% successful," Qasim said. "However, the make-up of asteroids originated from the parental interstellar molecular cloud, which was rich in organics. While there's no direct evidence of amino acids in interstellar clouds, there is evidence of amines. The molecular cloud could have provided the amino acids in asteroids, which passed them on to meteorites."

To determine to what extent amino acids formed from asteroid conditions and to what extent they were inherited from the interstellar molecular cloud, Qasim simulated the formation of amines and amino acids as it would occur in the interstellar molecular cloud.

"I created ices that are very common in the cloud and irradiated them to simulate the impact of cosmic rays," explained Qasim, who conducted the experiment while working at NASA's Goddard Space Flight Center in Greenbelt, Maryland, between 2020 and 2022. "This caused the molecules to break up and recombine into larger molecules, which ultimately created an organic residue."

Qasim then processed the residue again by recreating asteroid conditions through aqueous alteration and studied the substance, looking for amines and amino acids.

"No matter what kind of asteroid processing we did, the diversity of amines and amino acids from the interstellar ice experiments remained constant," she said. "That tells us that interstellar cloud conditions are quite resilient to asteroid processing. These conditions could have influenced the distribution of amino acids we find in meteorites."

However, the individual abundances of amino acids doubled, suggesting that asteroid processing influences the amount of amino acids present.

"Essentially we have to consider both the interstellar cloud conditions and processing by the asteroid to best interpret the distribution," she said.

Qasim looks forward to studies of asteroid samples from missions such as OSIRIS-REx, which is currently on its way back to Earth and will deliver samples from the asteroid Bennu in September, and Hayabusa2, which recently returned from the asteroid Ryugu, to better understand the role the interstellar cloud played in distributing the building blocks of life.

Read more at Science Daily

Jan 10, 2023

Wide diversity of galaxies in the early universe

New data from the James Webb Space Telescope (JWST) have revealed that the structures of galaxies in the early universe were much more diverse and mature than previously known. Scientists compared images of hundreds of galaxies taken by JWST for the Cosmic Evolution Early Release Science (CEERS) Survey with corresponding images previously taken by the Hubble Space Telescope and presented the results at the 241st meeting of the American Astronomical Society.

The study examined 850 galaxies at redshifts of 3 through 9, corresponding to roughly 11-13 billion years ago. Associate Professor Jeyhan Kartaltepe from Rochester Institute of Technology's School of Physics and Astronomy said that JWST's ability to see faint, high-redshift galaxies in sharper detail than Hubble allowed the team of researchers to resolve more features and see a wide mix of galaxies, including many with mature features such as disks and spheroidal components.
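
For readers who want to check that time span, the redshift-to-lookback-time conversion can be sketched in a few lines of Python with the astropy library (a tool chosen here for illustration; the study does not say what software it used):

    # Convert the survey's redshift range into lookback times under the Planck 2018
    # cosmology, to check the "roughly 11-13 billion years ago" figure quoted above.
    from astropy.cosmology import Planck18

    for z in (3, 9):
        t_gyr = Planck18.lookback_time(z).to("Gyr").value
        print(f"z = {z}: light emitted about {t_gyr:.1f} billion years ago")
    # Prints roughly 11.6 billion years for z = 3 and 13.2 billion years for z = 9.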

"There have been previous studies emphasizing that we see a lot of galaxies with disks at high redshift, which is true, but in this study we also see a lot of galaxies with other structures, such as spheroids and irregular shapes, as we do at lower redshifts," said Kartaltepe, lead author on the paper and CEERS co-investigator. "This means that even at these high redshifts, galaxies were already fairly evolved and had a wide range of structures."

The results of the study, which have been posted to arXiv and accepted for publication in The Astrophysical Journal, demonstrate JWST's advances in depth, resolution, and wavelength coverage compared to Hubble. Out of the 850 galaxies used in the study that were previously identified by Hubble, 488 were reclassified with different morphologies after being shown in more detail with JWST. Kartaltepe said scientists are just beginning to reap the benefits of JWST's impressive capabilities and are excited by what forthcoming data will reveal.

"This tells us that we don't yet know when the earliest galaxy structures formed," said Kartaltepe. "We're not yet seeing the very first galaxies with disks. We'll have to examine a lot more galaxies at even higher redshifts to really quantify at what point in time features like disks were able to form."

The study used an initial data set captured by CEERS when JWST first came online in June, but the survey has since captured a total of 60 observing hours, potentially providing thousands of high redshift galaxies to further explore. Kartaltepe said COSMOS-Web, the largest General Observer program selected for JWST's first year, will provide an even larger sample through 255 hours of observing time with the telescope. COSMOS-Web began its observing campaign this month.

Read more at Science Daily

Astronomers find the most distant stars in our galaxy halfway to Andromeda

Astronomers have discovered more than 200 distant variable stars known as RR Lyrae stars in the Milky Way's stellar halo. The most distant of these stars is more than a million light years from Earth, almost half the distance to our neighboring galaxy, Andromeda, which is about 2.5 million light years away.

The characteristic pulsations and brightness of RR Lyrae stars make them excellent "standard candles" for measuring galactic distances. These new observations allowed the researchers to trace the outer limits of the Milky Way's halo.

"This study is redefining what constitutes the outer limits of our galaxy," said Raja GuhaThakurta, professor and chair of astronomy and astrophysics at UC Santa Cruz. "Our galaxy and Andromeda are both so big, there's hardly any space between the two galaxies."

GuhaThakurta explained that the stellar halo component of our galaxy is much bigger than the disk, which is about 100,000 light years across. Our solar system resides in one of the spiral arms of the disk. In the middle of the disk is a central bulge, and surrounding it is the halo, which contains the oldest stars in the galaxy and extends for hundreds of thousands of light years in every direction.

"The halo is the hardest part to study because the outer limits are so far away," GuhaThakurta said. "The stars are very sparse compared to the high stellar densities of the disk and the bulge, but the halo is dominated by dark matter and actually contains most of the mass of the galaxy."

Yuting Feng, a doctoral student working with GuhaThakurta at UCSC, led the new study and is presenting their findings in two talks at the American Astronomical Society meeting in Seattle on January 9 and 11.

According to Feng, previous modeling studies had calculated that the stellar halo should extend out to around 300 kiloparsecs or 1 million light years from the galactic center. (Astronomers measure galactic distances in kiloparsecs; one kiloparsec is equal to 3,260 light years.) The 208 RR Lyrae stars detected by Feng and his colleagues ranged in distance from about 20 to 320 kiloparsecs.
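
The unit arithmetic behind those figures is simple enough to check directly (plain multiplication, not code from the study):

    # One kiloparsec is about 3,260 light years.
    LY_PER_KPC = 3260

    print(300 * LY_PER_KPC)  # 978,000 light years: the predicted halo edge, roughly 1 million
    print(320 * LY_PER_KPC)  # 1,043,200 light years: the most distant RR Lyrae stars in the sample
    print(20 * LY_PER_KPC)   # 65,200 light years: the nearest stars in the 208-star sample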

"We were able to use these variable stars as reliable tracers to pin down the distances," Feng said. "Our observations confirm the theoretical estimates of the size of the halo, so that's an important result."

The findings are based on data from the Next Generation Virgo Cluster Survey (NGVS), a program using the Canada-France-Hawaii Telescope (CFHT) to study a cluster of galaxies well beyond the Milky Way. The survey was not designed to detect RR Lyrae stars, so the researchers had to dig them out of the dataset. The Virgo Cluster is a large cluster of galaxies that includes the giant elliptical galaxy M87.

"To get a deep exposure of M87 and the galaxies around it, the telescope also captured the foreground stars in the same field, so the data we used are sort of a by-product of that survey," Feng explained.

According to GuhaThakurta, the excellent quality of the NGVS data enabled the team to obtain the most reliable and precise characterization of RR Lyrae at these distances. RR Lyrae are old stars with very specific physical properties that cause them to expand and contract in a regularly repeating cycle.

"The way their brightness varies looks like an EKG -- they're like the heartbeats of the galaxy -- so the brightness goes up quickly and comes down slowly, and the cycle repeats perfectly with this very characteristic shape," GuhaThakurta said. "In addition, if you measure their average brightness, it is the same from star to star. This combination is fantastic for studying the structure of the galaxy."

The sky is full of stars, some brighter than others, but a star may look bright because it is very luminous or because it is very close, and it can be hard to tell the difference. Astronomers can identify an RR Lyrae star from its characteristic pulsations, then use its observed brightness to calculate how far away it is. The procedures are not simple, however. More distant objects, such as quasars, can masquerade as RR Lyrae stars.
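
The standard-candle step described above comes down to the distance modulus. A minimal sketch, assuming a typical RR Lyrae absolute magnitude of about +0.6 (an approximate textbook value, not a number taken from this study):

    def distance_parsecs(m_apparent, m_absolute=0.6):
        """Distance in parsecs from the distance modulus: m - M = 5*log10(d) - 5."""
        return 10 ** ((m_apparent - m_absolute + 5) / 5)

    # A hypothetical RR Lyrae star observed at apparent magnitude 23 would lie at about:
    d = distance_parsecs(23.0)
    print(f"{d:,.0f} parsecs (~{d / 1000:.0f} kiloparsecs)")  # ~302,000 pc, or ~300 kpc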

"Only astronomers know how painful it is to get reliable tracers of these distances," Feng said. "This robust sample of distant RR Lyrae stars gives us a very powerful tool for studying the halo and testing our current models of the size and mass of our galaxy."

Read more at Science Daily

Warming oceans have decimated marine parasites -- but that's not a good thing

More than a century of preserved fish specimens offer a rare glimpse into long-term trends in parasite populations. New research from the University of Washington shows that fish parasites plummeted from 1880 to 2019, a 140-year stretch when Puget Sound -- their habitat and the second largest estuary in the mainland U.S. -- warmed significantly.

The study, published the week of Jan. 9 in the Proceedings of the National Academy of Sciences, is the world's largest and longest dataset of wildlife parasite abundance. It suggests that parasites may be especially vulnerable to a changing climate.

"People generally think that climate change will cause parasites to thrive, that we will see an increase in parasite outbreaks as the world warms," said lead author Chelsea Wood, a UW associate professor of aquatic and fishery sciences. "For some parasite species that may be true, but parasites depend on hosts, and that makes them particularly vulnerable in a changing world where the fate of hosts is being reshuffled."

While some parasites have a single host species, many parasites travel between host species. Eggs are carried in one host species, the larvae emerge and infect another host, and the adult may reach maturity in a third host before laying eggs.

For parasites that rely on three or more host species during their lifecycle -- including more than half the parasite species identified in the study's Puget Sound fish -- analysis of historic fish specimens showed an 11% average decline per decade in abundance. Of 10 parasite species that had disappeared completely by 1980, nine relied on three or more hosts.
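
To put that rate in perspective, here is what an 11% average decline per decade compounds to over the full study period (illustrative arithmetic only, not the paper's statistical model):

    decline_per_decade = 0.11
    decades = 14  # 1880 to 2019 spans roughly 14 decades

    remaining = (1 - decline_per_decade) ** decades
    print(f"about {remaining:.0%} of the original abundance remains")  # roughly 20%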

"Our results show that parasites with one or two host species stayed pretty steady, but parasites with three or more hosts crashed," Wood said. "The degree of decline was severe. It would trigger conservation action if it occurred in the types of species that people care about, like mammals or birds."

And while parasites inspire fear or disgust -- especially for people who associate them with illness in themselves, their kids or their pets -- the result is worrying news for ecosystems, Wood said.

"Parasite ecology is really in its infancy, but what we do know is that these complex-lifecycle parasites probably play an important role in pushing energy through food webs and in supporting top apex predators," Wood said. She is one of the authors of a 2020 report laying out a conservation plan for parasites.

Wood's study is among the first to use a new method for resurrecting information on parasite populations of the past. Mammals and birds are preserved with taxidermy, which retains parasites only on skin, feathers or fur. But fish, reptile and amphibian specimens are preserved in fluid, which also preserves any parasites living inside the animal at the time of its death.

The study focused on eight species of fish that are common in the behind-the-scenes collections of natural history museums. Most came from the UW Fish Collection at the Burke Museum of Natural History and Culture. The authors carefully sliced into the preserved fish specimens and then identified and counted the parasites they discovered inside before returning the specimens to the museums.

"It took a long time. It's certainly not for the faint of heart," Wood said. "I'd love to stick these fish in a blender and use a genomic technique to detect their parasites' DNA, but the fish were first preserved with a fluid that shreds DNA. So what we did was just regular old shoe-leather parasitology."

Among the multi-celled parasites they found were arthropods, or animals with an exoskeleton, including crustaceans, as well as what Wood describes as "unbelievably gorgeous tapeworms": the Trypanorhyncha, whose heads are armed with hook-covered tentacles. In total, the team counted 17,259 parasites, of 85 types, from 699 fish specimens.

To explain the parasite declines, the authors considered three possible causes: how abundant the host species was in Puget Sound; pollution levels; and temperature at the ocean's surface. The variable that best explained the decline in parasites was sea surface temperature, which rose by 1 degree Celsius (1.8 degrees Fahrenheit) in Puget Sound from 1950 to 2019.

A parasite that requires multiple hosts is like a delicate Rube Goldberg machine, Wood said. The complex series of steps they face to complete their lifecycle makes them vulnerable to disruption at any point along the way.

"This study demonstrates that major parasite declines have happened in Puget Sound. If this can happen unnoticed in an ecosystem as well studied as this one, where else might it be happening?" Wood said. "I hope our work inspires other ecologists to think about their own focal ecosystems, identify the right museum specimens, and see whether these trends are unique to Puget Sound, or something that is occurring in other places as well.

"Our result draws attention to the fact that parasitic species might be in real danger," Wood added. "And that could mean bad stuff for us -- not just fewer worms, but less of the parasite-driven ecosystem services that we've come to depend on."

Read more at Science Daily

Fewer cases of melanoma among people taking vitamin D supplements

Fewer cases of melanoma were observed among regular users of vitamin D supplements than among non-users, a new study finds. People taking vitamin D supplements regularly also had a considerably lower risk of skin cancer, according to estimates by experienced dermatologists. The study, conducted in collaboration between the University of Eastern Finland and Kuopio University Hospital and published in Melanoma Research, included nearly 500 people with an increased risk of skin cancer.

Vitamin D plays a key role in the normal function of the human body, and it may also play a role in many diseases. The link between vitamin D and skin cancers has been studied abundantly in the past, but these studies have mainly focused on serum levels of calcidiol, a metabolite of vitamin D, and its association with skin cancers. Findings from these studies have been inconclusive and even contradictory at times, as serum calcidiol levels have been associated with both a slightly higher and a slightly lower risk of different skin cancers. This may, in part, be explained by the fact that serum calcidiol analyses do not provide information on the metabolism of vitamin D in the human skin, which can express enzymes that generate biologically active vitamin D metabolites or inactivate them.

The new study, conducted under the North Savo Skin Cancer Programme, took a different approach: 498 adult patients estimated to have an increased risk of a skin cancer, such as basal cell carcinoma, squamous cell carcinoma or melanoma, were recruited at the dermatological outpatient clinic of Kuopio University Hospital. Experienced dermatologists at the University of Eastern Finland carefully analysed the patients' background information and medical history and examined their skin. The dermatologists also classified the patients into different skin cancer risk classes, namely low risk, moderate risk and high risk. Based on their use of oral vitamin D supplements, the patients were divided into three groups: non-users, occasional users and regular users. Serum calcidiol levels were analysed in half of the patients and found to correspond to their self-reported use of vitamin D.

A key finding of the study is that there were considerably fewer cases of melanoma among regular users of vitamin D than among non-users, and that the skin cancer risk classification of regular users was considerably better than non-users'. Logistic regression analysis showed that the risk for melanoma among regular users was considerably reduced, more than halved, compared to non-users.
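
For readers unfamiliar with how a logistic-regression result translates into "more than halved," here is a sketch using invented counts (these numbers are purely illustrative and are not the study's data):

    # Logistic regression estimates an odds ratio between groups while adjusting for
    # other variables; a ratio below about 0.5 is what "more than halved" refers to
    # here, and for an uncommon outcome the odds ratio roughly tracks the risk ratio.
    melanoma_users, healthy_users = 10, 90          # hypothetical regular users
    melanoma_nonusers, healthy_nonusers = 30, 120   # hypothetical non-users

    odds_users = melanoma_users / healthy_users            # about 0.11
    odds_nonusers = melanoma_nonusers / healthy_nonusers   # 0.25
    print(f"odds ratio = {odds_users / odds_nonusers:.2f}")  # about 0.44, i.e. less than half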

The findings suggest that even occasional users of vitamin D may have a lower risk for melanoma than non-users. However, there was no statistically significant association between the use of vitamin D and the severity of photoaging, facial photoaging, actinic keratoses, nevus count, basal cell carcinoma and squamous cell carcinoma. Serum calcidiol levels were not significantly associated with these skin changes, either. Since the research design was cross-sectional, the researchers were unable to demonstrate a causal relationship.

Other relatively recent studies, too, have provided evidence of the benefits of vitamin D in melanoma, such as an association of vitamin D with less aggressive melanoma.

"These earlier studies back our new findings from the North Savo region here in Finland. However, the question about the optimal dose of oral vitamin D in order to for it to have beneficial effects remains to be answered. Until we know more, national intake recommendations should be followed," Professor of Dermatology and Allergology Ilkka Harvima of the University of Eastern Finland notes.

Read more at Science Daily

Jan 9, 2023

How evolution works

With its powerful digging shovels, the European mole can burrow through the soil with ease. The same applies to the Australian marsupial mole. Although the two animal species live far apart, they have developed similar organs in the course of evolution -- in their case, extremities ideally adapted for digging in the soil.

Scientists speak of "convergent evolution" in such cases, when animal or plant species independently develop features that have the same shape and function. There are many examples of this: fish have fins, as do whales, although whales are mammals. Birds and bats have wings, and when it comes to using poisonous substances to defend themselves against attackers, many creatures, from jellyfish to scorpions to insects, have evolved the same instrument: the venomous sting.

Identical characteristics despite lack of relationship

Scientists around the world are therefore interested in finding out which changes in the genetic material of the respective species are responsible for identical characteristics evolving in lineages that are not closely related.

The search for this is proving difficult: "Such traits -- we speak of phenotypes -- are of course always encoded in genome sequences," says plant physiologist Dr. Kenji Fukushima of the Julius-Maximilians-Universität (JMU) Würzburg. Mutations -- changes in the genetic material -- can be the triggers for the development of new traits.

However, genetic changes rarely lead to phenotypic evolution because the underlying mutations are largely random and neutral. A tremendous number of mutations therefore accumulates over the extreme time scales at which evolutionary processes occur, making the detection of phenotypically important changes extremely difficult.

Novel metric of molecular evolution

Now, Fukushima and his colleague David D. Pollock of the University of Colorado (USA) have succeeded in developing a method that achieves significantly better results than previously used methods in the search for the genetic basis of phenotypic traits. They present their approach in the current issue of the journal Nature Ecology & Evolution.

"We have developed a novel metric of molecular evolution that can accurately represent the rate of convergent evolution in protein-coding DNA sequences," says Fukushima, describing the main result of the now-published work. This new method, he says, can reveal which genetic changes are associated with the phenotypes of organisms on an evolutionary time scale of hundreds of millions of years. It thus offers the possibility of expanding our understanding of how changes in DNA lead to phenotypic innovations that give rise to a great diversity of species.

Tremendous treasure trove of data as a basis

A key development in the life sciences forms the basis of Fukushima's and Pollock's work: the fact that in recent years more and more genome sequences of many living organisms across the diversity of species have been decoded and thus made accessible for analysis. "This has made it possible to study the interrelationships of genotypes and phenotypes on a large scale at a macroevolutionary level," Fukushima says.

However, because many molecular changes are nearly neutral and do not affect any traits, there is often a risk of "false-positive convergence" when interpreting the data -- that is, the result predicts a correlation between a mutation and a particular trait that does not actually exist. In addition, methodological biases could also be responsible for such false-positive convergences.

Correlations over millions of years


"To overcome this problem, we expanded the framework and developed a new metric that measures the error-adjusted convergence rate of protein evolution," Fukushima explains. This, he says, makes it possible to distinguish natural selection from genetic noise and phylogenetic errors in simulations and real-world examples. Enhanced with a heuristic algorithm, the approach enables bidirectional searches for genotype-phenotype associations, even in lineages that have diverged over hundreds of millions of years, he says.

The two scientists analyzed more than 20 million branch combinations in vertebrate genes to examine how well the metric they developed works. In a next step, they plan to apply this method to carnivorous plants. The goal is to decipher the genetic basis that is partly responsible for these plants' ability to attract, capture and digest prey.

Read more at Science Daily

Turning plastic waste into a valuable soil additive

University of California, Riverside, scientists have moved a step closer to finding a use for the hundreds of millions of tons of plastic waste produced every year that often winds up clogging streams and rivers and polluting our oceans.

In a recent study, Kandis Leslie Abdul-Aziz, a UCR assistant professor of chemical and environmental engineering, and her colleagues detailed a method to convert plastic waste into a highly porous form of charcoal or char that has a whopping surface area of about 400 square meters per gram of mass.

Such charcoal captures carbon and could potentially be added to soil to improve soil water retention and aeration of farmlands. It could also fertilize the soil as it naturally breaks down. Abdul-Aziz, however, cautioned that more work needs to be done to substantiate the utility of such char in agriculture.

The plastic-to-char process was developed at UC Riverside's Marlan and Rosemary Bourns College of Engineering. It involved mixing one of two common types of plastic with corn waste -- the leftover stalks, leaves, husks, and cobs -- collectively known as corn stover. The mix was then cooked with highly compressed hot water, a process known as hydrothermal carbonization.

The highly porous char was produced using polystyrene, the plastic used for Styrofoam packaging, and polyethylene terephthalate, or PET, the material commonly used to make water and soda bottles, among many other products.

The study followed an earlier successful effort to use corn stover alone to make activated charcoal used to filter pollutants from drinking water. In the earlier study, charcoal made from corn stover alone activated with potassium hydroxide was able to absorb 98% of the pollutant vanillin from test water samples.

In the follow-up study, Abdul-Aziz and her colleagues wanted to know if activated charcoal made from a combination of corn stover and plastic also could be an effective water treatment medium. If so, plastic waste could be repurposed to clean up water pollution. But the activated charcoal made from the mix absorbed only about 45% of vanillin in test water samples -- making it ineffective for water cleanups, she said.

"We theorize that there could be still some residual plastic on the surface of the materials, which is preventing the absorption of some of these (vanillin) molecules on the surface," she said.

Still, the ability to make highly porous charcoal by combining plastic and plant biomass waste is an important discovery, as detailed in the paper, "Synergistic and Antagonistic Effects of the Co-Pyrolysis of Plastics and Corn Stover to Produce Char and Activated Carbon," published in the journal ACS Omega. The lead author is Mark Gale, a former UCR doctoral student who is now a lecturer at Harvey Mudd College. UCR undergraduate student Peter Nguyen is a co-author and Abdul-Aziz is the corresponding author.

"It could be a very useful biochar because it is a very high surface area material," Abdul-Aziz said. "So, if we just stop at the char and not make it in that turn into activated carbon, I think there are a lot of useful ways that we can utilize it."

Plastic is essentially a solid form of petroleum that accumulates in the environment, where it pollutes, entangles, and chokes and kills fish, birds, and other animals that inadvertently ingest it. Plastics also break down into microparticles that can get into our bodies and damage cells or induce inflammatory and immune reactions.

Unfortunately, it costs more to recycle used plastic than it costs to make new plastic from petroleum.

Abdul-Aziz's laboratory takes a different approach to recycling. It is devoted to putting pernicious waste products such as plastic and plant biomass waste back into the economy by upcycling them into valuable commodities.

Read more at Science Daily

Study reveals average age at conception for men versus women over past 250,000 years

The length of a specific generation can tell us a lot about the biology and social organization of humans. Now, researchers at Indiana University can determine the average age that women and men had children throughout human evolutionary history with a new method they developed using DNA mutations.

The researchers said this work can help us understand the environmental challenges experienced by our ancestors and may also help us in predicting the effects of future environmental change on human societies.

"Through our research on modern humans, we noticed that we could predict the age at which people had children from the types of DNA mutations they left to their children," said study co-author Matthew Hahn, Distinguished Professor of biology in the College of Arts and Sciences and of computer science in the Luddy School of Informatics, Computing and Engineering at IU Bloomington. "We then applied this model to our human ancestors to determine what age our ancestors procreated."

According to the study, published today in Science Advances and co-authored by IU post-doctoral researcher Richard Wang, the average age that humans had children throughout the past 250,000 years is 26.9. Furthermore, fathers were consistently older, at 30.7 years on average, than mothers, at 23.2 years on average, but the age gap has shrunk in the past 5,000 years, with the study's most recent estimates of maternal age averaging 26.4 years. The shrinking gap seems to largely be due to mothers having children at older ages.
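
A quick consistency check on those averages (plain arithmetic; the study's estimates come from mutation data rather than from this calculation):

    paternal_age, maternal_age = 30.7, 23.2
    print((paternal_age + maternal_age) / 2)  # 26.95, in line with the overall average of about 26.9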

Other than the recent uptick in maternal age at childbirth, the researchers found that parental age has not increased steadily from the past and may have dipped around 10,000 years ago because of population growth coinciding with the rise of civilization.

"These mutations from the past accumulate with every generation and exist in humans today," Wang said. "We can now identify these mutations, see how they differ between male and female parents, and how they change as a function of parental age."

Children's DNA inherited from their parents contains roughly 25 to 75 new mutations, which allows scientists to compare the parents and offspring, and then to classify the kind of mutation that occurred. When looking at mutations in thousands of children, IU researchers noticed a pattern: The kinds of mutations that children get depend on the ages of the mother and the father.

Previous genetic approaches to estimating historical generation times relied on the compounding effects of either recombination or mutation in the divergence of modern human DNA sequences from ancient samples. But those results were averaged across both males and females and across the past 40,000 to 45,000 years.

Hahn, Wang and their co-authors built a model that uses de novo mutations -- genetic alterations that appear for the first time in one family member, either through a variant or mutation in a germ cell of one of the parents or arising in the fertilized egg during early embryogenesis -- to separately estimate the male and female generation times at many different points throughout the past 250,000 years.
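
A deliberately simplified sketch of that idea, using synthetic numbers (the model in the paper is more sophisticated and works with the full spectrum of mutation classes): learn how the mix of de novo mutation types shifts with parental age in present-day parent-child trios, then invert that relationship to estimate parental age from mutations observed in ancestral DNA.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training data: for 1,000 trios, the parent's age and the fraction of
    # the child's de novo mutations that fall into one illustrative mutation class.
    ages = rng.uniform(18, 45, size=1000)
    fraction = 0.30 + 0.004 * ages + rng.normal(0, 0.02, size=1000)

    # Fit a simple linear relationship: fraction = intercept + slope * age.
    slope, intercept = np.polyfit(ages, fraction, deg=1)

    # Invert it to estimate the average parental age behind an observed ancestral spectrum.
    observed_fraction = 0.41
    print(f"estimated parental age: {(observed_fraction - intercept) / slope:.1f} years")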

The researchers were not originally seeking to understand the relationship between gender and age at conception over time; they were conducting a broader investigation into the number of mutations passed from parents to children. They only noticed the age-based mutation patterns while seeking to understand differences and similarities between these patterns in humans versus other mammals, such as cats, bears and macaques.

"The story of human history is pieced together from a diverse set of sources: written records, archaeological findings, fossils, etc.," Wang said. "Our genomes, the DNA found in every one of our cells, offer a kind of manuscript of human evolutionary history. The findings from our genetic analysis confirm some things we knew from other sources (such as the recent rise in parental age), but also offer a richer understanding of the demography of ancient humans. These findings contribute to a better understanding of our shared history."

Read more at Science Daily

Solar-powered system converts plastic and greenhouse gases into sustainable fuels

Researchers have developed a system that can transform plastic waste and greenhouse gases into sustainable fuels and other valuable products -- using just the energy from the Sun.

The researchers, from the University of Cambridge, developed the system, which can convert two waste streams into two chemical products at the same time -- the first time this has been achieved in a solar-powered reactor.

The reactor converts the carbon dioxide (CO2) and plastics into different products that are useful in a range of industries. In tests, CO2 was converted into syngas, a key building block for sustainable liquid fuels, and plastic bottles were converted into glycolic acid, which is widely used in the cosmetics industry. The system can easily be tuned to produce different products by changing the type of catalyst used in the reactor.

Converting plastics and greenhouse gases -- two of the biggest threats facing the natural world -- into useful and valuable products using solar energy is an important step in the transition to a more sustainable, circular economy. The results are reported in the journal Nature Synthesis.

"Converting waste into something useful using solar energy is a major goal of our research," said Professor Erwin Reisner from the Yusuf Hamied Department of Chemistry, the paper's senior author. "Plastic pollution is a huge problem worldwide, and often, many of the plastics we throw into recycling bins are incinerated or end up in landfill."

Reisner also leads the Cambridge Circular Plastics Centre (CirPlas), which aims to eliminate plastic waste by combining blue-sky thinking with practical measures.

Other solar-powered 'recycling' technologies hold promise for addressing plastic pollution and for reducing the amount of greenhouse gases in the atmosphere, but to date, they have not been combined in a single process.

"A solar-driven technology that could help to address plastic pollution and greenhouse gases at the same time could be a game-changer in the development of a circular economy," said Subhajit Bhattacharjee, the paper's co-first author.

"We also need something that's tuneable, so that you can easily make changes depending on the final product you want," said co-first author Dr Motiar Rahaman.

The researchers developed an integrated reactor with two separate compartments: one for plastic, and one for greenhouse gases. The reactor uses a light absorber based on perovskite -- a promising alternative to silicon for next-generation solar cells.

The team designed different catalysts, which were integrated into the light absorber. By changing the catalyst, the researchers could then change the end product. Tests of the reactor under normal temperature and pressure conditions showed that the reactor could efficiently convert PET plastic bottles and CO2 into different carbon-based fuels such as CO, syngas or formate, in addition to glycolic acid. The Cambridge-developed reactor produced these products at a rate that is also much higher than conventional photocatalytic CO2 reduction processes.

"Generally, CO2 conversion requires a lot of energy, but with our system, basically you just shine a light at it, and it starts converting harmful products into something useful and sustainable," said Rahaman. "Prior to this system, we didn't have anything that could make high-value products selectively and efficiently."

"What's so special about this system is the versatility and tuneability -- we're making fairly simple carbon-based molecules right now, but in future, we could be able to tune the system to make far more complex products, just by changing the catalyst," said Bhattacharjee.

Reisner recently received new funding from the European Research Council to help the development of their solar-powered reactor. Over the next five years, they hope to further develop the reactor to produce more complex molecules. The researchers say that similar techniques could someday be used to develop an entirely solar-powered recycling plant.

"Developing a circular economy, where we make useful things from waste instead of throwing it into landfill, is vital if we're going to meaningfully address the climate crisis and protect the natural world," said Reisner. "And powering these solutions using the Sun means that we're doing it cleanly and sustainably."

Read more at Science Daily