Feb 4, 2023

Scientists release newly accurate map of all the matter in the universe

Sometimes to know what the matter is, you have to find it first.

When the universe began, matter was flung outward and gradually formed the planets, stars and galaxies that we know and love today. By carefully assembling a map of that matter today, scientists can try to understand the forces that shaped the evolution of the universe.

A group of scientists, including several from the University of Chicago and Fermi National Accelerator Laboratory, has released one of the most precise measurements ever made of how matter is distributed across the universe today.

Combining data from two major telescope surveys of the universe, the Dark Energy Survey and the South Pole Telescope, the analysis involved more than 150 researchers and is published as a set of three articles Jan. 31 in Physical Review D.

Among other findings, the analysis indicates that matter is not as "clumpy" as we would expect based on our current best model of the universe, which adds to a body of evidence that there may be something missing from our existing standard model of the universe.

Cooling and clumps


Since the Big Bang created all the matter in the universe in a very hot, intense few moments about 13 billion years ago, that matter has been spreading outward, cooling and clumping as it goes. Scientists are very interested in tracing the path of this matter; by seeing where all the matter ended up, they can try to recreate what happened and what forces must have been in play.

The first step is collecting enormous amounts of data with telescopes.

In this study, scientists combined data from two very different telescope surveys: The Dark Energy Survey, which surveyed the sky over six years from a mountaintop in Chile, and the South Pole Telescope, which looks for the faint traces of radiation that are still traveling across the sky from the first few moments of the universe.

Combining two different methods of looking at the sky reduces the chance that the results are thrown off by an error in one of the forms of measurement. "It functions like a cross-check, so it becomes a much more robust measurement than if you just used one or the other," said UChicago astrophysicist Chihway Chang, one of the lead authors of the studies.

In both cases, the analysis looked at a phenomenon called gravitational lensing. As light travels across the universe, it can be slightly bent as it passes objects with lots of gravity, like galaxies.

This method catches both regular matter and dark matter -- the mysterious form of matter that we have only detected due to its effects on regular matter -- because both regular and dark matter exert gravity.
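For reference, the bending itself is a standard general-relativity result rather than anything specific to these surveys: light passing a point mass M at a closest-approach distance b is deflected by an angle

\[ \alpha = \frac{4GM}{c^{2} b}, \]

twice what Newtonian gravity alone would give. Weak-lensing surveys measure the tiny, correlated distortions this bending imprints on the shapes of background galaxies and on the cosmic microwave background.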

By rigorously analyzing these two sets of data, the scientists could infer where all the matter ended up in the universe. The new measurement is more precise than previous analyses -- that is, it narrows down the possibilities for where this matter wound up, the authors said.

The majority of the results fit perfectly with the currently accepted best theory of the universe.

But there are also signs of a crack -- one that has been suggested in the past by other analyses, too.

"It seems like there are slightly less fluctuations in the current universe, than we would predict assuming our standard cosmological model anchored to the early universe," said analysis coauthor and University of Hawaii astrophysicist Eric Baxter (UChicago PhD'14).

That is, if you make a model incorporating all the currently accepted physical laws, then take the readings from the beginning of the universe and extrapolate them forward through time, the results look slightly different from what we actually measure around us today.

Specifically, today's readings find the universe is less "clumpy" -- clustering in certain areas rather than evenly spread out -- than the model would predict.

If other studies continue to find the same results, scientists say, it may mean there is something missing from our existing model of the universe, but the results are not yet to the statistical level that scientists consider to be ironclad. That will take further study.

However, the analysis is a landmark because it yielded useful information from two very different telescope surveys. This is a much-anticipated strategy for the future of astrophysics, as more large telescopes come online in the coming decades, but few such joint analyses have actually been carried out yet.

"I think this exercise showed both the challenges and benefits of doing these kinds of analyses," Chang said. "There's a lot of new things you can do when you combine these different angles of looking at the universe."

Read more at Science Daily

Evolution of wheat spikes since the Neolithic revolution

Around 12,000 years ago, the Neolithic revolution radically changed the economy, diet and structure of the first human societies in the Fertile Crescent of the Near East. With the beginning of the cultivation of cereals -- such as wheat and barley -- and the domestication of animals, the first cities emerged in a new social context marked by a productive economy. Now, a study published in the journal Trends in Plant Science and co-led by the University of Barcelona, the Agrotecnio centre and the University of Lleida, analyses the evolution of wheat spikes since the crop was first cultivated by the inhabitants of ancient Mesopotamia -- the cradle of agriculture -- between the Tigris and the Euphrates.

The authors of the study are Rut Sánchez-Bragado and Josep Lluís Araus-Ortega, from the UB Faculty of Biology and Agrotecnio-UdL; Gustavo A. Slafer, ICREA researcher at the UdL School of Agrifood and Forestry Science and Engineering, and Gemma Molero, from the International Maize and Wheat Improvement Center in Mexico, currently a researcher at KWS Seeds Inc.

A cereal that changed human history

The cultivation of wheat -- a grass that became basic food -- represented a turning point in the progress of human civilisation. Today it is the world's most important crop in terms of food security, but EU data warn that the impact of climate change could significantly increase its price and modify its production process in certain areas of the world.

Throughout the domestication process of wheat, the plant phenotype has undergone both rapid (within a few hundred years) and slow (over thousands of years) changes, such as the weakening of the rachis, the increase in seed size, and the reduction or disappearance of the awns. Awned and awnless wheat varieties are found all over the world, although awned varieties tend to be more abundant in regions where the climate is arid, especially during the final stages of the crop cycle in late spring, a condition typical of Mediterranean environments.

"It is important to conduct studies that show which wheat varieties are best adapted to different environmental growing conditions, especially in a context of climate change. Studying the past retrospectively can give us an idea of the evolution of wheat cultivation over the millennia since agriculture appeared in ancient Mesopotamia," says Rut Sánchez-Bragado, first author of the study, who got a PhD at the UB.

"Awns are organs of the spike that have traditionally been associated with the plant's adaptations to drought conditions," says Josep Lluís Araus, professor at the Department of Evolutionary Biology, Ecology and Environmental Sciences of the Faculty of Biology.

"However, archaeological and historical records show that the wheat spike has existed predominantly with awns for more than ten millennia after the domestication of wheat. It is not until the last millennium that evidence shows in many cases the absence of awns, indicating a selection by farmers -- probably in an undirected way -- against this organ," stresses Araus, one of the most cited authors in the world according to Clarivate Analytics' Highly Cited Researchers (2022).

"The role of wheat awns in their performance remains controversial despite decades of studies," says researcher Gustavo A. Slafer, corresponding author of the study.

Spike awns: beneficial for the plant?

Is the presence of awns on the spike beneficial for the plant and the crops? Although there is no scientific consensus, "everything suggests that in conditions where the plant does not suffer from water stress, the extra photosynthetic capacity of the awns does not compensate for other potential negative effects (greater susceptibility to fungal diseases, a limit on the total number of grains that an ear can support, etc.)," says Araus.

"However, in wetter climates the awns accumulate moisture and can promote the spread of diseases," says Rut Sánchez-Bragado. "So, as the world's population is continuously growing, it is necessary to investigate the role of the awned spikes in the changing conditions of our climate in order to meet the world's demand for a primary food commodity such as wheat."

Read more at Science Daily

Feb 3, 2023

Hubble directly measures mass of a lone white dwarf

Astronomers using NASA's Hubble Space Telescope have for the first time directly measured the mass of a single, isolated white dwarf -- the surviving core of a burned-out, Sun-like star.

Researchers found that the white dwarf is 56 percent the mass of our Sun. This agrees with earlier theoretical predictions of the white dwarf's mass and corroborates current theories of how white dwarfs evolve as the end product of a typical star's evolution. The unique observation yields insights into theories of the structure and composition of white dwarfs.

Until now, white dwarf mass measurements have been gleaned from observing white dwarfs in binary star systems. By watching the motion of two co-orbiting stars, astronomers can use straightforward Newtonian physics to measure their masses. However, these measurements can be uncertain if the white dwarf's companion star is in a long-period orbit of hundreds or thousands of years, because telescopes can capture only a brief slice of that orbital motion.
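For context, the "straightforward Newtonian physics" referred to here is essentially Kepler's third law, a textbook relation rather than anything specific to this study. With the masses in solar masses, the orbital semi-major axis a in astronomical units and the period P in years,

\[ M_1 + M_2 = \frac{a^3}{P^2}, \]

which is why a companion on a centuries-long, only partially observed orbit leaves a and P, and hence the masses, poorly constrained.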

For this companion-less white dwarf, researchers had to employ a trick of nature, called gravitational microlensing. The light from a background star was slightly deflected by the gravitational warping of space by the foreground dwarf star. As the white dwarf passed in front of the background star, microlensing caused the star to appear temporarily offset from its actual position on the sky.

The results are reported in the Monthly Notices of the Royal Astronomical Society. The lead author is Peter McGill, formerly of the University of Cambridge (now based at the University of California, Santa Cruz).

McGill used Hubble to precisely measure how light from a distant star bent around the white dwarf, known as LAWD 37, causing the background star to temporarily change its apparent position in the sky.

Kailash Sahu of the Space Telescope Science Institute in Baltimore, Maryland, the principal Hubble investigator on this latest observation, first used microlensing in 2017 to measure the mass of another white dwarf, Stein 2051 B. But that dwarf is in a widely separated binary system. "Our latest observation provides a new benchmark because LAWD 37 is all by itself," Sahu said.

The collapsed remains of a star that burned out 1 billion years ago, LAWD 37 has been extensively studied because it is only 15 light-years away in the constellation Musca. "Because this white dwarf is relatively close to us, we've got lots of data on it -- we've got information about its spectrum of light, but the missing piece of the puzzle has been a measurement of its mass," said McGill.

The team zeroed in on the white dwarf thanks to ESA's Gaia space observatory, which makes extraordinarily precise measurements of nearly 2 billion star positions. Multiple Gaia observations can be used to track a star's motion. Based on this data, astronomers were able to predict that LAWD 37 would briefly pass in front of a background star in November 2019.

Once this was known, Hubble was used to precisely measure over several years how the background star's apparent position in the sky was temporarily deflected during the white dwarf's passage.

"These events are rare, and the effects are tiny," said McGill. "For instance, the size of our measured offset is like measuring the length of a car on the Moon as seen from Earth."

Since the light from the background star was so faint, the main challenge for astronomers was extracting its image from the glare of the white dwarf, which is 400 times brighter than the background star. Only Hubble can make these kinds of high-contrast observations in visible light.

"The precision of LAWD 37's mass measurement allows us to test the mass-radius relationship for white dwarfs," said McGill. "This means testing the theory of degenerate matter (a gas so super-compressed under gravity it behaves more like solid matter) under the extreme conditions inside this dead star," he added.

The researchers say their results open the door for future event predictions with Gaia data. In addition to Hubble, these alignments can now be detected with NASA's James Webb Space Telescope. Because Webb works at infrared wavelengths, the blue glow of a foreground white dwarf looks dimmer in infrared light, and the background star looks brighter.

Based on Gaia's predictive powers, Sahu is observing another white dwarf, LAWD 66, with NASA's James Webb Space Telescope. The first observation was done in 2022. More observations will be taken as the deflection peaks in 2024 and then subsides.

"Gaia has really changed the game -- it's exciting to be able to use Gaia data to predict when events will happen, and then observe them happening," said McGill. "We want to continue measuring the gravitational microlensing effect and obtain mass measurements for many more types of stars."

In his 1915 theory of general relativity, Einstein predicted that when a massive compact object passes in front of a background star, the light from the star would bend around the foreground object due to the warping of space by its gravitational field.

Exactly a century before this latest Hubble observation, in 1919, two British-organized expeditions to the southern hemisphere first detected this lensing effect during the solar eclipse of May 29. It was hailed as the first experimental proof of general relativity -- that gravity warps space. However, Einstein was pessimistic that the effect could ever be detected for stars outside our solar system because of the precision involved. "Our measurement is 625 times smaller than the effect measured at the 1919 solar eclipse," said McGill.
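A quick back-of-the-envelope check, not taken from the paper, shows that the two comparisons above describe roughly the same angular scale. It assumes a car length of about 4.5 metres, the Moon at about 384,400 km, and the classic 1.75-arcsecond deflection at the Sun's limb measured in 1919:

import math

MAS_PER_RADIAN = 180 / math.pi * 3600 * 1000   # milliarcseconds in one radian

# Angular size of a ~4.5 m car at the Moon's distance (assumed values)
car_length_m = 4.5
moon_distance_m = 384_400e3
car_on_moon_mas = car_length_m / moon_distance_m * MAS_PER_RADIAN

# 1/625 of the 1.75-arcsecond deflection measured at the 1919 eclipse
eclipse_1919_mas = 1.75 * 1000
lawd37_offset_mas = eclipse_1919_mas / 625

print(f"Car on the Moon:      ~{car_on_moon_mas:.1f} milliarcseconds")
print(f"1/625 of 1919 effect: ~{lawd37_offset_mas:.1f} milliarcseconds")
# Both land at a few thousandths of an arcsecond -- the scale of the
# astrometric shift Hubble had to measure.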

Read more at Science Daily

New ice is like a snapshot of liquid water

A collaboration between scientists at Cambridge and UCL has led to the discovery of a new form of ice that more closely resembles liquid water than any other and may hold the key to understanding this most famous of liquids.

The new form of ice is amorphous. Unlike ordinary crystalline ice where the molecules arrange themselves in a regular pattern, in amorphous ice the molecules are in a disorganised form that resembles a liquid.

In this paper, published in Science, the team created a new form of amorphous ice in the laboratory and produced an atomic-scale model of it in computer simulation. The experiments used a technique called ball-milling, which grinds crystalline ice into small particles using metal balls in a steel jar. Ball-milling is regularly used to make amorphous materials, but it had never been applied to ice.

The team found that ball-milling created a novel amorphous form of ice which, unlike all other known ices, had a density similar to that of liquid water and a structure that resembled liquid water in solid form. They named the new ice medium-density amorphous ice (MDA).

To understand the process at the molecular scale, the team turned to computational simulation. By mimicking the ball-milling procedure via repeated random shearing of crystalline ice, the team successfully created a computational model of MDA.

"Our discovery of MDA raises many questions on the very nature of liquid water and so understanding MDA's precise atomic structure is very important" comments co-author Dr. Michael Davies, who carried out the computational modelling. "We found remarkable similarities between MDA and liquid water."

A happy medium

Amorphous ices have been suggested to be models for liquid water. Until now, there have been two main types of amorphous ice: high-density and low-density amorphous ice.

As the names suggest, there is a large density gap between them. This density gap, combined with the fact that the density of liquid water lies in the middle, has been a cornerstone of our understanding of liquid water. It has led in part to the suggestion that water consists of two liquids: one high- and one low-density liquid.

Senior author Professor Christoph Salzmann said: "The accepted wisdom has been that no ice exists within that density gap. Our study shows that the density of MDA is precisely within this density gap and this finding may have far-reaching consequences for our understanding of liquid water and its many anomalies."

A high-energy geophysical material

The discovery of MDA gives rise to the question: where might it exist in nature? Shear forces were found to be key to creating MDA in this study. The team suggests ordinary ice could undergo similar shear forces on the icy moons of gas giants such as Jupiter, due to the tidal forces those planets exert.

Moreover, MDA displays one remarkable property that is not found in other forms of ice. Using calorimetry, the team found that when MDA recrystallises to ordinary ice it releases an extraordinary amount of heat. The heat released from the recrystallisation of MDA could play a role in activating tectonic motions. More broadly, this discovery shows water can be a high-energy geophysical material.

Read more at Science Daily

To know where the birds are going, researchers turn to citizen science and machine learning

Computer scientists at the University of Massachusetts Amherst, in collaboration with biologists at the Cornell Lab of Ornithology, recently announced in the journal Methods in Ecology and Evolution a new, predictive model that is capable of accurately forecasting where a migratory bird will go next -- one of the most difficult tasks in biology. The model is called BirdFlow, and while it is still being perfected, it should be available to scientists within the year and will eventually make its way to the general public.

"Humans have been trying to figure out bird migration for a really long time," says Dan Sheldon, professor of information and computer sciences at UMass Amherst, the paper's senior author and a passionate amateur birder. "But," adds Miguel Fuentes, the paper's lead author and graduate student in computer science at UMass Amherst, "it's incredibly difficult to get precise, real-time information on which birds are where, let alone where, exactly, they are going."

There have been many efforts, both previous and ongoing, to tag and track individual birds, which have yielded invaluable insights. But it's difficult to physically tag birds in large enough numbers -- not to mention the expense of such an undertaking -- to form a complete enough picture to predict bird movements. "It's really hard to understand how an entire species moves across the continent with tracking approaches," says Sheldon, "because they tell you the routes that some birds caught in specific locations followed, but not how birds in completely different locations might move."

In recent years, there's been an explosion in the number of citizen scientists who monitor and report sightings of migratory birds. Birders around the world contribute more than 200 million annual bird sightings through eBird, a project managed by the Cornell Lab of Ornithology and international partners. It's one of the largest biodiversity-related science projects in existence and has hundreds of thousands of users, facilitating state-of-the-art species distribution modeling through the Lab's eBird Status & Trends project. "eBird data is amazing because it shows where birds of a given species are every week across their entire range," says Sheldon, "but it doesn't track individuals, so we need to infer what routes individual birds follow to best explain the species-level patterns."

BirdFlow draws on eBird's Status & Trends database and its estimates of relative bird abundance and then runs that information through a probabilistic machine-learning model. This model is tuned with real-time GPS and satellite tracking data so that it can "learn" to predict where individual birds will move next as they migrate.
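As a rough illustration of the underlying idea -- and explicitly not the published BirdFlow model -- the sketch below infers plausible week-to-week movements between grid cells from species-level abundance maps alone, using entropy-regularized optimal transport that favours short hops. All coordinates, abundances and parameter values here are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D coordinates (in km) of 50 grid cells along a flyway.
cells = rng.uniform(0, 2000, size=(50, 2))

# Hypothetical relative-abundance maps for two consecutive weeks,
# each normalized to sum to 1 (standing in for eBird Status & Trends rasters).
p_week1 = rng.random(50); p_week1 /= p_week1.sum()
p_week2 = rng.random(50); p_week2 /= p_week2.sum()

# Movement cost: pairwise distance between cells (short hops are cheaper).
C = np.linalg.norm(cells[:, None, :] - cells[None, :, :], axis=-1)

def sinkhorn(p, q, C, eps=100.0, n_iter=500):
    """Entropy-regularized transport plan whose marginals are p and q."""
    K = np.exp(-C / eps)                  # kernel: cheap moves get more weight
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]    # plan P with P.sum(1)=p, P.sum(0)=q

P = sinkhorn(p_week1, p_week2, C)

# Row-normalize the plan to get per-cell transition probabilities:
# T[i, j] = probability that a bird in cell i this week is in cell j next week.
T = P / P.sum(axis=1, keepdims=True)
print("Most likely destination from cell 0:", T[0].argmax())

The published model is far more sophisticated and is tuned against GPS and satellite tracking data, but the core constraint -- inferred movements must reproduce the weekly species-level maps -- is the same idea this toy encodes.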

The researchers tested BirdFlow on 11 species of North American birds -- including the American Woodcock, Wood Thrush and Swainson's Hawk -- and found that BirdFlow not only outperformed other models for tracking bird migration, but could also accurately predict migration flows without the real-time GPS and satellite tracking data, which makes it a valuable tool for tracking species that may literally fly under the radar.

Read more at Science Daily

Study links adoption of electric vehicles with less air pollution and improved health

Electric vehicles are widely hailed as a key way to mitigate climate change through reduced emissions, but research on the dual benefits of reduced air pollution and improved health has been largely hypothetical.

A team of researchers from the Keck School of Medicine of USC has now begun to document the actual impact of electric vehicle adoption in the first study to use real-world data to link electric cars, air pollution and health. Leveraging publicly available datasets, the researchers analyzed a "natural experiment" occurring in California as residents in the state rapidly transitioned to electric cars, or light-duty zero emissions vehicles (ZEVs). The results were just published in the journal Science of the Total Environment.

The team compared data on total ZEV registration, air pollution levels and asthma-related emergency room visits across the state from 2013 to 2019. As ZEV adoption increased within a given zip code, local air pollution levels and emergency room visits dropped.

"When we think about the actions related to climate change, often it's on a global level," said Erika Garcia, PhD, MPH, an assistant professor of population and public health sciences at the Keck School of Medicine and the study's lead author. "But the idea that changes being made at the local level can improve the health of your own community could be a powerful message to the public and to policy makers."

The researchers also found that while total ZEVs increased over time, adoption was considerably slower in low-resource zip codes -- what the researchers refer to as the "adoption gap." That disparity points to an opportunity to restore environmental justice in communities that are disproportionately affected by pollution and related health problems.

"The impacts of climate change on health can be challenging to talk about because they can feel very scary," said Sandrah Eckel, PhD, an associate professor of population and public health sciences at the Keck School of Medicine and the study's senior author. "We're excited about shifting the conversation towards climate change mitigation and adaptation, and these results suggest that transitioning to ZEVs is a key piece of that."

Benefits for health and the climate

To study the effects of electric vehicle adoption, the research team analyzed and compared four different datasets. First, they obtained data on ZEVs (which includes battery electric, plug-in hybrid, and hydrogen fuel cell cars) from the California Department of Motor Vehicles and tabulated the total number registered in each zip code for every year between 2013 and 2019.

They also obtained data from U.S. Environmental Protection Agency air monitoring sites on levels of nitrogen dioxide (NO2), an air pollutant related to traffic, and zip code level asthma-related visits to the emergency room. Asthma is one of the health concerns long linked with air pollutants such as NO2, which can also cause and exacerbate other respiratory diseases, as well as problems with the heart, brain and other organ systems.

Finally, the researchers calculated the percentage of adults in each zip code who held bachelor's degrees. Educational attainment levels are frequently used as an indicator of a neighborhood's socioeconomic status.

At the zip code level, for every additional 20 ZEVs per 1,000 people, there was a 3.2% drop in the rate of asthma-related emergency visits and a small suggestive reduction in NO2 levels. On average across zip codes in the state, ZEVs increased from 1.4 to 14.6 per 1,000 people between 2013 and 2019. ZEV adoption was significantly lower in zip codes with lower levels of educational attainment. For example, a zip code with 17% of the population having a bachelor's degree had, on average, an annual increase of 0.70 ZEVs per 1,000 people compared to an annual increase of 3.6 ZEVs per 1,000 people for a zip code with 47% of the population having a bachelor's degree.
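As a back-of-the-envelope illustration only -- assuming the reported association is log-linear in ZEVs per 1,000 people, which may not match the paper's exact model -- the statewide average increase in adoption implies roughly a two percent lower asthma-related emergency visit rate:

# Uses only figures quoted above; the log-linear assumption is mine.
rate_ratio_per_20 = 1 - 0.032           # 3.2% drop per additional 20 ZEVs/1,000
delta_zev = 14.6 - 1.4                  # average statewide increase, 2013-2019
implied_ratio = rate_ratio_per_20 ** (delta_zev / 20)
print(f"Implied change in asthma ED-visit rate: {(implied_ratio - 1) * 100:.1f}%")
# -> roughly a 2% lower rate at the average adoption increase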

Past research has shown that underserved communities, such as lower-income neighborhoods, tend to face worse pollution and associated respiratory problems than more affluent areas. If ZEVs replace gas-powered cars in those neighborhoods, they could stand to benefit substantially.

"Should continuing research support our findings, we want to make sure that those communities that are overburdened with the traffic-related air pollution are truly benefiting from this climate mitigation effort," Garcia said.

More to learn

While climate change is a massive health threat, mitigating it offers a massive public health opportunity, Eckel said. As one of the first studies to quantify the real-world environmental and health benefits of ZEVs, the research can help demonstrate the power of this mitigation measure, including possibly reduced health care utilization and expenditures.

The findings are promising, Garcia said, but many questions remain. Future studies should consider additional impacts of ZEVs, including emissions related to brake and tire wear, mining of materials for their manufacture, and disposal of old cars. The researchers also hope to study additional types of pollutants and other classes of vehicles, in addition to conducting a follow-up study of the effects of the ever-growing share of ZEVs in the state.

Moving forward, transitioning to ZEVs is just one part of the solution, Eckel said. Shifting to public transport and active transport, such as walking and biking, is another key way to boost environmental and public health.

Read more at Science Daily

Feb 2, 2023

Astronomers uncover a one-in-ten-billion binary star system: Kilonova progenitor system

Astronomers using the SMARTS 1.5-meter Telescope at Cerro Tololo Inter-American Observatory in Chile, a Program of NSF's NOIRLab, have uncovered the first example of a phenomenally rare type of binary star system, one that has all the right conditions to eventually trigger a kilonova -- the ultra-powerful, gold-producing explosion created by colliding neutron stars. Such an arrangement is so vanishingly rare that only about 10 such systems are thought to exist in the entire Milky Way Galaxy. The findings are published today in the journal Nature.

This unusual system, known as CPD-29 2176, is located about 11,400 light-years from Earth. It was first identified by NASA's Neil Gehrels Swift Observatory. Later observations with the SMARTS 1.5-meter Telescope allowed astronomers to deduce the orbital characteristics and types of stars that make up this system -- a neutron star created by an ultra-stripped supernova and a closely orbiting massive star that is in the process of becoming an ultra-stripped supernova itself.

An ultra-stripped supernova is the end-of-life explosion of a massive star that has had much of its outer atmosphere stripped away by a companion star. This class of supernova lacks the explosive force of a traditional supernova, which would otherwise "kick" a nearby companion star out of the system.

"The current neutron star would have to form without ejecting its companion from the system. An ultra-stripped supernova is the best explanation for why these companion stars are in such a tight orbit," said Noel D. Richardson at Embry-Riddle Aeronautical University and lead author of the paper. "To one day create a kilonova, the other star would also need to explode as an ultra-stripped supernova so the two neutron stars could eventually collide and merge."

As well as representing the discovery of an incredibly rare cosmic oddity, finding and studying kilonova progenitor systems such as this can help astronomers unravel the mystery of how kilonovae form, shedding light on the origin of the heaviest elements in the Universe.

"For quite some time, astronomers speculated about the exact conditions that could eventually lead to a kilonova," said NOIRLab astronomer and co-author André-Nicolas Chené. "These new results demonstrate that, in at least some cases, two sibling neutron stars can merge when one of them was created without a classical supernova explosion."

Producing such an unusual system, however, is a long and unlikely process. "We know that the Milky Way contains at least 100 billion stars and likely hundreds of billions more. This remarkable binary system is essentially a one-in-ten-billion system," said Chené. "Prior to our study, the estimate was that only one or two such systems should exist in a spiral galaxy like the Milky Way."

Though this system has all the right stuff to eventually form a kilonova, it will be up to future astronomers to study that event. It will take at least one million years for the massive star to end its life as a titanic supernova explosion and leave behind a second neutron star. This new stellar remnant and the pre-existing neutron star will then need to gradually draw together in a cosmic ballet, slowly losing their orbital energy through gravitational radiation.

When they eventually merge, the resulting kilonova explosion will produce much more powerful gravitational waves and leave behind in its wake a large amount of heavy elements, including silver and gold.

Read more at Science Daily

New ancient 'marine crocodile' discovered on UK's Jurassic Coast -- and it's one of the oldest specimens of its type ever found

A new study has uncovered a new thalattosuchian -- an ancient 'sister' of modern-day crocodiles' ancestors.

The discovery of Turnersuchus hingleyae follows an impressive unearthing of fossils on the Jurassic Coast, in Dorset, UK, including part of the head, backbone, and limbs. In fact, the find at the Charmouth Mudstone Formation was so successful that Turnersuchus is, to date, the only thalattosuchian of its age -- the Pliensbachian age of the Early Jurassic, around 185 million years ago -- complete enough to be named.

Published in the peer-reviewed Journal of Vertebrate Paleontology, the study states that the discovery of this new predator helps fill a gap in the fossil record and suggests that thalattosuchians, along with other crocodyliforms, likely originated around the end of the Triassic period -- around 15 million years further back in time than when Turnersuchus lived.

"We should now expect to find more thalattosuchians of the same age as Turnersuchus as well as older," states co-author Dr. Eric Wilberg, Assistant Professor at the Department of Anatomical Sciences, at Stony Brook University.

"In fact, during the publication of our paper, another paper was published describing a thalattosuchian skull discovered in the roof of a cave in Morocco from the Hettangian/Sinemurian (the time periods preceding the Pliensbachian where Turnersuchus was found), which corroborates this idea. I expect we will continue to find more older thalattosuchians and their relatives. Our analyses suggest that thalattosuchians likely first appeared in the Triassic and survived the end-Triassic mass extinction."

However, no digs have found thalattosuchians in Triassic rocks yet, which means there is a ghost lineage (a period during which we know a group must have existed, but we haven't yet recovered fossil evidence). Until the discovery of Turnersuchus, this ghost lineage extended from the end of the Triassic until the Toarcian, in the Jurassic, "but now we can reduce the ghost lineage by a few million years" the expert team states.

Thalattosuchians are referred to colloquially as 'marine crocodiles' or 'sea crocodiles', despite the fact they are not members of Crocodylia, but are more distantly related. Some thalattosuchians became very well adapted to life in the oceans, with short limbs modified into flippers, a shark-like tail fin, salt glands, and potentially the ability to give live birth (rather than lay eggs).

Turnersuchus is interesting because many of these recognized thalattosuchian features had yet to fully evolve. It lived in the Jurassic Ocean and preyed on marine wildlife, and, due to its relatively long, slender snout, it would have looked similar in appearance to the living gharial crocodiles, which are found in all the major river systems of the northern Indian subcontinent.

"However," co-author Dr. Pedro Godoy, from the University of São Paulo in Brazil says, "unlike crocodiles, this approximately 2-meter-long predator lived purely in coastal marine habitats. And though their skulls look superficially similar to modern gharials, they were constructed quite differently."

Thalattosuchians had particularly large supratemporal fenestrae -- a region of the skull housing jaw muscles. This suggests that Turnersuchus and other thalattosuchians possessed enlarged jaw muscles that likely enabled fast bites; most of their likely prey were fast-moving fish or cephalopods. It's possible too, just as in modern-day crocodiles, that the supratemporal region of Turnersuchus had a thermoregulatory function -- to help buffer brain temperature.

Read more at Science Daily

319-million-year-old fish preserves the earliest fossilized brain of a backboned animal

The CT-scanned skull of a 319-million-year-old fossilized fish, pulled from a coal mine in England more than a century ago, has revealed the oldest example of a well-preserved vertebrate brain.

The brain and its cranial nerves are roughly an inch long and belong to an extinct bluegill-size fish. The discovery opens a window into the neural anatomy and early evolution of the major group of fishes alive today, the ray-finned fishes, according to the authors of a University of Michigan-led study scheduled for publication Feb. 1 in Nature.

The serendipitous find also provides insights into the preservation of soft parts in fossils of backboned animals. Most of the animal fossils in museum collections were formed from hard body parts such as bones, teeth and shells.

The CT-scanned brain analyzed for the new study belongs to Coccocephalus wildi, an early ray-finned fish that swam in an estuary and likely dined on small crustaceans, aquatic insects and cephalopods, a group that today includes squid, octopuses and cuttlefish. Ray-finned fishes have backbones and fins supported by bony rods called rays.

When the fish died, the soft tissues of its brain and cranial nerves were replaced during the fossilization process with a dense mineral that preserved, in exquisite detail, their three-dimensional structure.

"An important conclusion is that these kinds of soft parts can be preserved, and they may be preserved in fossils that we've had for a long time -- this is a fossil that's been known for over 100 years," said U-M paleontologist Matt Friedman, a senior author of the new study and director of the Museum of Paleontology.

The lead author is U-M doctoral student Rodrigo Figueroa, who did the work as part of his dissertation, under Friedman, in the Department of Earth and Environmental Sciences.

"Not only does this superficially unimpressive and small fossil show us the oldest example of a fossilized vertebrate brain, but it also shows that much of what we thought about brain evolution from living species alone will need reworking," Figueroa said.

"With the widespread availability of modern imaging techniques, I would not be surprised if we find that fossil brains and other soft parts are much more common than we previously thought. From now on, our research group and others will look at fossil fish heads with a new and different perspective."

The skull fossil from England is the only known specimen of its species, so only nondestructive techniques could be used during the U-M-led study.

The work on Coccocephalus is part of a broader effort by Friedman, Figueroa and colleagues that uses computed tomography (CT) scanning to peer inside the skulls of early ray-finned fishes. The goal of the larger study is to obtain internal anatomical details that provide insights about evolutionary relationships.

In the case of C. wildi, Friedman was not looking for a brain when he fired up his micro-CT scanner and examined the skull fossil.

"I scanned it, then I loaded the data into the software we use to visualize these scans and noticed that there was an unusual, distinct object inside the skull," he said.

The unidentified blob was brighter on the CT image -- and therefore likely denser -- than the bones of the skull or the surrounding rock.

"It is common to see amorphous mineral growths in fossils, but this object had a clearly defined structure," Friedman said.

The mystery object displayed several features found in vertebrate brains: It was bilaterally symmetrical, it contained hollow spaces similar in appearance to ventricles, and it had multiple filaments extending toward openings in the braincase, similar in appearance to cranial nerves, which travel through such canals in living species.

"It had all these features, and I said to myself, 'Is this really a brain that I'm looking at?'" Friedman said. "So I zoomed in on that region of the skull to make a second, higher-resolution scan, and it was very clear that that's exactly what it had to be. And it was only because this was such an unambiguous example that we decided to take it further."

Though preserved brain tissue has rarely been found in vertebrate fossils, scientists have had better success with invertebrates. For example, the intact brain of a 310-million-year-old horseshoe crab was reported in 2021, and scans of amber-encased insects have revealed brains and other organs. There is even evidence of brains and other parts of the nervous system recorded in flattened specimens more than 500 million years old.

The preserved brain of a 300-million-year-old shark relative was reported in 2009. But sharks, rays and skates are cartilaginous fishes, which today hold relatively few species compared to the ray-finned fish lineage containing Coccocephalus. Early ray-finned fishes like Coccocephalus can tell scientists about the initial evolutionary phases of today's most diverse fish group, which includes everything from trout to tuna, seahorses to flounder.

There are roughly 30,000 ray-finned fish species, and they account for about half of all backboned animal species. The other half is split between land vertebrates -- birds, mammals, reptiles and amphibians -- and less diverse fish groups like jawless fishes and cartilaginous fishes.

The Coccocephalus skull fossil is on loan to Friedman from England's Manchester Museum. It was recovered from the roof of the Mountain Fourfoot coal mine in Lancashire and was first scientifically described in 1925. The fossil was found in a layer of soapstone adjacent to a coal seam in the mine.

Though only its skull was recovered, scientists believe that C. wildi would have been 6 to 8 inches long. Judging from its jaw shape and its teeth, it was probably a carnivore, according to Figueroa.

When the fish died, scientists suspect it was quickly buried in sediments with little oxygen present. Such environments can slow the decomposition of soft body parts.

In addition, a chemical micro-environment inside the skull's braincase may have helped to preserve the delicate brain tissues and to replace them with a dense mineral, possibly pyrite, Figueroa said.

Evidence supporting this idea comes from the cranial nerves, which send electrical signals between the brain and the sensory organs. In the Coccocephalus fossil, the cranial nerves are intact inside the braincase but disappear as they exit the skull.

"There seems to be, inside this tightly enclosed void in the skull, a little micro-environment that is conducive to the replacement of those soft parts with some kind of mineral phase, capturing the shape of tissues that would otherwise simply decay away," Friedman said.

Detailed analysis of the fossil, along with comparisons to the brains of modern-fish specimens from the U-M Museum of Zoology collection, revealed that the brain of Coccocephalus has a raisin-size central body with three main regions that roughly correspond to the forebrain, midbrain and hindbrain in living fishes.

Cranial nerves project from both sides of the central body. Viewed as a single unit, the central body and the cranial nerves resemble a tiny crustacean, such as a lobster or a crab, with projecting arms, legs and claws.

Notably, the brain structure of Coccocephalus indicates a more complicated pattern of fish-brain evolution than is suggested by living species alone, according to the authors.

"These features give the fossil real value in understanding patterns of brain evolution, rather than simply being a curiosity of unexpected preservation," Figueroa said.

For example, all living ray-finned fishes have an everted brain, meaning that the brains of embryonic fish develop by folding tissues from the inside of the embryo outward, like a sock turned inside out.

All other vertebrates have evaginated brains, meaning that neural tissue in developing brains folds inward.

"Unlike all living ray-finned fishes, the brain of Coccocephalus folds inward," Friedman said. "So, this fossil is capturing a time before that signature feature of ray-finned fish brains evolved. This provides us with some constraints on when this trait evolved -- something that we did not have a good handle on before the new data on Coccocephalus."

Comparisons to living fishes showed that the brain of Coccocephalus is most similar to the brains of sturgeons and paddlefish, which are often called "primitive" fishes because they diverged from all other living ray-finned fishes more than 300 million years ago.

Friedman and Figueroa are continuing to CT scan the skulls of ray-finned fish fossils, including several specimens that Figueroa brought to Ann Arbor on loan from institutions in his home country, Brazil. Figueroa said his doctoral dissertation was delayed by the COVID-19 pandemic but is expected to be completed in summer 2024.

The Nature study includes data produced at U-M's Computed Tomography in Earth and Environmental Science facility, which is supported by the Department of Earth and Environmental Sciences and the College of Literature, Science, and the Arts.

The other authors of the paper are Sam Giles of London's Natural History Museum and the University of Birmingham; Danielle Goodvin and Matthew Kolmann of the U-M Museum of Paleontology; and Michael Coates and Abigail Caron of the University of Chicago.

Friedman and Figueroa said the discovery highlights the importance of preserving specimens in paleontology and zoology museums.

Read more at Science Daily

Seawater split to produce 'green' hydrogen

Researchers have successfully split seawater without pre-treatment to produce green hydrogen.

The international team was led by the University of Adelaide's Professor Shizhang Qiao and Associate Professor Yao Zheng from the School of Chemical Engineering.

"We have split natural seawater into oxygen and hydrogen with nearly 100 per cent efficiency, to produce green hydrogen by electrolysis, using a non-precious and cheap catalyst in a commercial electrolyser," said Professor Qiao.

A typical non-precious catalyst is cobalt oxide with chromium oxide on its surface.

"We used seawater as a feedstock without the need for any pre-treatment processes like reverse osmosis desolation, purification, or alkalisation," said Associate Professor Zheng.

"The performance of a commercial electrolyser with our catalysts running in seawater is close to the performance of platinum/iridium catalysts running in a feedstock of highly purified deionised water.

The team published their research in the journal Nature Energy.

"Current electrolysers are operated with highly purified water electrolyte. Increased demand for hydrogen to partially or totally replace energy generated by fossil fuels will significantly increase scarcity of increasingly limited freshwater resources," said Associate Professor Zheng.

Seawater is an almost infinite resource and is considered a natural feedstock electrolyte. Electrolysing it directly is especially practical for regions with long coastlines and abundant sunlight, though less so where seawater is scarce.

Seawater electrolysis is still in early development compared with pure water electrolysis because of electrode side reactions, and corrosion arising from the complexities of using seawater.
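For background (standard electrochemistry, not specific to the catalyst reported here), the desired alkaline water-splitting half-reactions and the main competing side reaction in seawater are:

\begin{align*}
\text{Cathode (hydrogen evolution):}\quad & 2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\text{Anode (oxygen evolution):}\quad & 4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \\
\text{Anode side reaction (chloride oxidation):}\quad & 2\,\mathrm{Cl^-} \rightarrow \mathrm{Cl_2} + 2e^-
\end{align*}

Because the standard potential for chlorine evolution (about +1.36 V) lies close to that for oxygen evolution (+1.23 V), untreated seawater tends to generate corrosive chlorine species at the anode unless the catalyst is highly selective -- the kind of difficulty the catalysts described above aim to overcome.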

"It is always necessary to treat impure water to a level of water purity for conventional electrolysers including desalination and deionisation, which increases the operation and maintenance cost of the processes," said Associate Professor Zheng.

"Our work provides a solution to directly utilise seawater without pre-treatment systems and alkali addition, which shows similar performance as that of existing metal-based mature pure water electrolyser."

The team will work on scaling up the system by using a larger electrolyser so that it can be used in commercial processes such as hydrogen generation for fuel cells and ammonia synthesis.

Read more at Science Daily

Feb 1, 2023

The bubbling universe: A previously unknown phase transition in the early universe

Think of bringing a pot of water to the boil: As the temperature reaches the boiling point, bubbles form in the water, burst and evaporate as the water boils. This continues until there is no more water changing phase from liquid to steam.

This is roughly the idea of what happened in the very early universe, right after the Big Bang, 13.7 billion years ago.

The idea comes from particle physicists Martin S. Sloth from the Center for Cosmology and Particle Physics Phenomenology at the University of Southern Denmark and Florian Niedermann from the Nordic Institute for Theoretical Physics (NORDITA) in Stockholm. Niedermann is a former postdoc in Sloth's research group. In this new scientific article, they present an even stronger basis for their idea.

Many bubbles crashing into each other

"One must imagine that bubbles arose in various places in the early universe. They got bigger and they started crashing into each other. In the end, there was a complicated state of colliding bubbles, which released energy and eventually evaporated," said Martin S. Sloth.

The background for their theory of phase changes in a bubbling universe is a highly interesting problem with calculating the so-called Hubble constant, a value for how fast the universe is expanding. Sloth and Niedermann believe that the bubbling universe plays a role here.

The Hubble constant can be calculated very reliably by, for example, analyzing cosmic background radiation or by measuring how fast a galaxy or an exploding star is moving away from us. According to Sloth and Niedermann, both methods are not only reliable, but also scientifically recognized. The problem is that the two methods do not lead to the same Hubble constant. Physicists call this problem "the Hubble tension."

Is there something wrong with our picture of the early universe?

"In science, you have to be able to reach the same result by using different methods, so here we have a problem. Why don't we get the same result when we are so confident about both methods?" said Florian Niedermann.

Sloth and Niedermann believe they have found a way to get the same Hubble constant, regardless of which method is used. The path starts with a phase transition and a bubbling universe -- and thus an early, bubbling universe is connected to "the Hubble tension."

"If we assume that these methods are reliable -- and we think they are -- then maybe the methods are not the problem. Maybe we need to look at the starting point, the basis, that we apply the methods to. Maybe this basis is wrong."

An unknown dark energy

The basis for the methods is the so-called Standard Model, which assumes that there was a lot of radiation and matter, both normal and dark, in the early universe, and that these were the dominant forms of energy. The radiation and the normal matter were compressed in a dark, hot and dense plasma; the state of the universe in the first 380,000 years after Big Bang.

When you base your calculations on the Standard Model, you arrive at different results for how fast the universe is expanding -- and thus different Hubble constants.

But maybe a new form of dark energy was at play in the early universe? Sloth and Niedermann think so.

If you introduce the idea that a new form of dark energy in the early universe suddenly began to bubble and undergo a phase transition, the calculations agree. In their model, Sloth and Niedermann arrive at the same Hubble constant when using both measurement methods. They call this idea New Early Dark Energy -- NEDE.

Change from one phase to another -- like water to steam

Sloth and Niedermann believe that this new, dark energy underwent a phase transition when the universe expanded, shortly before it changed from the dense and hot plasma state to the universe we know today.

"This means that the dark energy in the early universe underwent a phase transition, just as water can change phase between frozen, liquid and steam. In the process, the energy bubbles eventually collided with other bubbles and along the way released energy," said Niedermann.

"It could have lasted anything from an insanely short time -- perhaps just the time it takes two particles to collide -- to 300,000 years. We don't know, but that is something we are working to find out," added Sloth.

Do we need new physics?


So, the phase transition model is based on the idea that the universe does not behave as the Standard Model tells us. It may sound a little scientifically crazy to suggest that something is wrong with our fundamental understanding of the universe; that you can just propose the existence of hitherto unknown forces or particles to solve the Hubble tension.

"But if we trust the observations and calculations, we must accept that our current model of the universe cannot explain the data, and then we must improve the model. Not by discarding it and its success so far, but by elaborating on it and making it more detailed so that it can explain the new and better data," said Martin S. Sloth, adding:

"It appears that a phase transition in the dark energy is the missing element in the current Standard Model to explain the differing measurements of the universe's expansion rate.

How fast is the universe expanding?

The Hubble constant is a value for how fast the universe is expanding.

In Martin S. Sloth and Florian Niedermann's model, the Hubble constant is 72. Approximately. After all, large distances are being calculated, so we must allow for uncertainty of a few decimals.

What does 72 mean? It means 72 km per second per megaparsec. A megaparsec is a measure of the distance between, for example, two galaxies; one megaparsec is roughly 31,000,000,000,000,000,000 km. For every megaparsec between us and a galaxy, that galaxy moves away from us at 72 km per second.

When you measure the distance to galaxies by supernovas, you get a Hubble constant of approximately 73 (km/s)/megaparsec. But when measuring the earliest light (the cosmic background radiation), the Hubble constant is 67.4 (km/s)/megaparsec.
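A short numerical illustration of what those two values mean in practice (ordinary arithmetic, not a result from the paper). Note that 1/H0 is only a rough expansion timescale, not the precise age of the universe, which also depends on the expansion history:

KM_PER_MPC = 3.086e19          # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

for label, h0 in [("supernovae", 73.0), ("cosmic background radiation", 67.4)]:
    v_at_100_mpc = h0 * 100    # recession speed (km/s) of a galaxy 100 Mpc away
    hubble_time_gyr = KM_PER_MPC / h0 / SECONDS_PER_YEAR / 1e9
    print(f"H0 = {h0} (km/s)/Mpc from {label}: "
          f"a 100 Mpc galaxy recedes at {v_at_100_mpc:.0f} km/s, "
          f"1/H0 = {hubble_time_gyr:.1f} billion years")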

Read more at Science Daily

With rapidly increasing heat and drought, can plants adapt?

At a time when climate change is making many areas of the planet hotter and drier, it's sobering to think that deserts are relatively new biomes that have grown considerably over the past 30 million years. Widespread arid regions, like the deserts that today cover much of western North America, began to emerge only within the past 5 to 7 million years.

Understanding how plants that invaded these harsh desert biomes were able to survive could help predict how ecosystems will fare in a drier future.

An intensive study of a group of plants that first invaded emerging deserts millions of years ago concludes that these pioneers -- rock daisies -- did not come unequipped to deal with heat, scorching sun and lack of water. They had developed adaptations to such stresses while living on dry, exposed rock outcroppings within older, more moist areas and even tropical forests, all of which made it easier for them to invade expanding arid areas.

The study by University of California, Berkeley, researcher Isaac Lichter-Marck is the first to provide evidence to resolve a long-standing evolutionary debate: Did iconic desert plants, like the stately saguaro cacti, the flaming ocotillos and the Seussian agaves, adapt to arid conditions only after they invaded deserts? Or did they come preadapted to the stresses of desert living?

The question has relevance today, Lichter-Marck said, because accelerating aridity due to climate change is challenging plants to adapt much more quickly than they have in the past. Already, about one-fifth of Earth's land surface is desert. If adaptation to arid conditions was only possible for plants that had already evolved to deal with such stresses, then many today may not be equipped with an adequate genetic tool kit to survive.

"If you think about aridity only as a stimulus to plant evolution, then in many cases people could say these plants are survivors, they are adaptable, and they will be fine. They will take advantage of these new conditions, and they will thrive," said Lichter-Marck, who is also a National Science Foundation postdoctoral research fellow at UCLA.

But the history of rock daisies suggests that "when the deserts emerged, those plants that had the necessary preadaptations to take advantage of new conditions were the ones that thrived," he said. "Adding more aridification to the system doesn't necessarily mean more rapid adaptive evolution will occur. There's a limited source of lineages that can take advantage of new levels of aridity, and that is important for understanding the effect of climate change on biodiversity."

Lichter-Marck and Bruce Baldwin, UC Berkeley professor of integrative biology, curator of the Jepson Herbarium and chief editor of The Jepson Desert Manual: Vascular Plants of Southeastern California (2002), published their study about the evolution of rock daisies in North American deserts this week in the journal Proceedings of the National Academy of Sciences.

Seven years roaming the desert

Botanists realized long ago that when plants invaded desert areas, they quickly diversified to fill the many niches created by this new type of habitat.

"Even as recently as 1 million to 1.5 million years ago, it would have been difficult to find widespread desert habitats like we see today in North America, which is kind of surprising because now deserts and arid habitats are the most widespread biome on earth," Lichter-Marck said. "But during the late Miocene Epoch, dry habitats spread, and the world's lineages of desert plants, especially the succulent lineages like the cacti, the agaves and the ice plants -- as well as many other drought tolerant lineages -- underwent a synchronous rapid diversification."

Paleontologists pointed out, however, that fossilized plants that thrived tens of millions of years before the proliferation of deserts had characteristics similar to those of desert plants today. Some scientists, like the late paleoecologist Daniel Axelrod of UCLA and UC Davis, argued that this meant the plants that thrived in the desert today evolved earlier and were preadapted -- or exapted -- to survive desert conditions by growing in dry microsites, such as rock outcrops, rain shadows or mountaintops. Others, like UC Berkeley's Ledyard Stebbins, an evolutionary biologist who helped found the UC Davis Department of Genetics, argued that aridity itself spurred plants to diversify and develop traits to withstand dryness, heat, intense sunlight and strong winds.

Despite the similarities between rocky outcrops and deserts, it has been hard to prove that desert plants descended from plants already adapted to the stresses of aridity, in part because fossils rarely form in dry habitats and cannot tell us much about the habitat in which these ancient plants were growing.

To Lichter-Marck and Baldwin, rock daisies, which are classified in the tribe Perityleae in the sunflower family, seemed like a good group in which to explore the connection. Some species live on dry, exposed rock in tropical areas of Mexico -- what might be considered "micro-deserts" -- while others have fully adapted to desert areas, such as the Mojave in California and the Great Basin, Chihuahuan and Sonoran deserts that cover most of western North America.

"Plants that live on rock outcrops face many of the same challenges as those living in a dry, desert habitat," Lichter-Marck said. "Rock outcrops tend to be exposed to UV light, wind and dry, desiccating conditions, as well as heat and frost. They also tend to be more exposed to herbivores.

"The ways that plants deal with them are diverse, but they usually involve some kind of specialized root morphology that helps them to anchor in rock outcrops, as well as deal with the heightened arid conditions. And they tend to have smaller leaves, or leaves with a dense covering of hairs that help buffer them against drought and block sunlight, including UV light. They also tend to have heightened chemical defenses against herbivores, because it takes a lot of energy to regenerate after being munched."

For his Ph.D. thesis in the Department of Integrative Biology and at the Jepson Herbarium, Lichter-Marck, a Southern California native, roamed the deserts of Arizona, California, Texas and Mexico for months at a time in a pickup truck, accompanied by his blue heeler, Rio, to collect hundreds of specimens of rock daisies. Some rock daisies are among the most dramatic bloomers in spring, carpeting the desert with colorful blossoms. Many, however, are limited to small geographic regions where they grow only on vertical rock faces or sky island mountain ranges, making them hazardous to collect. Lichter-Marck is an experienced mountaineer, an important skill set for fieldwork in rough terrain.

He later sequenced the DNA of these specimens -- 73 of the 84 recognized species of rock daisy -- and catalogued their life histories, such as where they grew, what type of root system they had, and whether they were annual or perennial, an herb or a shrub. He then compared them to fossilized daisies to develop a rough timeline of the evolution of these characteristics and the lineage's eventual shift into deserts.
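The press release does not spell out the reconstruction method, but the basic idea of inferring ancestral habitats from a phylogeny can be illustrated with a toy parsimony pass: given habitat states at the tips of a tree, assign each ancestor the state set that minimizes the number of habitat shifts. The sketch below is a minimal, hypothetical illustration (the tree and habitat labels are invented), not the authors' actual analysis, which rests on dated, DNA-based phylogenies.

    # Toy ancestral-state reconstruction (Fitch parsimony, bottom-up pass only).
    # Tree and habitat states are invented for illustration.

    def fitch(node):
        """Return (possible ancestral states, number of inferred habitat shifts).
        A node is either a leaf dict {"name": ..., "state": ...}
        or a two-element tuple (left_subtree, right_subtree)."""
        if isinstance(node, dict):                     # leaf: habitat is observed
            return {node["state"]}, 0
        left_states, left_shifts = fitch(node[0])
        right_states, right_shifts = fitch(node[1])
        shared = left_states & right_states
        if shared:                                     # children agree: keep the overlap
            return shared, left_shifts + right_shifts
        return left_states | right_states, left_shifts + right_shifts + 1

    # Hypothetical four-species tree: two rock-outcrop specialists
    # sister to two desert species.
    tree = (
        ({"name": "outcrop_sp_1", "state": "rock outcrop"},
         {"name": "outcrop_sp_2", "state": "rock outcrop"}),
        ({"name": "desert_sp_1", "state": "desert"},
         {"name": "desert_sp_2", "state": "desert"}),
    )

    root_states, shifts = fitch(tree)
    print(root_states, shifts)   # ambiguous root state, with 1 inferred habitat shift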

This allowed him to conclude that most rock daisies -- in particular, the genus Laphamia, which was the first to move into deserts and is the largest rock daisy genus -- had adapted to the stress of heat, aridity, wind and sun by virtue of their growth on cliffs before invading deserts.

"This is a clear empirical demonstration of what was originally Axelrod's hypothesis -- of a desert plant group originating in dry microclimates prior to the widespread emergence of desert habitats," said Lichter-Marck. "What this means is that the strategies for drought tolerance that are so characteristic of desert vegetation might not actually represent responses to the dry conditions found in deserts. Instead, they could be traits that evolved earlier in association with much older and more stable dry microclimates, such as rock outcrops in tropical settings."

Preadaptation may be the key to the success of many desert plants, including cacti, which are known to inhabit rock outcrops or grow as epiphytes in the canopies of trees within tropical areas, though these large lineages would require a much more extended analysis, he said.

Rock daisies, many of which live in specialized habitats that make them vulnerable to extinction, highlight the importance of conserving seemingly niche species.

"A lot of the rock daisies are very specialized and tend to be very narrow in their distribution and might be seen as less significant to the survival of the ecosystem as a whole. In evolutionary biology and in conservation biology, specialized organisms with narrow geographic ranges are often considered vulnerable lineages and have sometimes even been called evolutionary dead ends," he said. "An important implication here is that a group of ecological specialists growing on scattered cliffs in tropical habitats started this major radiation in the desert. So, it actually shows that specialists are not just these vulnerable lineages on the edge of extinction. They might actually be really important sources for innovation in evolution."

Read more at Science Daily

Over 4% of summer mortality in European cities is attributable to urban heat islands

Over four percent of deaths in cities during the summer months are due to urban heat islands, and one third of these deaths could be prevented by reaching a tree cover of 30%, according to a modelling study published in The Lancet and led by the Barcelona Institute for Global Health (ISGlobal), an institution supported by "la Caixa" Foundation. The study results, obtained with data from 93 European cities, highlight the substantial benefits of planting more trees in cities to attenuate the impact of climate change.

Exposure to heat has been associated with premature mortality, cardiorespiratory disease and hospital admissions. This is particularly true for heat waves, but also occurs with moderately high temperatures in summer. Cities are especially vulnerable to higher temperatures. Less vegetation, higher population density, and impermeable surfaces for buildings and roads, including asphalt, lead to a temperature difference between the city and surrounding areas -- a phenomenon called urban heat island. Given the ongoing global warming and urban growth, this effect is expected to worsen over the next decades.

"Predictions based on current emissions reveal that heat-related illness and death will become a bigger burden to our health services over the next decades," says ISGlobal researcher Tamara Iungman, first author of the study.

An international team led by Mark Nieuwenhuijsen, director of the Urban Planning, Environment and Health Initiative at ISGlobal, estimated mortality rates of residents aged over 20 in 93 European cities (a total of 57 million inhabitants), between June and August 2015, and collected data on daily rural and urban temperatures for each city. The analyses were performed at a high-resolution level (areas of 250m x 250m). First, they estimated the premature mortality by simulating a hypothetical scenario without urban heat island. Second, they estimated the temperature reduction that would be obtained by increasing tree cover to 30% and the associated mortality that could be avoided.
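The press release does not give the underlying formulas, but health impact assessments of this kind typically work from an exposure-response relationship: the relative risk of death rises with temperature, and the deaths attributable to the heat-island excess are the fraction that would not occur in the cooler counterfactual scenario. The sketch below is a generic, hypothetical illustration of that arithmetic (the risk slope and the death count are invented), not the study's actual model, which uses city-specific exposure-response functions at 250 m resolution.

    import math

    # Generic heat-attributable-mortality arithmetic (illustrative only).
    # The exposure-response slope and all numbers below are hypothetical.

    beta = 0.03                 # assumed log relative risk per degree C of extra heat
    uhi_excess_temp_c = 1.5     # city minus surrounding countryside, in degrees C
    observed_deaths = 1000      # deaths observed in the city over the summer period

    rr = math.exp(beta * uhi_excess_temp_c)   # relative risk at the hotter urban temperature
    attributable_fraction = (rr - 1.0) / rr   # share of deaths that would not occur without the excess heat
    attributable_deaths = observed_deaths * attributable_fraction

    print(f"RR = {rr:.3f}, attributable fraction = {attributable_fraction:.1%}, "
          f"attributable deaths = {attributable_deaths:.0f}")

With these invented numbers the attributable share comes out a little over 4%, in the same ballpark as the study's headline figure, but the published estimates rest on far more detailed, city- and area-specific modelling.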

"Our goal is to inform local decision-makers about the benefits of integrating green areas into all neighborhoods in order to promote more sustainable, resilient and healthy urban environments," explains Nieuwenhuijsen.

The protective effect of trees

The results show that, from June to August 2015, cities were on average 1.5°C warmer than the surrounding countryside. In total, 6,700 premature deaths could be attributed to hotter urban temperatures, which represents 4.3% of total mortality during the summer months and 1.8% of year-round mortality. One third of these deaths (2,644) could have been prevented by increasing tree cover up to 30%, thereby reducing temperatures. Overall, cities with the highest excess heat-mortality rates were in Southern and Eastern Europe, with these cities benefiting the most from an increase in tree cover.

The study highlights the substantial benefits of planting more trees in cities, although the authors acknowledge that this can be challenging in some cities due to their design, and that tree planting should be combined with other interventions such as green roofs or other temperature-reducing alternatives.

"Our results also show the need to preserve and maintain the trees that we already have because they are a valuable resource and it takes a long time to grow new trees. It is not only about increasing trees in the city, it is also about how they are distributed," says Nieuwenhuijsen.

The analyses were done for 2015 because population data were not available for later years, but, as Iungman points out, the study provides valuable information for adapting our cities and making them more resilient to the health impact of climate change. "Here we only looked at the cooling effect of trees, but making cities greener has many other health benefits, including longer life expectancy, fewer mental health problems and better cognitive functioning," she adds.

Read more at Science Daily

1.5-degree goal not plausible: Social change more important than physical tipping points

Limiting global warming to 1.5 degrees Celsius is currently not plausible, as is shown in a new study released by Universität Hamburg's Cluster of Excellence "Climate, Climatic Change, and Society" (CLICCS). Climate policy, protests, and the Ukraine crisis: the participating researchers systematically assessed to what extent social changes are already underway -- while also analyzing certain physical processes frequently discussed as tipping points. Their conclusion: social change is essential to meeting the temperature goals set in Paris. But what has been achieved to date is insufficient. Accordingly, climate adaptation will also have to be approached from a new angle.

The interdisciplinary team of researchers addressed ten important drivers of social change: "Actually, when it comes to climate protection, some things have now been set in motion. But if you look at the development of social processes in detail, keeping global warming under 1.5 degrees still isn't plausible," says CLICCS Speaker Prof. Anita Engels. According to the Hamburg Climate Futures Outlook, consumption patterns and corporate responses in particular are slowing urgently needed climate protection measures. Other key factors like UN climate policy, legislation, climate protests and divestment from fossil fuels are supporting efforts to meet the climate goals. As the analysis shows, however, this positive dynamic alone won't suffice to stay within the 1.5-degree limit. "The deep decarbonization required is simply progressing too slowly," says Engels.

In addition, the team assesses certain physical processes that are frequently discussed as tipping points: the loss of Arctic sea ice and melting ice sheets are serious developments -- as are regional climate changes. But they will have very little influence on the global temperature until 2050. In this regard, thawing permafrost, a weakened Atlantic Meridional Overturning Circulation (AMOC), and the loss of the Amazon Forest are more important factors -- albeit only moderately. "The fact is: these feared tipping points could drastically change the conditions for life on Earth -- but they're largely irrelevant for reaching the Paris Agreement temperature goals," explains CLICCS Co-Speaker Prof. Jochem Marotzke from the Max Planck Institute for Meteorology.

The study also covers COVID-19 and the Russian invasion of Ukraine: economic reconstruction programs have reinforced dependence on fossil fuels, which means the necessary changes are now less plausible than previously assumed. By contrast, it remains unclear whether efforts to safeguard Europe's power supply and the international community's push to become independent of Russian gas will undermine or accelerate the phase-out of fossil fuels in the long run.

Importance of human agency, new approach to adaptation

The Outlook is currently the only assessment that interlinks social science and natural science analysis in an integrated study to assess the plausibility of certain climate futures. More than 60 experts have contributed. According to the study, the best hope for shaping a positive climate future lies in the ability of society to make fundamental changes ("human agency"). In addition, the Outlook reveals a range of conditions for doing so, for instance that transnational initiatives and non-government actors continue to support climate protection, and that protests keep up the pressure on politicians.

Read more at Science Daily

Jan 31, 2023

Evidence that Saturn's moon Mimas is a stealth ocean world

When a Southwest Research Institute scientist discovered surprising evidence that Saturn's smallest, innermost moon could generate the right amount of heat to support a liquid internal ocean, colleagues began studying Mimas' surface to understand how its interior may have evolved. Numerical simulations of the moon's Herschel impact basin, the most striking feature on its heavily cratered surface, determined that the basin's structure and the lack of tectonics on Mimas are compatible with a thinning ice shell and geologically young ocean.

"In the waning days of NASA's Cassini mission to Saturn, the spacecraft identified a curious libration, or oscillation, in Mimas' rotation, which often points to a geologically active body able to support an internal ocean," said SwRI's Dr. Alyssa Rhoden, a specialist in the geophysics of icy satellites, particularly those containing oceans, and the evolution of giant planet satellite systems. She is the second author of a new Geophysical Research Letters paper on the subject. "Mimas seemed like an unlikely candidate, with its icy, heavily cratered surface marked by one giant impact crater that makes the small moon look much like the Death Star from Star Wars. If Mimas has an ocean, it represents a new class of small, 'stealth' ocean worlds with surfaces that do not betray the ocean's existence."

Rhoden worked with Purdue graduate student Adeene Denton to better understand how a heavily cratered moon like Mimas could possess an internal ocean. Denton modeled the formation of the Herschel impact basin using iSALE-2D simulation software. The models showed that Mimas' ice shell had to be at least 34 miles (55 km) thick at the time of the Herschel-forming impact. In contrast, observations of Mimas and models of its internal heating limit the present-day ice shell thickness to less than 19 miles (30 km), if it currently harbors an ocean. These results imply that a present-day ocean within Mimas must have been warming and expanding since the basin formed. It is also possible that Mimas was entirely frozen both at the time of the Herschel impact and at present. However, Denton found that including an interior ocean in impact models helped produce the shape of the basin.

"We found that Herschel could not have formed in an ice shell at the present-day thickness without obliterating the ice shell at the impact site," said Denton, who is now a post-doctoral researcher at the University of Arizona. "If Mimas has an ocean today, the ice shell has been thinning since the formation of Herschel, which could also explain the lack of fractures on Mimas. If Mimas is an emerging ocean world, that places important constraints on the formation, evolution and habitability of all of the mid-sized moons of Saturn."

"Although our results support a present-day ocean within Mimas, it is challenging to reconcile the moon's orbital and geologic characteristics with our current understanding of its thermal-orbital evolution," Rhoden said. "Evaluating Mimas' status as an ocean moon would benchmark models of its formation and evolution. This would help us better understand Saturn's rings and mid-sized moons as well as the prevalence of potentially habitable ocean moons, particularly at Uranus. Mimas is a compelling target for continued investigation."

Read more at Science Daily

Short-term bang of fireworks has long-term impact on wildlife

Popular fireworks should be replaced with cleaner drone and laser light shows to avoid the "highly damaging" impact on wildlife, domestic pets and the broader environment, new Curtin-led research has found.

The new research, published in Pacific Conservation Biology, examined the environmental toll of firework displays by reviewing the ecological effects of Diwali festivities in India, Fourth of July celebrations across the United States of America, and other events in New Zealand and parts of Europe.

Examples included fireworks in Spanish festivals impacting the breeding success of House Sparrows, July firework displays being implicated in the decline of Brandt's Cormorant colonies in California, and South American sea lions changing their behaviour during breeding season as a result of New Year's fireworks in Chile.

Lead author Associate Professor Bill Bateman, from Curtin's School of Molecular and Life Sciences, said fireworks remained globally popular despite the overwhelming evidence that they negatively impacted wildlife, domestic animals and the environment.

"Fireworks create short-term noise and light disturbances that cause distress in domestic animals that may be managed before or after a firework event, but the impacts to wildlife can be on a much larger scale," Associate Professor Bateman said.

"The annual timing of some large-scale firework events coincides with the migratory or reproductive movements of wildlife, and may therefore have adverse long-term population effects on them. Fireworks also produce significant pulses of highly pollutant materials that also contribute significantly to the chemical pollution of soil, water, and air, which has implications for human as well as animal health."

Associate Professor Bateman said firework bans during sensitive periods for wildlife, such as migration or mating seasons, could limit the impact, as could switching to drone or other light-based shows.

"Other than horses, for which there is some evidence that they can be gradually familiarised with flashes of light, there is very little that can be done to address the disturbing impact of noise from fireworks on animals and wildlife," Associate Professor Bateman said.

"The future of firework displays may be in the use of safer and greener alternatives such as drones, eco-friendly fireworks or visible-wavelength lasers for light shows.

Read more at Science Daily

Transforming the way cancer vaccines are designed and made

A new way to significantly increase the potency of almost any vaccine has been developed by researchers from the International Institute for Nanotechnology (IIN) at Northwestern University. The scientists used chemistry and nanotechnology to change the structural location of adjuvants and antigens on and within a nanoscale vaccine, greatly increasing vaccine performance. The antigen targets the immune system, and the adjuvant is a stimulator that increases the effectiveness of the antigen.

The study will be published Jan. 30 in Nature Biomedical Engineering.

"The work shows that vaccine structure and not just the components is a critical factor in determining vaccine efficacy," said lead investigator Chad A. Mirkin, director of the IIN. "Where and how we position the antigens and adjuvant within a single architecture markedly changes how the immune system recognizes and processes it.

Mirkin also is the George B. Rathmann Professor of Chemistry at the Weinberg College of Arts and Sciences and a professor of medicine at Northwestern University Feinberg School of Medicine.

This new heightened emphasis on structure has the potential to improve the effectiveness of conventional cancer vaccines, which historically have not worked well, Mirkin said.

Mirkin's team has studied the effect of vaccine structure in the context of seven different types of cancer to date, including triple-negative breast cancer, papillomavirus-induced cervical cancer, melanoma, colon cancer and prostate cancer, to determine the most effective architecture to treat each disease.

Conventional vaccines take a blender approach

With most conventional vaccines, the antigen and the adjuvant are blended and injected into a patient. There is no control over the vaccine structure, and, consequently, limited control over the trafficking and processing of the vaccine components. Thus, there is no control over how well the vaccine works.

"A challenge with conventional vaccines is that out of that blended mish mosh, an immune cell might pick up 50 antigens and one adjuvant or one antigen and 50 adjuvants," said study author and former Northwestern postdoctoral associate Michelle Teplensky, who is now an assistant professor at Boston University. "But there must be an optimum ratio of each that would maximize the vaccine's effectiveness."

Enter SNAs (spherical nucleic acids), which are the structural platform -- invented and developed by Mirkin -- used in this new class of modular vaccines. SNAs allow scientists to pinpoint exactly how many antigens and adjuvants are being delivered to cells. SNAs also enable scientists to tailor how these vaccine components are presented, and the rate at which they are processed. Such structural considerations, which greatly impact vaccine effectiveness, are largely ignored in conventional approaches.

Vaccines developed through 'rational vaccinology' offer precise dosing for maximum effectiveness

This approach to systematically control antigen and adjuvant locations within modular vaccine architectures was created by Mirkin, who coined the term rational vaccinology to describe it. It is based on the concept that the structural presentation of vaccine components is as important as the components themselves in driving efficacy.

"Vaccines developed through rational vaccinology deliver the precise dose of antigen and adjuvant to every immune cell, so they are all equally primed to attack cancer cells," said Mirkin, who also is a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University. "If your immune cells are soldiers, a traditional vaccine leaves some unarmed; our vaccine arms them all with a powerful weapon with which to kill cancer. Which immune cell 'soldiers' do you want to attack your cancer cells?" Mirkin asked rhetorically.

Building an (even) better vaccine

The team developed a cancer vaccine that doubled the number of cancer antigen-specific T cells and increased the activation of these cells by 30% by reconfiguring the architecture of the vaccine to contain multiple targets to help the immune system find tumor cells.

The team investigated differences in how well two antigens were recognized by the immune system depending on their placement on the core or the perimeter of the SNA structure. For an SNA with optimum placement, they could increase the immune response and how quickly the nanovaccine triggered cytokine (an immune cell protein) production to boost T cells attacking the cancer cells. The scientists also studied how the different placements affected the immune system's ability to remember the invader, and whether the memory was long-term.

"Where and how we position the antigens and adjuvant within a single architecture markedly changes how the immune system recognizes and processes it," Mirkin said.

The most powerful structure throws two punches to outsmart the wily, mutating tumor

The study data show that attaching two different antigens to an SNA comprising a shell of adjuvant was the most potent approach for a cancer vaccine structure. It led to a 30% increase in antigen-specific T-cell activation and doubled the number of proliferating T cells compared to a structure in which the same two antigens were attached to two separate SNAs.

These engineered SNA nanostructures stalled tumor growth in multiple animal models.

"It is remarkable," Mirkin said. "When altering the placement of antigens in two vaccines that are nearly identical from a compositional standpoint, the treatment benefit against tumors is dramatically changed. One vaccine is potent and useful, while the other is much less effective."

Many current cancer vaccines are designed to primarily activate cytotoxic T cells, only one defense against a cancer cell. Because tumor cells are always mutating, they can easily escape this immune cell surveillance, quickly rendering the vaccine ineffective. The odds are higher that the T cell will recognize a mutating cancer cell if it has more ways -- multiple antigens -- to recognize it.

"You need more than one type of T cell activated, so you can more easily attack a tumor cell," Teplensky said. "The more types of cells the immune system has to go after tumors, the better. Vaccines consisting of multiple antigens targeting multiple immune cell types are necessary to induce enhanced and long-lasting tumor remission."

Another advantage of the rational vaccinology approach, especially when used with a nanostructure like an SNA, is that it's easy to alter the structure of a vaccine to go after a different type of disease. Mirkin said they simply switch out a peptide, a snippet of a cancer protein with a chemical handle that "clips" onto the structure, not unlike adding a new charm to a bracelet.

Path to most effective vaccine for any cancer type

"The collective importance of this work is that it lays the foundation for developing the most effective forms of vaccine for almost any type of cancer," Teplensky said. "It is about redefining how we develop vaccines across the board, including ones for infectious diseases."

In a previously published paper, Mirkin, Teplensky and colleagues demonstrated the importance of vaccine structure for COVID-19 by creating vaccines that exhibited protective immunity in 100% of animals against a lethal viral infection.

"Small changes in antigen placement on a vaccine significantly elevate cell-to-cell communication, cross-talk and cell synergy," Mirkin said. "The developments made in this work provide a path forward to rethinking the design of vaccines for cancer and other diseases as a whole."

Read more at Science Daily

Three or more concussions linked with worse brain function in later life

Experiencing three or more concussions is linked with worsened brain function in later life, according to major new research.

The study -- the largest of its kind -- also found having just one moderate-to-severe concussion, or traumatic brain injury (TBI), can have a long-term impact on brain function, including memory.

Led by teams at the University of Oxford and the University of Exeter, the research included data from more than 15,000 participants of the online PROTECT study, who were aged between 50 and 90 and based in the UK. They reported the severity and frequency of concussions they had experienced throughout their lives, and completed annual, computerised tests for brain function.

Published in the Journal of Neurotrauma, the paper found that people who reported three or more concussions had significantly worse cognitive function, which got successively worse with each subsequent concussion after that. Attention and completion of complex tasks were particularly affected.

Researchers say people who have had concussions should be warned of the dangers of continuing high-risk sport or work.

Lead investigator Dr Vanessa Raymont, from the University of Oxford, said: "We know that head injuries are a major risk factor for dementia, and this large-scale study gives the greatest detail to date on a stark finding -- the more times you injure your brain in life, the worse your brain function could be as you age.

"Our research indicates that people who have experienced three or more even mild episodes of concussion should be counselled on whether to continue high-risk activities. We should also encourage organisations operating in areas where head impact is more likely to consider how they can protect their athletes or employees."

The team found that participants who reported three episodes of even mild concussion throughout their lives had significantly worse attention and ability to complete complex tasks. Those who had four or more mild concussion episodes also showed worsened processing speed and working memory. Each additional reported concussion was linked to progressively worse cognitive function.
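As an illustration of what "progressively worse with each additional concussion" means statistically, the sketch below fits a simple linear trend of a cognitive score against the number of reported concussions using made-up data. It is purely illustrative; the real analysis used the PROTECT cohort's computerised test batteries with appropriate adjustment, which this does not attempt.

    import numpy as np

    # Illustrative dose-response fit: cognitive score vs. number of concussions.
    # The data points are synthetic; they are not from the PROTECT study.

    concussions = np.array([0, 0, 1, 1, 2, 3, 3, 4, 5, 6])
    score = np.array([102, 99, 100, 97, 95, 93, 94, 90, 88, 86])  # arbitrary test scores

    slope, intercept = np.polyfit(concussions, score, 1)
    print(f"Fitted slope: {slope:.2f} score points per additional concussion")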

Furthermore, the researchers found that reporting even one moderate-to-severe concussion was associated with worsened attention, completion of complex tasks and processing speed capacity.

In the online PROTECT study, participants share detailed lifestyle information, and complete a suite of cognitive tests every year, for up to 25 years. This rich mine of data helps researchers understand how the brain ages, and the factors involved in maintaining a healthier brain in later life.

Dr Helen Brooker, a study co-author from the University of Exeter, said: "As our population ages, we urgently need new ways to empower people to live healthier lives in later life. This paper highlights the importance of detailed long-term studies like PROTECT in better understanding head injuries and their impact on long-term cognitive function, particularly as concussion has also been linked to dementia. We're learning that life events that might seem insignificant, like experiencing a mild concussion, can have an impact on the brain. Our findings indicate that cognitive rehabilitation should focus on key functions such as attention and completion of complex tasks, which we found to be susceptible to long-term damage."

Read more at Science Daily

Jan 30, 2023

Honey bee colony loss in the U.S. linked to mites, extreme weather, pesticides

About one-third of the food eaten by Americans comes from crops pollinated by honey bees, yet the insect is dying off at alarming rates. In one year alone, between April of 2019 and April of 2020, one study reported a 43% colony loss in honey bees across the United States.

A new study led by Penn State researchers provides preliminary insight on the potential effects of several variables, including some linked to climate change, on honey bees. Their findings show that honey bee colony loss in the U.S. over the last five years is primarily related to the presence of parasitic mites, extreme weather events and nearby pesticides, as well as challenges with overwintering. The study took advantage of novel statistical methods and is the first to concurrently consider a variety of potential honey bee stressors at a national scale. Published online in the journal Scientific Reports, it suggests several areas of concern to prioritize in beekeeping practices.

"Honey bees are vital pollinators for more than 100 species of crops in the United States, and the widespread loss of honey bee colonies is increasingly concerning," said Luca Insolia, first author of the study, a visiting graduate student in the Department of Statistics at Penn State at the time of the research, and currently a postdoctoral researcher at the University of Geneva in Switzerland. "Some previous studies have explored several potential stressors related to colony loss in a detailed way but are limited to narrow, regional areas. The one study that we know of at the national level in the United States explored only a single potential stressor. For this study, we integrated many large datasets at different spatial and temporal resolutions and used new, sophisticated statistical methods to assess several potential stressors associated with colony collapse across the U.S."

The research team, composed of statisticians, geographers, and entomologists, gathered publicly available data about honey bee colonies, land use, weather, and other potential stressors from the years 2015 to 2021. Because these data came from a variety of sources, they varied in resolution over both space and time. The weather data, for example, contained daily data points for areas only a few square miles in size, but data on honey bee colonies was at the state level for a several-month period.

"In order to analyze the data all together, we had to come up with a technique to match the resolution of the various data sources," said Martina Calovi, corresponding author of the study, a postdoctoral researcher in the Department of Ecosystem Science and Management at Penn State at the time of the research, and currently an associate professor of geography at the Norwegian University of Science and Technology. "We could have just taken an average of all the weather measurements we had within a state, but that boils all the information we have into one number and loses a lot of information, especially about any extreme values. In addition to averaging weather data, we used an 'upscaling' technique to summarize the data in several different ways, which allowed us to retain more information, including about the frequency of extreme temperature and precipitation events."

The researchers used the resulting integrated resolution-matched dataset -- which they have made available for use by other researchers -- alongside sophisticated statistical modeling techniques that they developed to assess the large number of potential stressors at the same time.

The research team found that several stressors impacted honey bee colony loss at the national level, including the presence of nearby pesticides, frequent extreme weather events, and weather instability. Colony loss was also related to the presence of parasitic mites, Varroa destructor, which reproduce in honey bee colonies, weaken the bees, and potentially expose them to viruses. The researchers also found that losses typically occurred between January and March, likely related to challenges with overwintering, but that some states do not follow this pattern.

"Our results largely reinforce what regional studies have observed and confirm that regional patterns around these stressors are actually more widespread," said Insolia, a beekeeper himself. "These results also inform actions that beekeepers could take to help circumvent these stressors and protect their colonies, including treatments for the Varroa mite‚ especially in areas of weather instability. Beekeepers could also consider strategies to move their colonies to areas with high food availability or away from nearby pesticides or to provide supplementary food during certain seasons or months with frequent extreme weather events."

The researchers note that having data about beekeeping practices and colony loss at a finer resolution would allow validation of their results and a more nuanced look at honey bee stressors.

"It would be incredibly beneficial to explore beekeeping practices at a finer scale than the state level," said Calovi. "In many cases, beekeeping associations and other organizations collect this data, but it is not made available to researchers. We hope our study will help motivate more detailed data collection as well as efforts to share that data -- including from smaller organizations such as regional beekeeper associations."

The research team also found a strong relationship between colony loss and a broad category of beekeeping practices noted on a USDA survey as "other," which contained everything from hives being destroyed to food scarcity to queen failure. They noted that collecting this data in more detail and breaking up this catch-all type variable would improve their ability to connect particular stressors to colony collapse.

"A changing climate and high-profile extreme weather events like Hurricane Ian -- which threatened about 15% of the nation's bees that were in its path as well as their food sources -- are important reminders that we urgently need to better understand the stressors that are driving honey bee colony collapse and to develop strategies to mitigate them," said Francesca Chiaromonte, professor of statistics and the holder of the Lloyd and Dorothy Foehr Huck Chair in Statistics for the Life Sciences at Penn State and a senior member of the research team. "Our results highlight the role of parasitic mites, pesticide exposure, extreme weather events, and overwintering in bee colony collapse. We hope that they will help inform improved beekeeping practices and direct future data collection efforts that allow us to understand the problem at finer and finer resolutions."

Read more at Science Daily

Ancestral variation guides future environmental adaptations

The speed of environmental change is very challenging for wild organisms. When exposed to a new environment, individual plants and animals can potentially adjust their biology to better cope with the new pressures they face -- this is known as phenotypic plasticity.

Plasticity is likely to be important in the early stages of colonising new places or when exposed to toxic substances in the environment. New research published in Nature Ecology & Evolution, shows that early plasticity can influence the ability to subsequently evolve genetic adaptations to conquer new habitats.

Sea campion, a coastal wildflower from the UK and Ireland, has adapted to toxic, zinc-rich industrial-era mining waste that kills most other plant species. The zinc-tolerant plants have evolved from zinc-sensitive, coastal populations separately in different places, several times.

To understand the role of plasticity in rapid adaptation, a team of researchers led by Bangor University conducted experiments on sea campion.

As zinc-tolerance has evolved several times, this gave the researchers the opportunity to investigate whether ancestral plasticity made it more likely that the same genes would be used by different populations that were exposed to the same environment.

By exposing the tolerant and sensitive plants to both benign and zinc contaminated environments and measuring changes in the expression of genes in the plant's roots, the researchers were able to see how plasticity in the coastal ancestors has paved the way for adaptation to take place very quickly.

Dr Alex Papadopulos, senior lecturer at Bangor University explained:

"Sea campion usually grow on cliffs and shingle beaches, but mining opened up a new niche for them that other plants weren't able to exploit. Our research has shown that some of the beneficial plasticity in the coastal plants has helped the mine plants to adapt so quickly."

Alex added,

"Remarkably, if a gene responds to the new environment in a beneficial way in the ancestral plants, it is much more likely that that gene will be reused in all of the lineages that are independently adapting to the new environment. Phenotypic plasticity may make it more likely that there would be the same evolutionary outcome if the tape of life were replayed. If we understand the plastic responses that species have to environmental change, we may be better equipped to predict the impacts of climate change on biodiversity."

Read more at Science Daily