Apr 6, 2023

Hubble sees possible runaway black hole creating a trail of stars

There's an invisible monster on the loose, barreling through intergalactic space so fast that if it were in our solar system, it could travel from Earth to the Moon in 14 minutes. This supermassive black hole, weighing as much as 20 million Suns, has left behind a never-before-seen 200,000-light-year-long "contrail" of newborn stars, twice the diameter of our Milky Way galaxy. It's likely the result of a rare, bizarre game of galactic billiards among three massive black holes.
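
As a back-of-the-envelope check of what that transit time implies (a minimal sketch; the mean Earth-Moon distance is the only added input, a round number rather than a figure from the study):

    # Implied speed of the runaway black hole.
    EARTH_MOON_KM = 384_400             # mean Earth-Moon distance, km
    TRANSIT_MINUTES = 14                # figure quoted in the article

    speed_km_s = EARTH_MOON_KM / (TRANSIT_MINUTES * 60)
    print(f"{speed_km_s:,.0f} km/s")            # ~458 km/s
    print(f"{speed_km_s * 3600:,.0f} km/h")     # ~1.6 million km/h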

Rather than gobbling up stars ahead of it, like a cosmic Pac-Man, the speedy black hole is plowing into gas in front of it to trigger new star formation along a narrow corridor. The black hole is streaking too fast to take time for a snack. Nothing like it has ever been seen before, but it was captured accidentally by NASA's Hubble Space Telescope.

"We think we're seeing a wake behind the black hole where the gas cools and is able to form stars. So, we're looking at star formation trailing the black hole," said Pieter van Dokkum of Yale University in New Haven, Connecticut. "What we're seeing is the aftermath. Like the wake behind a ship we're seeing the wake behind the black hole." The trail must have lots of new stars, given that it is almost half as bright as the host galaxy it is linked to.

The black hole lies at one end of the column, which stretches back to its parent galaxy. There is a remarkably bright knot of ionized oxygen at the outermost tip of the column. Researchers believe gas is probably being shocked and heated from the motion of the black hole hitting the gas, or it could be radiation from an accretion disk around the black hole. "Gas in front of it gets shocked because of this supersonic, very high-velocity impact of the black hole moving through the gas. How it works exactly is not really known," said van Dokkum.

"This is pure serendipity that we stumbled across it," van Dokkum added. He was looking for globular star clusters in a nearby dwarf galaxy. "I was just scanning through the Hubble image and then I noticed that we have a little streak. I immediately thought, 'oh, a cosmic ray hitting the camera detector and causing a linear imaging artifact.' When we eliminated cosmic rays we realized it was still there. It didn't look like anything we've seen before."

Because it was so weird, van Dokkum and his team did follow-up spectroscopy with the W. M. Keck Observatories in Hawaii. He describes the star trail as "quite astonishing, very, very bright and very unusual." This led to the conclusion that he was looking at the aftermath of a black hole flying through a halo of gas surrounding the host galaxy.

This intergalactic skyrocket is likely the result of multiple collisions of supermassive black holes. Astronomers suspect the first two galaxies merged perhaps 50 million years ago. That brought together two supermassive black holes at their centers. They whirled around each other as a binary black hole.

Then another galaxy came along with its own supermassive black hole. This follows the old idiom: "two's company and three's a crowd." The three black holes mixing it up led to a chaotic and unstable configuration. One of the black holes robbed momentum from the other two black holes and got thrown out of the host galaxy. The original binary may have remained intact, or the new interloper black hole may have replaced one of the two that were in the original binary, and kicked out the previous companion.

When the single black hole took off in one direction, the binary black holes shot off in the opposite direction. There is a feature seen on the opposite side of the host galaxy that might be the runaway binary black hole. Circumstantial evidence for this is that there is no sign of an active black hole remaining at the galaxy's core. The next step is to do follow-up observations with NASA's James Webb Space Telescope and the Chandra X-ray Observatory to confirm the black hole explanation.

Read more at Science Daily

US forests face an unclear future with climate change

When you walk through a forest, you are surrounded by carbon. Every branch and every leaf, every inch of trunk and every tendril of unseen root contains carbon pulled from the atmosphere through photosynthesis. And as long as it stays stored away inside that forest, it's not contributing to the rising concentrations of carbon dioxide that cause climate change. So it's only natural that we might want to use forests' carbon-storage superpower as a potential climate solution in addition to reducing human greenhouse gas emissions.

But climate change itself might compromise how permanently forests are able to store carbon and keep it out of the air, according to a new study led by University of Utah researchers. The study, which examined how different regions and tree species will respond to climate change, finds a wide range of estimates of how much carbon forests might gain or lose as the climate warms. Importantly, the researchers found, the regions most at risk of losing forest carbon through fire, climate stress or insect damage are those where many forest carbon offset projects have been set up.

"This tells us there's a really urgent need to update these carbon offsets protocols and policies with the best available science of climate risks to U.S. forests," said William Anderegg, study senior author and director of the U's Wilkes Center for Climate Science and Policy.

The study is published in Nature Geoscience and is accompanied by an interactive tool showing the carbon storage potential of forests across the U.S.

A multi-perspective modeling approach

For this study, the researchers were interested in forecasting changes in the amount of aboveground carbon storage in forests of different regions in the United States. Aboveground carbon refers to any living parts of a tree that are above ground, including wood and leaves or needles.

Scientists can look at the future of forests under climate change in a few different ways. They can look at historical and future projections of climate, or look at datasets from long-term forest plots. They can also use machine learning to identify which climate niches tree species most prefer. Or they can use complex models that include interactions between the ecosystem and the atmosphere.

Anderegg and colleagues, including first author and postdoctoral scholar Chao Wu, chose all of the above.

"Each different method has inherent advantages and limitations," Wu said. "No model is perfect."

"By bringing in many different approaches and different model types and comparing them," Anderegg said, "we can get a sense of what the different models are telling us and how can we learn to improve the models. And we might have much more confidence if all of the models and all of the approaches tell us the same story in a given region."

Analyzing the combined model outputs, the researchers found that although the models' forecasts differed in some ways, they did show some consistency in predictions of how different regions' carbon storage might change in the future. The Great Lakes and Northeastern US, for example, as well as parts of the Southeastern US and the northern Rockies, consistently showed carbon gains in future projections.

But the models also showed significant risks of losing carbon from forests through the climate triple threat of fire, climate stress and insect damage. With those risks, the models projected a net carbon gain in forests nationwide of between 3 and 5 petagrams of carbon by the end of the 21st century (a petagram is a quadrillion grams -- about 2.5 times the mass of all humans on Earth). Without those climate stresses, forests might be able to pack away a net 9.4 petagrams of carbon.
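
To put those numbers in perspective, a quick unit check (a minimal sketch; the 8-billion population and 50 kg average body mass are round-number assumptions, not figures from the study):

    # Unit check: how big is a petagram?
    PETAGRAM_KG = 1e15 / 1000          # 1 Pg = 1e15 g = 1e12 kg

    # Rough global human biomass (assumed round numbers).
    population = 8e9                   # ~8 billion people
    avg_mass_kg = 50                   # average body mass, children included
    human_biomass_kg = population * avg_mass_kg       # ~4e11 kg

    print(PETAGRAM_KG / human_biomass_kg)             # ~2.5 human biomasses per Pg

    # Implied storage forgone to fire, climate stress and insect damage:
    print(9.4 - 5, 9.4 - 3)                           # ~4.4 to 6.4 Pg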

The researchers also applied their analysis to 139 current carbon offset projects that aim to increase the carbon stored in forests through various approaches.

"For carbon offsets to be effective," Anderegg said, "they have to store carbon for a pretty long amount of time -- multiple decades to centuries. So if fire's burning them down or insects are wiping out different areas, it could vastly undermine their effectiveness as climate change solutions."

Depending on the model method and the climate scenario, the researchers found that large numbers of carbon offset forest projects, particularly in the Southeastern US and on the West Coast, are projected to lose carbon by the end of the century.

What we still need to know

The results, Wu said, highlight that different climate and ecological models have different strengths and weaknesses, and considering them together reveals the areas of research needed to improve climate projections.

Tree demographic models, for example, include simulations of forest dynamics as old trees die and new trees grow. "But these current models didn't consider the disturbance-vegetation feedback," Wu said, referring to the different types of vegetation besides trees that appear following a disturbance like a forest fire and how they might influence the odds of another disturbance. "And also they didn't consider CO2 fertilization," or the potential for rising carbon dioxide levels to actually improve plant growth.

Anderegg identified three research questions that could help:

  • How much rising CO2 concentrations might benefit plants and trees and help them grow more.
  • Better data and understanding of climate-driven tree mortality from fire, climate stress, and insects.
  • How biomes will shift around. Following a disturbance, for example, some forests may be able to grow back but some may transition to grasslands and be lost entirely.

"These are some of the biggest unknowns that the field is really racing to tackle," he said.

In the meantime, while science works to understand how climate change affects forests, society can help by slowing the pace of climate change.

Read more at Science Daily

The unexpected contribution of medieval monks to volcanology

By observing the night sky, medieval monks unwittingly recorded some of history's largest volcanic eruptions. An international team of researchers, led by the University of Geneva (UNIGE), drew on readings of 12th and 13th century European and Middle Eastern chronicles, along with ice core and tree ring data, to accurately date some of the biggest volcanic eruptions the world has ever seen. Their results, reported in the journal Nature, uncover new information about one of the most volcanically active periods in Earth's history, which some think helped to trigger the Little Ice Age, a long interval of cooling that saw the advance of European glaciers.

It took the researchers almost five years to examine hundreds of annals and chronicles from across Europe and the Middle East, in search of references to total lunar eclipses and their colouration. Total lunar eclipses occur when the moon passes into the Earth's shadow. Typically, the moon remains visible as a reddish orb because it is still bathed in sunlight bent round the Earth by its atmosphere. But after a very large volcanic eruption, there can be so much dust in the stratosphere -- the middle part of the atmosphere starting roughly where commercial aircraft fly -- that the eclipsed moon almost disappears.

Medieval chroniclers recorded and described all kinds of historical events, including the deeds of kings and popes, important battles, and natural disasters and famines. Just as noteworthy were the celestial phenomena that might foretell such calamities. Mindful of the Book of Revelation, a vision of the end times that speaks of a blood-red moon, the monks were especially careful to take note of the moon's coloration. Of the 64 total lunar eclipses that occurred in Europe between 1100 and 1300, the chroniclers had faithfully documented 51. In five of these cases, they also reported that the moon was exceptionally dark.

The contribution of Japanese scribes

Asked what made him connect the monks' records of the brightness and colour of the eclipsed moon with volcanic gloom, the lead author of the work, Sébastien Guillet, senior research associate at the Institute for environmental sciences at the UNIGE, said: "I was listening to Pink Floyd's Dark Side of the Moon album when I realised that the darkest lunar eclipses all occurred within a year or so of major volcanic eruptions. Since we know the exact days of the eclipses, it opened the possibility of using the sightings to narrow down when the eruptions must have happened."

The researchers found that scribes in Japan took equal note of lunar eclipses. One of the best known, Fujiwara no Teika, wrote of an unprecedented dark eclipse observed on 2 December 1229: 'the old folk had never seen it like this time, with the location of the disk of the Moon not visible, just as if it had disappeared during the eclipse... It was truly something to fear.' The stratospheric dust from large volcanic eruptions was not only responsible for the vanishing moon. It also cooled summer temperatures by limiting the sunlight reaching the Earth's surface. This in turn could bring ruin to agricultural crops.

Cross-checking text and data

"We know from previous work that strong tropical eruptions can induce global cooling on the order of roughly 1°C over a few years," said Markus Stoffel, full professor at the Institute for environmental sciences at the UNIGE and last author of the study, a specialist in converting measurements of tree rings into climate data, who co-designed the study. "They can also lead to rainfall anomalies with droughts in one place and floods in another."

Despite these effects, people at the time could not have imagined that the poor harvests or the unusual lunar eclipses had anything to do with volcanoes -- all but one of the eruptions themselves went undocumented. "We only knew about these eruptions because they left traces in the ice of Antarctica and Greenland," said co-author Clive Oppenheimer, professor at the Department of Geography at the University of Cambridge. "By putting together the information from ice cores and the descriptions from medieval texts we can now make better estimates of when and where some of the biggest eruptions of this period occurred."

Climate and society affected

To make the most of this integration, Sébastien Guillet worked with climate modellers to compute the most likely timing of the eruptions. "Knowing the season when the volcanoes erupted is essential, as it influences the spread of the volcanic dust and the cooling and other climate anomalies associated with these eruptions," he said.

Read more at Science Daily

Researchers correlate Arctic warming to extreme winter weather in the midlatitudes and explore its future

A warmer Arctic has been linked to extreme winter weather in the midlatitude regions. But it is not clear how global warming affects this link. In a new study, researchers from Korea and the USA show, using weather data and climate models, that while the "Warm Arctic-Cold Continent" pattern will persist as the climate continues to warm, Arctic warming will become a less reliable predictor of extreme winter weather in the future.

Pictures of melting glaciers and stranded polar bears on shrinking sea ice in the Arctic are perhaps the most striking images that have been used to highlight the effects of global warming. However, they do not convey the full extent of the consequences of a warmer Arctic. In recent years, there has been growing recognition of the Arctic's role in driving extreme weather events in other parts of the world. While the Arctic has been warming at a rate twice as fast as the global average, winters in the midlatitude regions have experienced colder and more severe weather events. For instance, the winter of 2022-2023 saw record-breaking cold temperatures and snowfall in Japan, China, and Korea. Similarly, many parts of Eurasia and North America have experienced severe cold snaps, with heavy snowfall and prolonged periods of sub-zero temperatures.

While there are multiple theories for this climate phenomenon, an international team of researchers led by Professor Jin-Ho Yoon from Gwangju Institute of Science and Technology (GIST), Korea set out to examine the relationship between severe winters in the Northern Hemisphere and melting sea ice in the Arctic region -- a pattern referred to as the "Warm Arctic-Cold Continent" (WACC) -- and how this relationship changes as the climate warms.

In their study published online on 27 March 2023 in the journal npj Climate and Atmospheric Science, the researchers looked at historic climate data and turned to climate projection models to explore the potential connection and assess how this phenomenon might be influenced by different global warming scenarios.

Based on climate data from the European Centre for Medium-Range Weather Forecasts (ECMWF) going back almost 40 years, the researchers correlated winter temperatures in East Asia and North America with the temperatures of the Barents-Kara Sea and the East Siberian-Chukchi Sea in the Arctic region. They observed that lower winter temperatures in East Asia and North America are usually accompanied by warmer Arctic sea temperatures. However, they also found that in some winters, such as the 2017/18 winter in East Asia, this pattern did not hold, suggesting that the linkage carries uncertainty, likely because factors other than Arctic sea temperatures were at play.
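
At its core, this is a correlation analysis. A minimal sketch of the computation involved (the arrays below are placeholder toy data, not the ECMWF reanalysis used in the study):

    import numpy as np

    # Placeholder winter-mean temperature anomalies; the study used
    # ~40 years of ECMWF reanalysis data rather than these toy numbers.
    arctic_sea_temp = np.array([0.3, 0.8, -0.2, 1.1, 0.5, -0.4, 0.9, 0.2])
    east_asia_temp = np.array([-0.5, -1.0, 0.3, -1.2, -0.6, 0.5, -0.8, 0.1])

    # A negative Pearson correlation is the WACC signature:
    # warmer Arctic seas go with colder midlatitude winters.
    r = np.corrcoef(arctic_sea_temp, east_asia_temp)[0, 1]
    print(f"r = {r:.2f}")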

Nonetheless, using climate projections from the Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI) experiments, which were designed to project future climate under 1.5°C to 2°C warming scenarios, the researchers found the WACC pattern to persist even as global temperatures rose. However, they found that the correlation between Arctic sea temperatures and East Asian winter temperatures became more uncertain with the intensification of global warming. "We found that the relationship between Arctic warming and cold weather events in the midlatitudes would become more uncertain under warmer climates, challenging the forecast of winter temperature in the future," says Mr. Yungi Hong, a Ph.D. student at GIST and a member of the research team.

"Our study shows that while one can expect the Arctic warming-triggered cold waves in the midlatitudes to persist in a warmer future, they will become more difficult to predict," adds Prof. Jin-Ho Yoon.

Read more at Science Daily

Apr 5, 2023

Researchers devise new membrane mirrors for large space-based telescopes

Researchers have developed a new way to produce and shape large, high-quality mirrors that are much thinner than the primary mirrors previously used for telescopes deployed in space. The resulting mirrors are flexible enough to be rolled up and stored compactly inside a launch vehicle.

"Launching and deploying space telescopes is a complicated and costly procedure," said Sebastian Rabien from Max Planck Institute for Extraterrestrial Physics in Germany. "This new approach -- which is very different from typical mirror production and polishing procedures -- could help solve weight and packaging issues for telescope mirrors, enabling much larger, and thus more sensitive, telescopes to be placed in orbit."

In the Optica Publishing Group journal Applied Optics, Rabien reports successful fabrication of parabolic membrane mirror prototypes up to 30 cm in diameter. These mirrors, which could be scaled up to the sizes needed in space telescopes, were created by using chemical vapor deposition to grow membrane mirrors on a rotating liquid inside a vacuum chamber. He also developed a method that uses heat to adaptively correct imperfections that might occur after the mirror is unfolded.

"Although this work only demonstrated the feasibility of the methods, it lays the groundwork for larger packable mirror systems that are less expensive," said Rabien. "It could make lightweight mirrors that are 15 or 20 meters in diameter a reality, enabling space-based telescopes that are orders of magnitude more sensitive than ones currently deployed or being planned."

Applying an old process in a new way

The new method was developed during the COVID-19 pandemic, which Rabien says gave him some extra time to think and try out new concepts. "In a long series of tests, we researched many liquids to find out their usability for the process, investigated how the polymer growth can be carried out homogeneously, and worked to optimize the process," he said.

For chemical vapor deposition, a precursor material is evaporated and thermally split into monomeric molecules. Those molecules deposit on the surfaces in a vacuum chamber and then combine to form a polymer. This process is commonly used to apply coatings such as the ones that make electronics water-resistant, but this is the first time it has been used to create parabolic membrane mirrors with the optical qualities necessary for use in telescopes.

To create the precise shape necessary for a telescope mirror, the researchers added a rotating container filled with a small amount of liquid to the inside of the vacuum chamber. The liquid forms a perfect parabolic shape onto which the polymer can grow, forming the mirror base. When the polymer is thick enough, a reflective metal layer is applied to the top via evaporation and the liquid is washed away.

"It has long been known that rotating liquids that are aligned with the local gravitational axis will naturally form a paraboloid surface shape," said Rabien. "Utilizing this basic physics phenomenon, we deposited a polymer onto this perfect optical surface, which formed a parabolic thin membrane that can be used as the primary mirror of a telescope once coated with a reflecting surface such as aluminum."

Although other groups have created thin membranes for similar purposes, these mirrors are typically shaped using a high-quality optical mold. Using a liquid to form the shape is much more affordable and can be more easily scaled up to large sizes.

Reshaping a folded mirror

The thin and lightweight mirror created using this technique can easily be folded or rolled up during the trip to space. However, it would be nearly impossible to get it back to the perfect parabolic shape after unpacking. To reshape the membrane mirror, the researchers developed a thermal method that uses a localized temperature change created with light to enable adaptive shape control that can bring the thin membrane into the desired optical shape.

The researchers tested their approach by creating 30-cm diameter membrane mirrors in a vacuum deposition chamber. After much trial and error, they were able to produce high quality mirrors with a surface shape suitable for telescopes. They also showed that their thermal radiative adaptive shaping method worked well, as demonstrated with an array of radiators and illumination from a digital light projector.

The new membrane-based mirrors could also be used in adaptive optics systems. Adaptive optics can improve the performance of optical systems by using a deformable mirror to compensate for distortion in incoming light. Because the surface of the new membrane mirrors is deformable, these mirrors could be shaped with electrostatic actuators to create deformable mirrors that are less expensive to make than those created with conventional methods.

Read more at Science Daily

Lab-grown fat could give cultured meat real flavor and texture

Researchers have successfully bulk-produced fat tissue in the lab that has a similar texture and make-up to naturally occurring fats from animals.

The results, described in a study published today in eLife, could be applied to the production of cultured meat grown entirely from cells, giving it a more realistic texture and flavour.

Cultivated meat has been making waves in the news lately, with reports from startup companies around the world developing cell-grown chicken, beef, pork and fish -- mostly in early stages of development, not ready for large-scale production and with a couple of exceptions, not yet approved for commercial sale. Most of those products in development are in the form of an unstructured mixture of cells -- like chicken nuggets rather than a slice of chicken breast. What is lacking is the texture of real meat, created by muscle fibres, connective tissue and fat -- and it's the fat that gives meat flavour.

In fact, consumer testing with natural beef of different fat content showed that the highest scores were registered for beef containing 36% fat.

However, producing cultured fat tissue in sufficient quantities has been a major challenge because, as the fat grows into a mass, the cells in the middle become starved of oxygen and nutrients. In nature, blood vessels and capillaries deliver oxygen and nutrients throughout the tissue. Researchers still have no way to replicate that vascular network at a large scale in lab-grown tissue, so they can only grow muscle or fat to a few millimetres in size.

To get around this limitation, the researchers grew fat cells from mice and pigs first in a flat, two-dimensional layer, then harvested those cells and aggregated them into a three-dimensional mass with a binder such as alginate and mTG, which are both already used in some foods.

"Our goal was to develop a relatively simple method of producing bulk fat. Since fat tissue is predominantly cells with few other structural components, we thought that aggregating the cells after growth would be sufficient to reproduce the taste, nutrition and texture profile of natural animal fat," says first author John Yuen Jr, a graduate student at the Tufts University Center for Cellular Architecture (TUCCA), Massachusetts, US. "This can work when creating the tissue solely for food, since there's no requirement to keep the cells alive once we gather the fat in bulk."

The aggregated fat cells immediately had the appearance of fat tissue, but to see if they truly reproduced the features of native fat from animals, the team carried out a series of further experiments.

First, they explored the texture, by compressing the fat tissue and seeing how much pressure it could withstand compared to natural animal fat. They found that cell-grown fat bound with sodium alginate was able to withstand a similar amount of pressure to fat from livestock and poultry, but the cell-grown fat that was bound with mTG behaved more like rendered fat -- similar to lard or tallow. This suggests it could be possible to fine-tune the texture of cultured fat, so it best resembles the real-life texture of fat within meat, using different types and amounts of binders.

Cooking releases hundreds of compounds that add flavour to the meat, and most of those compounds originate from fat, including lipids and their component fatty acids. The team therefore examined the composition of molecules from the cell-grown fat and found that the mix of fatty acids from cultured mouse fat differed from native mouse fat. However, the cultured pig fat had a much closer fatty acid profile to the native tissue. The team's preliminary research suggests it might be possible to supplement growing fat cells with the required lipids to ensure that they more closely match the composition of natural meat.

Read more at Science Daily

Obesity turning arthritic joint cells into pro-inflammation 'bad apples'

Being overweight may be physically changing the environment within people's joints, as new research suggests that obesity is promoting pro-inflammatory conditions which worsen arthritis.

In a new study published in Clinical and Translational Medicine today, researchers from the University of Birmingham have found that specific cells in the joint lining tissue (synovium) of patients with osteoarthritis are being changed due to factors associated with obesity.

Previous research has shown that fat tissue that has been metabolically altered by obesity releases proteins called cytokines and adipokines, which are known to promote inflammation around the body. The newly published study funded by Versus Arthritis observed that in cells taken from biopsies of arthritic joints, obesity also changes the environment within the joint itself, leaving cells in the joint vulnerable to being 'turned' into those that promote inflammation.

Dr Susanne Wijesinghe from the Institute of Inflammation and Ageing at the University of Birmingham said: "We have seen that obesity can promote the kind of destructive inflammation in joints that goes far beyond what we might expect to see from wear and tear alone, even in non-weight-bearing joints such as the hands.

"Obesity is creating an environment in the body, which is negatively affecting cells called synovial fibroblasts, which are stem cells involved in regulating the lubricating fluid of the joints. The effect is that these cells get recoded into those that promote inflammation within the fluid around the joints. Then, like bad apples in a barrel, they begin to affect the whole joint, increasing secretion of chemicals such as CHI3L1 which degrade the joint and increase the progression of osteoarthritis."

Hips don't lie -- weight isn't driving factor in load-bearing joints

Weight itself, the research found, wasn't the driving factor behind the changes to joint cells that lead to greater inflammation.

The team of researchers used biopsy information from a range of joints, including weight-bearing joints such as hips and knees as well as the hands, to determine whether the additional physical strain on joints associated with obesity was driving pro-inflammatory cytokines. The results showed independent impacts of obesity on load-bearing and non-load-bearing joints, and that among the 16 patients with a BMI of over 30, weight alone didn't account for the molecular changes in those joints.

Simon Jones, Professor in Musculoskeletal Ageing in the Institute of Inflammation and Ageing at the University of Birmingham said: "This research helps us to both design better studies that more accurately understand the different conditions that affect patients with osteoarthritis, and it also better guides the way we develop drugs for the condition in the future.

"Potential targets and ways of delivering drugs can now be specifically considered for patients who do and don't have metabolic changes driven by obesity. In addition, if we treat osteoarthritis patients with obesity as a clinical sub-group we can also see whether specific therapies that address the metabolic element driving the disease can halt that underlying risk."

Zoe Chivers, Director of Services and Influencing at the charity Versus Arthritis said: "This study provides further evidence that osteoarthritis (OA) is not just inevitable 'wear and tear,' but the result of complex and diverse biochemical changes in the joint.

"The research reveals that obesity can lead to a change in the cells in the joint lining to make them more inflammatory, and that these changes occur not only in load bearing joints such as the knee and hips, but also in non-load bearing joints such as the hand.

Read more at Science Daily

Legacy industrial contamination in the Arctic permafrost

Many of us picture the Arctic as largely untouched wilderness. But that has long since ceased to be true for all of the region. It is also home to oilfields and pipelines, mines and various other industrial activities. The corresponding facilities were built on a foundation once considered to be particularly stable and reliable: permafrost. This unique type of soil, which can be found in large expanses of the Northern Hemisphere, only thaws at the surface in summer. The remainder, extending up to hundreds of metres down, remains frozen year-round.

Accordingly, permafrost has not only been viewed as a solid platform for buildings and infrastructure. "Traditionally, it's also been considered a natural barrier that prevents the spread of pollutants," explains Moritz Langer from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI). "Consequently, industrial waste from defunct or active facilities was often simply left on-site, instead of investing the considerable effort and expense needed to remove it." As a result of industrial expansion during the Cold War, over the decades this led to micro-dumps full of toxic sludge from oil and gas exploration, stockpiles of mining debris, abandoned military installations, and lakes into which pollutants were intentionally poured. "In many cases, the assumption was that the permafrost would reliably and permanently seal off these toxic substances, which meant there was no need for costly disposal efforts," says Guido Grosse, who heads the AWI's Permafrost Research Section. "Today, this industrial legacy still lies buried in the permafrost or on its surface. The substances involved range from toxic diesel fuel to heavy metals and even radioactive waste."

But as climate change progresses, this "sleeping giant" could soon become an acute threat: since the permafrost regions are warming two to four times as fast as the rest of the world, the frozen soil is increasingly thawing. When this happens, it changes the hydrology of the region in question, and the permafrost no longer provides an effective barrier. As a result, contaminants that have accumulated in the Arctic over decades can be released, spreading across larger regions.

In addition, thawing permafrost becomes more and more unstable, which can lead to further contamination. When the ground collapses, it can damage pipelines, chemical stockpiles and depots. Just how real this risk already is can be seen in a major incident from May 2020 near the industrial city of Norilsk in northern Siberia: a destabilized storage tank released 17,000 metric tons of diesel, which polluted the surrounding rivers, lakes and tundra. According to Langer: "Incidents like this could easily become more frequent in the future."

In order to more accurately assess such risks, he and an international team of experts from Germany, the Netherlands and Norway took a closer look at industrial activities in the High North. To do so, they first analysed freely available data from the portal OpenStreetMap and from the Atlas of Population, Society and Economy in the Arctic. According to these sources, the Arctic permafrost regions contain ca. 4,500 industrial sites that either store or use potentially hazardous substances.

"But this alone didn't tell us what types of facilities they were, or how badly they could potentially pollute the environment," says Langer. More detailed information on contaminated sites is currently only available for North America, where roughly 40 percent of the global permafrost lies. The data from Canada and Alaska showed that, using the location and type of facility, it should be possible to accurately estimate where hazardous substances were most likely to be found.

For Alaska, the Contaminated Sites Program also offers insights into the respective types of contaminants. For example, roughly half of the contaminations listed can be attributed to fuels like diesel, kerosene and petrol. Mercury, lead and arsenic are also in the top 20 documented environmental pollutants. And the problem isn't limited to the legacy of past decades: although the number of newly registered contaminated sites in the northernmost state of the USA declined from ca. 90 in 1992 to 38 in 2019, the number of affected sites continues to rise.

There are no comparable databases for Siberia's extensive permafrost regions. "As such, our only option there was to analyse reports on environmental problems that were published in the Russian media or other freely accessible sources between 2000 and 2020," says Langer. "But the somewhat sparse information available indicates that industrial facilities and contaminated sites are also closely linked in Russia's permafrost regions."

Using computer models, the team calculated the occurrence of contaminated sites for the Arctic as a whole. According to the results, the 4,500 industrial facilities in the permafrost regions have most likely produced between 13,000 and 20,000 contaminated sites. 3,500 to 5,200 of them are located in regions where the permafrost is still stable, but will start to thaw before the end of the century. "But without more extensive data, these findings should be considered a rather conservative estimate," Langer emphasises. "The true scale of the problem could be even greater."
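
The scaling implicit in those figures is easy to check (a quick sketch using only the numbers quoted above):

    facilities = 4500
    sites_low, sites_high = 13_000, 20_000

    # Implied contaminated sites per industrial facility:
    print(sites_low / facilities, sites_high / facilities)   # ~2.9 to ~4.4

    # Share of contaminated sites sitting on still-stable permafrost
    # that is projected to thaw before 2100:
    print(3500 / sites_high, 5200 / sites_low)               # ~18% to 40%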

Making matters worse, the interest in pursuing commercial activities in the Arctic continues to grow. As a result, more and more industrial facilities are being constructed, which could also release toxic substances into nearby ecosystems. Further, this is happening at a time when removing such environmental hazards is getting harder and harder -- after all, doing so often requires vehicles and heavy gear, which can hardly be used on vulnerable tundra soils that are increasingly affected by thaw.

Read more at Science Daily

Apr 4, 2023

A new measurement could change our understanding of the Universe

The Universe is expanding -- but how fast exactly? The answer appears to depend on whether you estimate the cosmic expansion rate -- referred to as the Hubble constant, or H0 -- based on the echo of the Big Bang (the cosmic microwave background, or CMB) or measure H0 directly based on today's stars and galaxies. This problem, known as the Hubble tension, has puzzled astrophysicists and cosmologists around the world.

A study carried out by the Stellar Standard Candles and Distances research group, led by Richard Anderson at EPFL's Institute of Physics, adds a new piece to the puzzle. Their research, published in Astronomy & Astrophysics, achieved the most accurate calibration of Cepheid stars -- a type of variable star whose luminosity fluctuates over a defined period -- for distance measurements to date based on data collected by the European Space Agency's (ESA's) Gaia mission. This new calibration further amplifies the Hubble tension.

The Hubble constant (H0) is named after Edwin Hubble, the astrophysicist who, together with Georges Lemaître, discovered the phenomenon in the late 1920s. It's measured in kilometers per second per megaparsec (km/s/Mpc), where 1 Mpc is around 3.26 million light years.

The best direct measurement of H0 uses a "cosmic distance ladder," whose first rung is set by the absolute calibration of the brightness of Cepheids, now recalibrated by the EPFL study. In turn, Cepheids calibrate the next rung of the ladder, where supernovae -- powerful explosions of stars at the end of their lives -- trace the expansion of space itself. This distance ladder, measured by the Supernovae, H0, for the Equation of State of dark energy (SH0ES) team led by Adam Riess, winner of the 2011 Nobel Prize in Physics, puts H0 at 73.0 ± 1.0 km/s/Mpc.

First radiation after the Big Bang

H0 can also be determined by interpreting the CMB -- which is the ubiquitous microwave radiation left over from the Big Bang more than 13 billion years ago. However, this "early Universe" measurement method has to assume the most detailed physical understanding of how the Universe evolves, rendering it model dependent. The ESA's Planck satellite has provided the most complete data on the CMB, and according to this method, H0 is 67.4 ± 0.5 km/s/Mpc.

The Hubble tension refers to this discrepancy of 5.6 km/s/Mpc, depending on whether the CMB (early Universe) method or the distance ladder (late Universe) method is used. The implication, provided that the measurements performed in both methods are correct, is that there is something wrong in the understanding of the basic physical laws that govern the Universe. Naturally, this major issue underscores how essential it is for astrophysicists' methods to be reliable.
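
The size of that discrepancy in statistical terms follows directly from the two quoted measurements (a quick worked check; combining the two uncertainties in quadrature is the standard back-of-the-envelope approach):

    import math

    h0_ladder, err_ladder = 73.0, 1.0   # SH0ES distance ladder, km/s/Mpc
    h0_cmb, err_cmb = 67.4, 0.5         # Planck CMB, km/s/Mpc

    gap = h0_ladder - h0_cmb                          # 5.6 km/s/Mpc
    sigma = gap / math.hypot(err_ladder, err_cmb)     # quadrature sum of errors
    print(f"{gap:.1f} km/s/Mpc, {sigma:.1f} sigma")   # 5.6 km/s/Mpc, 5.0 sigma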

The new EPFL study is so important because it strengthens the first rung of the distance ladder by improving the calibration of Cepheids as distance tracers. Indeed, the new calibration allows us to measure astronomical distances to within ± 0.9%, and this lends strong support to the late Universe measurement. Additionally, the results obtained at EPFL, in collaboration with the SH0ES team, helped to refine the H0 measurement, resulting in improved precision and an increased significance of the Hubble tension.

"Our study confirms the 73 km/s/Mpc expansion rate, but more importantly, it also provides the most precise, reliable calibrations of Cepheids as tools to measure distances to date," says Anderson. "We developed a method that searched for Cepheids belonging to star clusters made up of several hundreds of stars by testing whether stars are moving together through the Milky Way. Thanks to this trick, we could take advantage of the best knowledge of Gaia's parallax measurements while benefiting from the gain in precision provided by the many cluster member stars. This has allowed us to push the accuracy of Gaia parallaxes to their limit and provides the firmest basis on which the distance ladder can be rested."

Rethinking basic concepts

Why does a difference of just a few km/s/Mpc matter, given the vast scale of the Universe? "This discrepancy has a huge significance," says Anderson. "Suppose you wanted to build a tunnel by digging into two opposite sides of a mountain. If you've understood the type of rock correctly and if your calculations are correct, then the two holes you're digging will meet in the center. But if they don't, that means you've made a mistake -- either your calculations are wrong or you're wrong about the type of rock. That's what's going on with the Hubble constant. The more confirmation we get that our calculations are accurate, the more we can conclude that the discrepancy means our understanding of the Universe is mistaken, that the Universe isn't quite as we thought."

The discrepancy has many other implications. It calls into question the very fundamentals, like the exact nature of dark energy, the space-time continuum, and gravity. "It means we have to rethink the basic concepts that form the foundation of our overall understanding of physics," says Anderson.

Read more at Science Daily

One of the Swedish warship Vasa's crew members was a woman

When the human remains found on board the warship Vasa were investigated, it was determined that the skeleton designated G was a man. New research now shows that the skeleton is actually from a woman.

About thirty people died when Vasa sank on its maiden voyage in 1628. We cannot know who most of them were; only one person is named in the written sources. When the ship was raised in 1961, it became the scene of a comprehensive archaeological excavation, in which numerous human bones were found on board and examined.

“Through osteological analysis it has been possible to discover a great deal about these people, such as their age, height and medical history. Osteologists recently suspected that G could be female, on the basis of the pelvis. DNA analysis can reveal even more”, says Dr Fred Hocker, director of research at the Vasa Museum, in Stockholm, Sweden.

Since 2004 the Vasa Museum has collaborated with the Department of Immunology, Genetics and Pathology at Uppsala University in Sweden to investigate all of the remains from Vasa and find out as much as possible about each individual. Initially, the project focused on confirming whether certain bones belonged to a specific person. Marie Allen, professor of forensic genetics, has led the work.

“For us, it is both interesting and challenging to study the skeletons from Vasa. It is very difficult to extract DNA from bone which has been on the bottom of the sea for 333 years, but not impossible”, says Marie Allen. She continues:

“Already some years ago we had indications that skeleton G was not a man but a woman. Simply put, we found no Y-chromosomes in G’s genetic material. But we could not be certain and wanted to confirm the result”.

The result has now been confirmed thanks to an interlaboratory study with Dr Kimberly Andreaggi of the Armed Forces Medical Examiner System’s Armed Forces DNA Identification Laboratory (AFMES-AFDIL) in Delaware, USA. The AFMES-AFDIL is the American Department of Defense’s laboratory specializing in DNA testing of human remains from deceased military personnel. They have established a new testing method for the analysis of many different genetic variants.

“We took new samples from bones for which we had specific questions. AFMES-AFDIL has now analysed the samples, and we have been able to confirm that G was a woman, thanks to the new test”, says Marie Allen.

For Marie Allen and Kimberly Andreaggi, the analysis of the Vasa skeletons is a way to develop their forensic methods, which can then be used to analyse DNA in criminal investigations or to identify fallen soldiers.

For the Vasa Museum the results of the DNA analysis are an important puzzle piece in the museum’s research into the people on the ship. Dr. Anna Maria Forssberg, historian and researcher at the museum, explains:

“We want to come as close to these people as we can. We have known that there were women on board Vasa when it sank, and now we have received confirmation that they are among the remains. I am currently researching the wives of seamen, so for me this is especially exciting, since they are often forgotten even though they played an important role for the navy”.

More results are expected shortly from the new samples. Marie Allen and Kimberly Andreaggi will be able to say something about how individuals looked, what colour their hair and eyes were, and possibly where their families came from.

“Today we can extract much more information from historic DNA than we could earlier and methods are being continuously refined. We can say if a person was predisposed to certain illnesses, or even very small details, such as if they had freckles and wet or dry ear wax”, says Marie Allen.

Read more at Science Daily

Absolute zero in the quantum computer

The absolute lowest temperature possible is -273.15 degrees Celsius. It is never possible to cool any object exactly to this temperature -- one can only approach absolute zero. This is the third law of thermodynamics.

A research team at TU Wien (Vienna) has now investigated the question: How can this law be reconciled with the rules of quantum physics? They succeeded in developing a "quantum version" of the third law of thermodynamics: Theoretically, absolute zero is attainable. But for any conceivable recipe for it, you need three ingredients: Energy, time and complexity. And only if you have an infinite amount of one of these ingredients can you reach absolute zero.

Information and thermodynamics: an apparent contradiction

When quantum particles reach absolute zero, their state is precisely known: They are guaranteed to be in the state with the lowest energy. The particles then no longer contain any information about what state they were in before. Everything that may have happened to the particle before is perfectly erased. From a quantum physics point of view, cooling and deleting information are thus closely related.

At this point, two important physical theories meet: Information theory and thermodynamics. But the two seem to contradict each other: "From information theory, we know the so-called Landauer principle. It says that a very specific minimum amount of energy is required to delete one bit of information," explains Prof. Marcus Huber from the Atomic Institute of TU Wien. Thermodynamics, however, says that you need an infinite amount of energy to cool anything down exactly to absolute zero. But if deleting information and cooling to absolute zero are the same thing -- how does that fit together?
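
For scale, the Landauer bound is straightforward to evaluate: the minimum energy to erase one bit at temperature T is E = k_B T ln 2 (a quick worked example; the 300 K room temperature is an assumed round number):

    import math

    K_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 300                 # assumed room temperature, K

    # Landauer principle: minimum energy to erase one bit of information.
    e_min = K_B * T * math.log(2)
    print(f"{e_min:.2e} J per bit")     # ~2.87e-21 J

    # The bound vanishes as T -> 0, but thermodynamics says that reaching
    # T = 0 itself demands infinite energy, time or control complexity.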

Energy, time and complexity

The roots of the problem lie in the fact that thermodynamics was formulated in the 19th century for classical objects -- for steam engines, refrigerators or glowing pieces of coal. At that time, people had no idea about quantum theory. If we want to understand the thermodynamics of individual particles, we first have to analyse how thermodynamics and quantum physics interact -- and that is exactly what Marcus Huber and his team did.

"We quickly realised that you don't necessarily have to use infinite energy to reach absolute zero," says Marcus Huber. "It is also possible with finite energy -- but then you need an infinitely long time to do it." Up to this point, the considerations are still compatible with classical thermodynamics as we know it from textbooks. But then the team came across an additional detail of crucial importance:

"We found that quantum systems can be defined that allow the absolute ground state to be reached even at finite energy and in finite time -- none of us had expected that," says Marcus Huber. "But these special quantum systems have another important property: they are infinitely complex." So you would need infinitely precise control over infinitely many details of the quantum system -- then you could cool a quantum object to absolute zero in finite time with finite energy. In practice, of course, this is just as unattainable as infinitely high energy or infinitely long time.

Erasing data in the quantum computer

"So if you want to perfectly erase quantum information in a quantum computer, and in the process transfer a qubit to a perfectly pure ground state, then theoretically you would need an infinitely complex quantum computer that can perfectly control an infinite number of particles," says Marcus Huber. In practice, however, perfection is not necessary -- no machine is ever perfect. It is enough for a quantum computer to do its job fairly well. So the new results are not an obstacle in principle to the development of quantum computers.

Read more at Science Daily

A miniature heart in a petri dish: Organoid emulates development of the human heart

A team at the Technical University of Munich (TUM) has induced stem cells to emulate the development of the human heart. The result is a sort of "mini-heart" known as an organoid. It will permit the study of the earliest development phase of our heart and facilitate research on diseases.

The human heart starts forming approximately three weeks after conception. This places the early phase of heart development in a time when women are often still unaware of their pregnancy. That is one reason why we still have little knowledge of many details of how the heart is formed. Findings from animal studies are not fully transferable to humans. An organoid developed at TUM could prove helpful to researchers.

A ball of 35,000 cells

The team working with Alessandra Moretti, Professor of Regenerative Medicine in Cardiovascular Disease, has developed a method for making a sort of "mini-heart" using pluripotent stem cells. Around 35,000 cells are spun into a sphere in a centrifuge. Over a period of several weeks, different signaling molecules are added to the cell culture under a fixed protocol. "In this way, we mimic the signaling pathways in the body that control the developmental program for the heart," explains Alessandra Moretti. The group has now published its work in the journal Nature Biotechnology.

First-ever "epicardioids"

The resulting organoids are about half a millimeter in diameter. Although they do not pump blood, they can be stimulated electrically and are capable of contracting like human heart chambers. Prof. Moretti and her team are the first researchers in the world to successfully create an organoid containing both heart muscle cells (cardiomyocytes) and cells of the outer layer of the heart wall (epicardium). In the young history of heart organoids -- the first were described in 2021 -- researchers had previously created only organoids with cardiomyocytes and cells from the inner layer of the heart wall (endocardium).

"To understand how the heart is formed, epicardium cells are decisive," says Dr. Anna Meier, first author of the study. "Other cell types in the heart, for example in connecting tissues and blood vessels, are formed from these cells. The epicardium also plays a very important role in forming the heart chambers." The team has appropriately named the new organoids "epicardioids."

New cell type discovered

Along with the method for producing the organoids, the team has reported its first new discoveries. Through the analysis of individual cells, they have determined that precursor cells of a type only recently discovered in mice are formed around the seventh day of the development of the organoid. The epicardium is formed from these cells. "We assume that these cells also exist in the human body -- if only for a few days," says Prof. Moretti.

These insights may also offer clues as to why the fetal heart can repair itself, a capability almost entirely absent in the heart of an adult human. This knowledge could help to find new treatment methods for heart attacks and other conditions.

Producing "personalized organoids"

The team also showed that the organoids can be used to investigate the illnesses of individual patients. Using pluripotent stem cells from a patient suffering from Noonan syndrome, the researchers produced organoids that emulated characteristics of the condition in a Petri dish. Over the coming months the team plans to use comparable personalized organoids to investigate other congenital heart defects.

With the possibility of emulating heart conditions in organoids, drugs could be tested directly on them in the future. "It is conceivable that such tests could reduce the need for animal experiments when developing drugs," says Alessandra Moretti.

Read more at Science Daily

Apr 3, 2023

How cosmic winds transform galactic environments

Much like how wind plays a key role in life on Earth by sweeping seeds, pollen and more from one place to another, galactic winds -- high-powered streams of charged particles and gases -- can change the chemical make-up of the host galaxies they form in, simply by blowing in a specific direction.

Using observations made by NASA's Chandra X-ray Observatory, a new study details how these energetic winds, once released from the center of a galaxy, directly influence the temperature and metal distribution of the rest of the region.

"Galactic winds are a large part of galaxy evolution in general," said Sebastian Lopez, lead author of the study and a graduate student in astronomy at The Ohio State University. "As they blow from one end of a galaxy to another, they alter the distribution of metals across the disk and enrich the surrounding intergalactic space."

In investigating the nearby spiral galaxy NGC 253, researchers found that while the amount of these elements can vary, the abundances of oxygen, neon, magnesium, silicon, sulfur and iron peaked in the center of the galaxy and decreased with distance from it. This indicates that as the hot gas travels farther from the center and cools, it leaves behind a lower concentration of these elements.

Learning more about how the celestial detritus that makes up these vast galaxies is disseminated across the cosmos could help astronomers more deeply understand how galactic formation works in other areas of the universe. "Our research could reflect that the size of a galaxy, or even its morphology, could impact how gas leaves these systems," Lopez said. The study was published online in The Astrophysical Journal.

Between 1999 and 2018, Chandra observed NGC 253 only seven times, but by analyzing image and spectral data from those observations, Lopez and his team were able to use specialized computer software to identify the emission lines left by passing winds. While compiling this data, they found that their results run counter to previous X-ray studies of NGC 253, which posited that galactic winds expand spherically, or in a bubble-like shape.

Instead, the models Lopez's team created show how the winds move in opposite directions from the middle of the galaxy and then radiate outwards toward the upper right and lower left regions. Lopez attributes much of this discrepancy to the data available at the time of the previous studies and to the technological strides scientists have made since.

Still, there were a few similarities to previous work that did catch researchers' interest. To determine how galactic emission differences arise and if these differences depend on the galaxy's properties, they compared NGC 253 to the results of studies done on the galaxy M82, a similar starburst system located some 12 million light-years away from Earth. After detecting the same metals and similar distributions within M82 that they did with NGC 253, Lopez said that comparing the two led the team to discern that a process called charge exchange -- the stripping of an electron from a neutral atom by an ion -- plays a large part in X-ray emission.

"In order for scientists to create a realistic galaxy in simulations, we need to know where these heavy elements are going," Lopez said. "Because if you were to model it and not include charge exchange into these models, they wouldn't match up." If such calculations were inherently wrong, he said, scientists would have a hard time using their observations to make educated guesses about what the universe looks like and how it operates.

But Lopez imagines the more accurate models created from this study will help astronomers study the winds of other galaxies, such as calculating their velocities and discovering what makes them so good at creating unique stellar environments. "Next, we want to do this analysis for a larger set of different galaxies and see how things change," Lopez said.

Read more at Science Daily

Obesity treatment could offer dramatic weight loss without surgery or nausea

Imagine getting the benefits of gastric bypass surgery without going under the knife -- a new class of compounds could do just that. In lab animals, these potential treatments reduce weight dramatically and lower blood glucose. The injectable compounds also avoid the side effects of nausea and vomiting that are common with current weight-loss and diabetes drugs. Now, scientists report that the new treatment not only reduces eating but also boosts calorie burn.

The researchers will present their results today at the spring meeting of the American Chemical Society (ACS).

"Obesity and diabetes were the pandemic before the COVID-19 pandemic," says Robert Doyle, Ph.D., one of the two principal investigators on the project, along with Christian Roth, M.D. "They are a massive problem, and they are projected to only get worse."

Gastric bypass and related procedures, known collectively as bariatric surgery, offer one solution, often resulting in lasting weight loss and even remission of diabetes. But these operations carry risk, aren't suitable for everyone and aren't accessible for many of the hundreds of millions of people worldwide who are obese or diabetic. As an alternative, Doyle says, patients could tackle their metabolic problems with a drug that replicates the long-term benefits of surgery.

Those benefits are linked to a post-bypass-surgery change in the gut's secretion levels of certain hormones -- including glucagon-like peptide-1 (GLP-1) and peptide YY (PYY) -- that signal fullness, curb appetite and normalize blood sugar. Current drugs that aim to replicate this effect primarily activate cellular receptors for GLP-1 in the pancreas and brain. That approach has shown great success in reducing weight and treating type 2 diabetes, drawing a lot of social media postings from celebrities in recent months. But many people can't tolerate the drugs' side effects, says Doyle. "Within a year, 80 to 90% of people who start on these drugs are no longer taking them." Doyle is at Syracuse University and SUNY Upstate Medical University, and Roth is at Seattle Children's Research Institute.

To address that drawback, various researchers have designed other treatments that interact with more than one type of gut hormone receptor. For example, Doyle's group created a peptide that activates two receptors for PYY, as well as the receptor for GLP-1. Dubbed GEP44, this compound caused obese rats to eat up to 80% less than they would typically eat. By the end of one 16-day study, they lost an average of 12% of their weight. That was more than three times the amount lost by rats treated with liraglutide, an injected drug that activates only the GLP-1 receptor and that is approved by the U.S. Food and Drug Administration for treating obesity. In contrast to liraglutide, tests with GEP44 in rats and shrews (a mammal that, unlike rats, is capable of vomiting) revealed no sign of nausea or vomiting, possibly because activating multiple receptors may cancel out the intracellular signaling pathway that drives those symptoms, Doyle says.

In its latest results, his team is now reporting that the weight loss caused by GEP44 can be traced not only to decreased eating, but also to higher energy expenditure, which can take the form of increased movement, heart rate or body temperature.

GEP44 has a half-life in the body of only about an hour, but Doyle's group has just designed a peptide with a much longer half-life. That means it could be injected only once or twice a week instead of multiple times a day. The researchers are now reporting that rats treated with this next-generation compound keep their new, slimmer physique even after treatment ends, which often isn't the case with currently approved drugs, Doyle says.
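
To see why half-life dictates dosing frequency, a back-of-envelope sketch assuming simple first-order elimination helps; the one-hour figure comes from the article, while the 40-hour half-life for the next-generation peptide is a placeholder, since its actual value isn't given.

```python
def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of an injected dose still circulating after a given time,
    assuming simple first-order (exponential) elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# With GEP44's roughly one-hour half-life, almost nothing of a dose survives
# a day -- hence the need for multiple injections per day.
print(fraction_remaining(24, half_life_hours=1))    # ~6e-8 of the dose left

# A hypothetical 40-hour half-life (placeholder value) would keep a meaningful
# fraction circulating across the 3-4 days between weekly or twice-weekly shots.
print(fraction_remaining(84, half_life_hours=40))   # ~0.23 of the dose left
```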

But weight loss isn't the only benefit of the peptide treatments. They also reduce blood sugar by pulling glucose into muscle tissue, where it can be used as fuel, and by converting certain cells in the pancreas into insulin-producing cells, helping replace those that are damaged by diabetes. And there's yet another benefit: Doyle and Heath Schmidt, Ph.D., of the University of Pennsylvania, recently reported that GEP44 reduces the craving for opioids such as fentanyl in rats. If that also works in humans, Doyle says, it could help addicts quit the illicit drugs or fend off a relapse.

The researchers have filed for patents on their compounds, and they plan to test their peptides in primates. They will also study how the treatments change gene expression and rewire the brain, and what that could mean for these compounds, as well as other types of medication.

Read more at Science Daily

Most of world's salt marshes likely to be underwater by 2100, study concludes

Cape Cod's salt marshes are as iconic as they are important. These beautiful, low-lying wetlands are some of the most biologically productive ecosystems on Earth. They play an outsized role in nitrogen cycling, act as carbon sinks, protect coastal development from storm surge, and provide critical habitats and nurseries for many fish, shellfish, and coastal birds.

And, according to new research from the Marine Biological Laboratory (MBL), more than 90 percent of the world's salt marshes are likely to be underwater by the end of the century.

The findings come from a 50-year study in Great Sippewissett Marsh in Falmouth, Massachusetts. Since 1971, scientists from the MBL Ecosystems Center have mapped vegetative cover in experimental plots in this marsh to examine whether increased nitrogen in the environment would impact species of marsh grass. Due to the study's length, they also were able to detect the effects of climate change on the ecosystem, especially those driven by accelerating sea level rise.

The researchers found that increased nitrogen favored higher levels of vegetation and accretion of the marsh surface, but that no matter what concentration of nitrogen they applied to the marsh, these ecosystems will not be able to outpace submergence from global sea level rise.
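
The core finding reduces to a race between two rates: the marsh persists only while surface accretion keeps pace with the water. The toy sketch below illustrates that logic; every rate in it is a hypothetical placeholder, not a measurement from the Great Sippewissett plots.

```python
# Toy model: marsh elevation gains from accretion, loses to (accelerating)
# sea level rise. All rates are hypothetical placeholders.

def years_until_submerged(elevation_mm, accretion_mm_yr, slr_mm_yr, slr_accel=0.0):
    """Years until relative elevation reaches zero, given a constant accretion
    rate and a sea level rise rate that may accelerate (mm/yr per year)."""
    year = 0
    while elevation_mm > 0 and year < 500:
        elevation_mm += accretion_mm_yr - (slr_mm_yr + slr_accel * year)
        year += 1
    return year if elevation_mm <= 0 else None

# Nitrogen-fertilized plots accrete faster, buying decades -- but with
# accelerating sea level rise, both scenarios end underwater.
print(years_until_submerged(300, accretion_mm_yr=3.0, slr_mm_yr=3.5, slr_accel=0.1))
print(years_until_submerged(300, accretion_mm_yr=5.0, slr_mm_yr=3.5, slr_accel=0.1))
```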

"Places like Great Sippewissett Marsh will likely become shallow inlets by the turn of the century," says MBL Distinguished Scientist Ivan Valiela, lead author of the study. "Even under conservative sea level estimates…more than 90% of the salt marshes of the world will likely be submerged and disappear or be diminished by the end of the century."

"This is not a prediction from isolated scientists worried about little details. Major changes are going to be taking place on the surface of the Earth that will change the nature of coastal environments," says Valiela.

An Ecosystem Engineer

Salt marshes are gently sloping ecosystems and their plants have very narrow preferences for the elevations in which they can grow. Different species grow in the upper elevations (high marsh) versus the low elevation closer to the ocean (low marsh) and have different responses to changes in nitrogen supply. When change happens slowly enough, the grasses can migrate to their preferred elevation.

In the low marsh, cordgrass (Spartina alterniflora) prospered as scientists increased the nitrogen supply. Among high marsh species, the abundance of marsh hay (Spartina patens) in the experimental plots decreased with sea level rise. Saltgrass (Distichlis spicata) increased with nitrogen supply and also acted as what the researchers called an "ecosystem engineer" -- increasing the rate at which marsh elevation rose. Accretion of biomass left behind by the decomposing saltgrass compensated for the increased submergence resulting from rising sea level in these areas.

"Saltgrass disappeared after a few decades, but it left a legacy behind," says MBL Research Scientist Javier Lloret, adding that it was "extremely cool to see that interaction in the dataset."

Regardless of how much nitrogen was added to the environment, the research showed that at current and forecasted rates of sea level rise, low marsh species will completely replace high marsh species. As sea levels continue to rise, even those species will be submerged.

"At some point, if sea level continues to increase at the rates that we anticipate, there will even be no more room for the low marsh plants. They're just going to be too submerged to survive," says Valiela.

The only alternative would be for salt marshes to migrate landward.

A Coastal Squeeze

Marshes around the world face what Lloret calls a "coastal squeeze," where sea level rise pushes from one direction and human development pushes from the other. A seawall that may protect a home from flooding will prevent the migration of a marsh naturally moving to higher ground.

"These barriers, whether they be geographic like a hill or a cliff, or people building along the edges of the ecosystem, constrain the potential for landward marsh migration," says MBL Research Assistant Kelsey Chenoweth. "On top of that, sea level rise is accelerating and marshes are having a hard time keeping up."

In a sea level rise scenario like the one we're facing, "the only solution for the plants will be to colonize new areas, to go uphill," says Lloret. "But that migration may just be impossible in some places."

"Sea level rise is the most important threat to salt marshes. We really need to figure out what's going to happen to these ecosystems and learn how to prevent some of the losses from happening or try to adapt to them, so marshes can continue to play these important roles for nature as well as humans," says Lloret.

Half a Century of Science

In 1971, the scientists at the MBL Ecosystems Center had no idea they would be using their data to study global sea level rise.

"This was an experiment that started looking at one ecological control (nitrogen), and then because of the longevity of the project, we were able to add new knowledge about this major accelerating agent of global change -- global sea level rise," says Valiela.

That's the benefit of long-term datasets like the one at Great Sippewissett Marsh.

"You're setting a baseline to the problems that haven't even happened yet," says Chenoweth.

When scientists measure ecological processes like climate change and eutrophication, the data can ebb and flow over the course of years as the ecosystem responds to external stimuli. These changes operate on a much longer time scale than those in other biological systems.

"To study a tree, you look at changes through seasons and you should be able to see its whole cycle. For a leaf, you look at patterns between day and night. In single cells, you look at processes that take place at the timescale of minutes or seconds … but for an entire ecosystem, we're talking many years or decades," says Lloret. "You need to be thinking at the scale of decades or even centuries in order to be able to see substantial changes."

Read more at Science Daily

Ancient DNA reveals Asian ancestry introduced to East Africa in early modern times

While serfs toiled and knights jousted in Europe and samurai and shoguns rose to power in Japan, the medieval peoples of the Swahili civilization on the coast of East Africa lived in multicultural, coral-stone towns and engaged in trade networks spanning the Indian Ocean.

Archaeologists, anthropologists, and linguists have been locked in a century-long debate about how much people from outside Africa contributed to Swahili culture and ancestry. Swahili communities have their own histories, and evidence points in multiple directions.

The largest-yet analysis of ancient DNA in Africa, which includes the first ancient DNA recovered from members of the Swahili civilization, has now broken the stalemate.

The study reveals that a significant number of people from Southwest Asia moved to the Swahili coast in medieval and early modern times and had children with the people living there. Yet the research also shows that hallmarks of the Swahili civilization predated those arrivals.

"Archaeological evidence overwhelmingly showed that the medieval Swahili civilization was an African one, but we still wanted to understand and contextualize the nonlocal heritage," said co-senior author Chapurukha Kusimba, professor of anthropology at the University of South Florida.

"Taking a genetics pathway to find the answers took courage and opened doors beyond which lie answers that force us to think in new ways," he said.

The analyses, published online March 29 in Nature, included the newly sequenced ancient DNA of 80 individuals from the Swahili coast and inland neighbors dating from 1300 CE to 1900 CE.

They also included new genomic sequences from 93 present-day Swahili speakers and previously published genetic data from a variety of ancient and present-day eastern African and Eurasian groups.

The international team was led by Kusimba and David Reich, professor of genetics in the Blavatnik Institute at Harvard Medical School and professor of human evolutionary biology at Harvard University.

Mixing between Asia and Africa

The study revealed that around 1000 CE, a stream of migrants from Southwest Asia intermingled with African people at multiple locations along the Swahili coast, contributing close to half of the ancestry of the analyzed ancient individuals.

"The results provide unambiguous evidence of ongoing cultural mixing on the East African coast for more than a millennium, in which African people interacted and had families with immigrants from other parts of Africa and the Indian Ocean world," said Reich.

The study confirmed that the bedrock of Swahili culture remained unchanged even as the newcomers arrived and Islam became a dominant regional religion, said Kusimba; the primary language, tomb architecture, cuisine, material culture, and practices of matrilocal residence and matriarchal kinship remained African and Bantu in nature.

The findings contradict one widely discussed scholarly view, which held that there was little contribution from foreigners to Swahili peoples, the authors said.

The researchers added that the findings also refute a diametrically opposed viewpoint prevalent in colonial times, which held that Africans provided little contribution to the Swahili towns.

"Ancient DNA allowed us to address a longstanding controversy that could not be tested without genetic data from these times and places," Reich said.

The researchers found that the initial waves of newcomers were mainly from Persia. These findings align with the oldest Swahili oral stories, which tell of Persian (Shirazi) merchants or princes arriving on the Swahili shores.

"It was exciting to find biological evidence that Swahili oral history probably depicts Swahili genetic ancestry as well as cultural legacy," said Esther Brielle, research fellow in genetics in the Reich lab.

Brielle is co-first author of the paper with Stephanie Wynne-Jones at the University of York and Jeffrey Fleisher at Rice University.

After about 1500 CE, ancestry sources became increasingly Arabian. In later centuries, intermingling with other populations from Asia and Africa further changed the genetic makeup of Swahili-coast communities.

Ancestry contributions from women from India

Analyses also showed that the initial stream of migrants had about 90 percent ancestry from Persian men and 10 percent ancestry from Indian women.

Although South Asian-associated artifacts are well documented at Swahili archaeological sites and Indian words have been integrated into Swahili, "no one had previously hypothesized an important role for Indian people in contributing to the populations of the medieval Swahili towns," said Reich.

Extreme sex differences in genetic contributions

The predominant groups that contributed to Swahili-coast populations during the initial influx around 1000 CE were Persian men and African women. Similar genetic signatures of sex imbalance in other populations around the world sometimes indicate that incoming men forcibly married local women, but that scenario does not align with the matriarchal traditions of Swahili societies, the authors said.
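
Sex imbalances like this are typically inferred by contrasting ancestry on the autosomes, which are inherited equally from both parents, with ancestry on the X chromosome, two-thirds of which is carried and transmitted by females. The sketch below illustrates that standard population-genetics logic; the summary doesn't specify the study's exact procedure.

```python
def expected_ancestry(male_frac: float, female_frac: float):
    """Expected ancestry fractions from a source population, given the share of
    the admixing males and females who came from it. Autosomes come equally
    from fathers and mothers; at any generation, 2/3 of X chromosomes are
    contributed by females and 1/3 by males."""
    autosomal = (male_frac + female_frac) / 2
    x_chromosome = (male_frac + 2 * female_frac) / 3
    return autosomal, x_chromosome

# If Asian ancestry entered almost entirely through men (Persian fathers,
# African mothers), the X should carry visibly less Asian ancestry:
auto, x = expected_ancestry(male_frac=1.0, female_frac=0.0)
print(auto, x)   # 0.5 genome-wide vs ~0.33 on X -- the telltale imbalance
```

Note that an essentially all-male Asian contribution yields about half Asian ancestry genome-wide, consistent with the "close to half" figure reported above.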

A more likely explanation, said Reich, is that "Persian men allied with and married into local trading families and adopted local customs to enable them to be more successful traders."

The authors say their hypothesis is supported by the fact that the children of Persian fathers and Swahili-coast mothers passed down their mothers' language, and that the region's matriarchal traditions persisted even after locals settled down with people from the traditionally patriarchal regions of Persia and Arabia and practiced the Islamic religion of their male ancestors.

Genetics and identity

The team found that the proportion of Persian-Indian ancestry has decreased among many people of the Swahili coast in the last several centuries. Many among those in present-day Kenya who identify as Swahili and had their genomes analyzed were "genetically very different" from the people who lived in the region during medieval times, the authors found, while others retained substantial medieval ancestry.

"These results highlight an important lesson from ancient DNA: While we can learn about the past with genetics, it does not define present-day identity," said Reich.

Decolonizing history

In addition to helping to diversify the populations included in ancient DNA research, the study pushes back against "a profoundly difficult history" of more than 500 years of colonization in this region of Africa, which continues to be a major problem today, said Reich.

"The story of Swahili origins has been molded almost entirely by non-Swahili people," he said.

Read more at Science Daily

Apr 2, 2023

Scientists observe flattest explosion ever seen in space

Astronomers have observed an explosion 180 million light years away that appeared much flatter than ever thought possible, challenging our current understanding of explosions in space.

  • Explosions are almost always expected to be spherical, as the stars themselves are spherical, but this one is the flattest ever seen
  • The explosion observed was an extremely rare Fast Blue Optical Transient (FBOT), known colloquially amongst astronomers as "the cow." Only four others have ever been seen, and scientists don't know how they occur, but this discovery has helped solve part of the puzzle
  • A potential explanation for how this explosion occurred is that the star itself may have been surrounded by a dense disc, or it may have been a failed supernova

An explosion the size of our solar system has baffled scientists, as part of its shape -- similar to that of an extremely flat disc -- challenges everything we know about explosions in space.

The explosion observed was a bright Fast Blue Optical Transient (FBOT) -- an extremely rare class of explosion, far less common than supernovas. The first bright FBOT was discovered in 2018 and given the nickname "the cow."

Explosions of stars in the universe are almost always spherical in shape, as the stars themselves are spherical. However, this explosion, which occurred 180 million light years away, is the most aspherical ever seen in space, with a shape like a disc emerging a few days after it was discovered. This section of the explosion may have come from material shed by the star just before it exploded.

It's still unclear how bright FBOT explosions occur, but it's hoped that this observation, published in Monthly Notices of the Royal Astronomical Society, will bring us closer to understanding them.

Dr Justyn Maund, Lead Author of the study from the University of Sheffield's Department of Physics and Astronomy, said: "Very little is known about FBOT explosions -- they just don't behave like exploding stars should, they are too bright and they evolve too quickly. Put simply, they are weird, and this new observation makes them even weirder.

"Hopefully this new finding will help us shed a bit more light on them -- we never thought that explosions could be this aspherical. There are a few potential explanations for it: the stars involved may have created a disc just before they died or these could be failed supernovas, where the core of the star collapses to a black hole or neutron star which then eats the rest of the star.

"What we now know for sure is that the levels of asymmetry recorded are a key part of understanding these mysterious explosions, and it challenges our preconceptions of how stars might explode in the Universe."

Scientists made the discovery after spotting a flash of polarised light completely by chance. They were able to measure the polarisation of the blast -- using the astronomical equivalent of polaroid sunglasses -- with the Liverpool Telescope (owned by Liverpool John Moores University) located on La Palma.

Measuring the polarisation allowed them to determine the shape of the explosion -- effectively seeing something the size of our Solar System in a galaxy 180 million light years away. They were then able to use the data to reconstruct the 3D shape of the explosion and map the edges of the blast, allowing them to see just how flat it was.

The mirror of the Liverpool Telescope is only 2.0m in diameter, but by studying the polarisation the astronomers were able to reconstruct the shape of the explosion as if the telescope had a diameter of about 750km.
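
The 750 km figure follows from the diffraction limit: directly resolving a solar-system-sized blast at 180 million light years would demand an aperture hundreds of kilometres across. A rough sketch of the arithmetic (the 10 AU blast scale and 550 nm observing wavelength are illustrative assumptions):

```python
LY_M = 9.461e15    # metres per light year
AU_M = 1.496e11    # metres per astronomical unit

distance_m = 180e6 * LY_M      # 180 million light years
blast_size_m = 10 * AU_M       # solar-system scale (assumed ~10 AU)
wavelength_m = 550e-9          # visible light (assumed)

# Angular size of the blast as seen from Earth (small-angle approximation)
theta_rad = blast_size_m / distance_m

# Rayleigh criterion: aperture needed to resolve that angle directly
aperture_m = 1.22 * wavelength_m / theta_rad

print(f"angular size: {theta_rad:.2e} rad")
print(f"aperture needed: {aperture_m / 1e3:.0f} km")  # ~760 km, vs the 2 m mirror
```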

Read more at Science Daily

Predatory dinosaurs such as T. rex sported lizard-like lips

A new study suggests that predatory dinosaurs, such as Tyrannosaurus rex, did not have permanently exposed teeth as depicted in films such as Jurassic Park, but instead had scaly, lizard-like lips covering and sealing their mouths.

Researchers and artists have debated whether theropod dinosaurs, the group of two-legged dinosaurs that includes carnivores and top predators like T. rex and Velociraptor, as well as birds, had lipless mouths where perpetually visible upper teeth hung over their lower jaws, similar to the mouth of a crocodile.

However, an international team of researchers challenges some of the best-known depictions, saying these dinosaurs had lips similar to those of lizards and their relative, the tuatara -- a rare reptile found only in New Zealand and the last survivor of an order of reptiles that thrived in the age of the dinosaurs.

In the most detailed study of this issue yet, the researchers examined the tooth structure, wear patterns and jaw morphology of lipped and lipless reptile groups and found that theropod mouth anatomy and functionality resembles that of lizards more than crocodiles. This implies lizard-like oral tissues, including scaly lips covering their teeth.

These lips were probably not muscular, like they are in mammals. Most reptile lips cover their teeth but cannot be moved independently -- they cannot be curled back into a snarl, or make other sorts of movements we associate with lips in humans or other mammals.

Study co-author Derek Larson, Collections Manager and Researcher in Palaeontology at the Royal BC Museum in Canada, said: "Palaeontologists often like to compare extinct animals to their closest living relatives, but in the case of dinosaurs, their closest relatives have been evolutionarily distinct for hundreds of millions of years and today are incredibly specialised.

"It's quite remarkable how similar theropod teeth are to monitor lizards. From the smallest dwarf monitor to the Komodo dragon, the teeth function in much the same way. So, monitors can be compared quite favourably with extinct animals like theropod dinosaurs based on this similarity of function, even though they are not closely related."

Co-author Dr Mark Witton from the University of Portsmouth said: "Dinosaur artists have gone back and forth on lips since we started restoring dinosaurs during the 19th century, but lipless dinosaurs became more prominent in the 1980s and 1990s. They were then deeply rooted in popular culture through films and documentaries -- Jurassic Park and its sequels, Walking with Dinosaurs and so on.

"Curiously, there was never a dedicated study or discovery instigating this change and, to a large extent, it probably reflected preference for a new, ferocious-looking aesthetic rather than a shift in scientific thinking. We're upending this popular depiction by covering their teeth with lizard-like lips. This means a lot of our favourite dinosaur depictions are incorrect, including the iconic Jurassic Park T. rex."

The study, published in the journal Science, found that tooth wear in lipless animals was markedly different from that seen in carnivorous dinosaurs, and that dinosaur teeth were no larger, relative to skull size, than those of modern lizards, implying they were not too big to cover with lips.

Also, the distribution of small holes around the jaws, which supply nerves and blood to the gums and tissues around the mouth, was more lizard-like than crocodile-like in dinosaurs. Furthermore, modelling mouth closure of lipless theropod jaws showed that, to seal the mouth, the lower jaw would either have to crush jaw-supporting bones or disarticulate the jaw joint.

"As any dentist will tell you, saliva is important for maintaining the health of your teeth. Teeth that are not covered by lips risk drying out and can be subject to more damage during feeding or fighting, as we see in crocodiles, but not in dinosaurs," said co-author Kirstin Brink, Assistant Professor of Palaeontology at the University of Manitoba.

She added: "Dinosaur teeth have very thin enamel and mammal teeth have thick enamel (with some exceptions). Crocodile enamel is a bit thicker than dinosaur enamel, but not as thick as mammalian enamel. There are some mammal groups that do have exposed enamel, but their enamel is modified to withstand exposure."

Thomas Cullen, Assistant Professor of Paleobiology at Auburn University and study lead author, said: "Although it's been argued in the past that the teeth of predatory dinosaurs might be too big to be covered by lips, our study shows that, in actuality, their teeth were not atypically large. Even the giant teeth of tyrannosaurs are proportionally similar in size to those of living predatory lizards when compared for skull size, rejecting the idea that their teeth were too big to cover with lips."

The results provide new insights into how we reconstruct the soft tissues and appearance of dinosaurs and other extinct species. This can give crucial information on how they fed, how they maintained their dental health, and the broader patterns of their evolution and ecology.

Dr Witton said: "Some take the view that we're clueless about the appearance of dinosaurs beyond basic features like the number of fingers and toes. But our study, and others like it, show that we have an increasingly good handle on many aspects of dinosaur appearance. Far from being clueless, we're now at a point where we can say 'oh, that doesn't have lips? Or a certain type of scale or feather?' Then that's as realistic a depiction of that species as a tiger without stripes."

Read more at Science Daily

Path to net-zero carbon capture and storage may lead to ocean

Lehigh Engineering researcher Arup SenGupta has developed a novel way to capture carbon dioxide from the air and store it in the "infinite sink" of the ocean.

The approach uses an innovative copper-containing polymeric filter and essentially converts CO2 into sodium bicarbonate (aka baking soda) that can be released harmlessly into the ocean. This new hybrid material, or filter, is called DeCarbonHIX (i.e., decarbonization through hybrid ion exchange material), and is described in a paper recently published in the journal Science Advances.

The research, which demonstrated a 300 percent increase in the amount of carbon captured compared with existing direct air capture methods, has garnered international attention from media outlets like the BBC, CNN, Fast Company, and The Daily Beast, and professional organizations like the American Chemical Society. SenGupta himself has been fielding interest in the technology from companies based in Brazil, Ireland, and the Middle East.

"The climate crisis is an international problem," says SenGupta, who is a professor of chemical and biomolecular engineering and civil and environmental engineering in Lehigh's P.C. Rossin College of Engineering and Applied Science. "And I believe we have a responsibility to build direct air capture technology in a way that it can be implemented by people and countries around the world. Anyone who can operate a cell phone should be able to operate this process. This is not technology for making money. It's for saving the world."

The work is yet another extension of SenGupta's personal and professional commitment to developing technologies that benefit humanity, and in particular, marginalized communities around the world. His research on water science and technology has included drinking water treatment methodologies, desalination, municipal wastewater reuse, and resource recovery. He invented the first reusable, arsenic-selective hybrid anion exchanger nanomaterial (HAIX-Nano), and as a result, more than two million people around the world now drink arsenic-safe water. Two of his patents have been recognized as "Patents for Humanity" by the U.S. Patent and Trademark Office.

His invention of DeCarbonHIX was the outcome of an ongoing CO2-driven wastewater desalination project funded by the Bureau of Reclamation under the jurisdiction of the U.S. Department of the Interior. SenGupta and his students were on the lookout for a reliable supply of CO2 even in remote places. That quest led the way to the field of direct air capture, or DAC, and the creation of DeCarbonHIX. This subject was the dissertation topic for environmental engineering student Hao Chen '23 PhD, who successfully defended his PhD in March and will receive his doctorate in May.

Capturing carbon at lower concentrations

The most abundant of the greenhouse gases contributing to global warming is carbon dioxide. In 2021, global emissions of CO2 rose by 6 percent from the previous year -- to 36.3 gigatons, according to the International Energy Agency. Just one gigaton (equal to 1 billion tons) is the equivalent of the mass of all land mammals on Earth.

Greenhouse gas emissions have increased global temperatures by approximately 1.1 degrees Celsius above pre-industrial levels, according to the Intergovernmental Panel on Climate Change. In its 2021 working group report, the IPCC estimated that the average yearly temperature will rise by at least 1.5 degrees Celsius over the next 20 years. The warmer the Earth gets, the greater the fallout in terms of rising sea levels, extreme storm events, and ecological disruption, all of which have repercussions for global health, security, and stability.

"The worst part of this crisis is that people who are marginalized, who are poor, will suffer 10 times more than those who contributed to this situation," says SenGupta.

There are three ways to reduce CO2, he says. The first -- government action -- can reduce emissions, but that won't address what's already in the air.

"The second way is removing it from point sources, places like chimneys and stacks where carbon dioxide is being emitted in huge amounts," he says. "The good thing about that is you can remove it at very high concentrations, but it only targets emissions from specific sources."

The newest method is called direct air capture, which, he says, "allows you to remove CO2 from anywhere, even your own backyard."

With DAC, chemical processes remove CO2 from the atmosphere, after which it's typically stored underground. However, says SenGupta, the technology is limited by its capacity. It can't capture enough CO2 to overcome the energy cost of running the process.

"If you're capturing carbon dioxide from a chimney at a plant, the amount of CO2 in the air can be upwards of 100,000 parts per million," he says. "At that concentration, it's easy to remove. But generally speaking, the CO2 level in the air is around 400 parts per million. That's very high from a climate change point of view, but for removal purposes, we consider that ultra-dilute. Current filter materials just can't collect enough of it."

Another challenge with DAC involves storage. After the CO2 is captured, it's dissolved, put under pressure, liquified, and typically stored miles underground. A DAC operation must then be located in an area with enough geological storage -- and stability. A country like Japan, for instance, can't pump CO2 underground because the area is prone to earthquakes.

Seeing a solution in seawater

SenGupta has developed a DAC method that overcomes both the capture problem and the issue of storage.

For the capture problem, he developed DeCarbonHIX -- a mechanically strong, chemically stable sorbent (a material used to absorb liquids or gases) -- that contains copper.

"The copper changes an intrinsic property of the parent polymer material and enhances the capturing capacity by 300 percent," he says. "We showed that for direct air capture from air with 400 parts per million of CO2, we achieve capacity, meaning capacity is no longer a function of how much carbon dioxide is in the air. The filter will get saturated completely at any concentration, which means you can perform DAC in your backyard, in the middle of the desert, or in the middle of the ocean."

The ocean is actually SenGupta's solution to the storage problem. His DAC process starts with air blowing through the filter to capture CO2. Once the filter is saturated with gas molecules (determined by measuring the amount of gas going into the filter versus coming out of it), seawater is passed through the filter. The seawater converts the captured carbon dioxide to sodium bicarbonate (you likely know it as baking soda, but lose the visual, as we're talking about a dissolved solution here). The dissolved sodium bicarbonate is then released directly into the ocean, what SenGupta calls "an infinite sink."
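
The article doesn't spell out the chemistry, but one plausible hybrid ion-exchange cycle consistent with this description is sketched below; the hydroxide- and chloride-form functional groups are an assumption, not taken from the Science Advances paper.

```latex
% Hypothetical DeCarbonHIX-style cycle (functional groups assumed).
% Capture: a hydroxide-form anion-exchange site binds CO2 as bicarbonate.
\mathrm{R^{+}OH^{-} + CO_{2} \;\longrightarrow\; R^{+}HCO_{3}^{-}}
% Regeneration: chloride in seawater displaces the bicarbonate, releasing
% dissolved sodium bicarbonate into the ocean and readying the filter for reuse.
\mathrm{R^{+}HCO_{3}^{-} + Na^{+} + Cl^{-} \;\longrightarrow\; R^{+}Cl^{-} + Na^{+} + HCO_{3}^{-}}
```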

"And it has no adverse impact on the ocean whatsoever," says SenGupta. "It doesn't change the salinity at all."

In fact, he says, the sodium bicarbonate, which is slightly alkaline, may improve the health of the ocean. That's because elevated levels of CO2 in the atmosphere have gradually reduced the pH of the ocean, causing acidification. More acidic waters harm the growth and reproduction of marine life like corals and plankton and can create catastrophic collapses in the food chain.

Read more at Science Daily