Jan 20, 2024

Moon rocks with unique dust found

Our Earth's Moon is almost completely covered in dust. Unlike on Earth, this dust is not smoothed by wind and weather, but is sharp-edged and also electrostatically charged. This dust has been studied since the Apollo era at the end of the 1960s. Now, an international research team led by Dr. Ottaviano Rüsch from the University of Münster has for the first time discovered anomalous meter-sized rocks on the lunar surface that are covered in dust and presumably exhibit unique properties -- such as magnetic anomalies. The scientists' most important finding is that only very few boulders on the Moon have a layer of dust with very special reflective properties. For example, the dust on these newly discovered boulders reflects sunlight differently than on previously known rocks. These new findings help scientists to understand the processes that form and change the lunar crust. The results of the study have been published in the Journal of Geophysical Research -- Planets.

It is known that there are magnetic anomalies on the lunar surface, particularly near a region called Reiner Gamma.

However, the question of whether individual rocks on the lunar surface can be magnetic had never been investigated until now.

"Current knowledge of the Moon's magnetic properties is very limited, so these new rocks will shed light on the history of the Moon and its magnetic core," says Ottaviano Rüsch from the 'Institut für Planetologie', categorizing the discovery.

"For the first time, we have investigated the interactions of dust with rocks in the Reiner Gamma region -- more precisely, the variations in the reflective properties of these rocks. For example, we can deduce to what extent and in which direction the sunlight is reflected by these large rocks." The images were taken by NASA's Lunar Reconnaissance Orbiter spacecraft, which orbits the Moon.

The research team was originally interested in cracked rocks.

They first used artificial intelligence to search through around one million images for fractured rocks -- these images were also taken by the Lunar Reconnaissance Orbiter.

"Modern data processing methods allow us to gain completely new insights into global contexts -- at the same time, we keep finding unknown objects in this way, such as the anomalous rocks that we are investigating in this new study," says Valentin Bickel from the Center for Space and Habitability at the University of Bern.

The search algorithm identified around 130,000 interesting rocks, half of which were scrutinized by the scientists.

"We recognized a boulder with distinctive dark areas on just one image. This rock was very different from all the others, as it scatters less light back towards the sun than other rocks. We suspect that this is due to the particular dust structure, such as the density and grain size of the dust," Ottaviano Rüsch explains.

"Normally, lunar dust is very porous and reflects a lot of light back in the direction of illumination. However, when the dust is compacted, the overall brightness usually increases. This is not the case with the observed dust-covered rocks," adds Marcel Hess from TU Dortmund University.

This is a fascinating discovery -- however, the scientists are still in the early stages of understanding this dust and its interactions with the rock.

In the coming weeks and months, the scientists want to further investigate the processes that lead to the interactions between dust and rocks and to the formation of the special dust structure.

These processes include, for example, the lifting of the dust due to electrostatic charging or the interaction of the solar wind with local magnetic fields.

Read more at Science Daily

Researchers pump brakes on 'blue acceleration' harming the world ocean

Protecting the world's oceans against accelerating damage from human activities could be cheaper and take up less space than previously thought, new research has found.

The University of Queensland's Professor Anthony Richardson collaborated on the study, which looks to halt the rapid decline of marine biodiversity from expanding industrial activities in marine areas beyond national jurisdictions (ABNJ).

"This 'blue acceleration' as we call it, has seen a greater diversity of stakeholders interested in ABNJs, such as the high seas and the international seabed beyond exclusive economic zones," Professor Richardson said.

"This has led to an issue where current marine protection methods look at each sector separately -- such as fishing, shipping, and deep-sea mining industries -- all of which have their own suite of impacts on species, communities, and ecosystems."

In response, researchers assessed the design of different networks of marine protection areas (MPA) across the Indian Ocean that target rich biodiversity areas with minimal impact on profitable human activity.

"Essentially, we assessed the potential trade-offs associated with including multiple stakeholders in a cross-sectoral, as opposed to sector-specific, protected area network, for ABNJs in the Indian Ocean," Professor Richardson said.

"First, we created three sector-specific plans -- involving fishing, shipping, and mining separately -- to identify optimal locations for strict, no-take, MPAs.

"We then created a cross-sectoral no-take plan that minimises the opportunity cost to all stakeholders simultaneously, looking at the overall picture with each stakeholder in mind.

"After generating these plans, we compared the three sector-specific solutions, as well as their sum, to the cross sectoral solution."

Lead researcher from the Royal Belgian Institute of Natural Sciences, Léa Fourchault, said the cross-sectoral approach met the same conservation targets at much lower additional costs for each stakeholder than if all sector-specific plans were implemented without coordination.

"For example, the fishing sector might lose 20 per cent of its potential revenue under the cross-sectoral plan, but it would lose 54 per cent if all sector-specific plans were implemented simultaneously without coordination," Ms Fourchault said.

"This was consistent for the shipping and mining sectors, with the shipping sector now losing two per cent, instead of 26 per cent of its potential revenue, and the mining sector now losing one per cent instead of close to eight per cent.

"Our results also show that we can reduce the size of MPAs from 25 per cent of the spatial plan to eight per cent while meeting the same conservation objectives.

"This would still achieve 30 per cent coverage for important biodiversity features, including key life-cycle areas for marine megafauna, areas of biological and ecological interest, and areas important to deep-sea ecosystems, such as seamounts, vents, and plateaus."

Researchers believe the cross-sectoral approach can be a first step to implementing the conservation objectives of the recently signed United Nations High Seas Treaty.

"The code from our research is available online and can be used by scientists, conservationists and politicians alike -- and can be applied to any ocean on Earth," Ms Fourchault said.

Read more at Science Daily

Ancient 'chewing gum' reveals stone age diet

What did people eat on the west coast of Scandinavia 10,000 years ago? A new study of the DNA in ancient 'chewing gum' shows that deer, trout and hazelnuts were part of the diet. It also shows that one of the individuals had severe problems with her teeth.

Some 9,700 years ago, a group of people were camping on the west coast of Scandinavia, north of what is today Göteborg.

They had been fishing, hunting and collecting resources for food.

And some teenagers, both boys and girls, were chewing resin to produce glue, just after munching on trout and deer, as well as on hazelnuts.

Due to a bad case of periodontitis (severe gum infection that can lead to tooth loss and bone loss), one of the teenagers had problems eating the chewy deer-meat, as well as preparing the resin by chewing it.

We know this because an international research team has been working with the chewed resin from Huseby Klev for some time.

"There is a richness of DNA sequences in the chewed mastic from Huseby-Klev, and in it we find both the bacteria that we know are related to periodontitis, and DNA from plants and animals that they had chewed before," says Dr. Emrah Kırdök, from Mersin University Department of Biotechnology, who coordinated the metagenomic work on the Mesolithic chewing gum.

Emrah Kırdök started to analyse the material when he was a postdoc at the Department of Archaeology and Classical Studies at Stockholm University, but the study has grown considerably since then.

The site Huseby Klev on the island Orust was excavated 30 years ago.

Chewed resin was found together with remains of stone tools in a context dated to c. 9700 years ago.

The stone material also indicated a Mesolithic chronology. The chewed material from Huseby Klev has already generated a study on the human genetic data from three individuals, and the DNA in the material that was not of human origin has also been analysed and published.

Identifying the different species present in the kind of mix of DNA that was present in the Mesolithic chewing gum was challenging.

Dr. Andrés Aravena, from the Department of Molecular Biology and Genetics at Istanbul University, spent much time on the computer analysing the data together with Dr. Emrah Kırdök. "We had to apply several computationally heavy analytical tools to single out the different species and organisms. Not all the tools we needed were ready to be applied to ancient DNA, so much of our time was spent adjusting them before we could apply them," concludes Andrés Aravena.
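The article does not describe the team's actual pipeline, but metagenomic classification generally works by matching short sequencing reads against reference genomes. The fragment below is a deliberately minimal, illustrative k-mer matcher with invented reference sequences; it only conveys the idea, not the tools used in the study.

```python
# Minimal, purely illustrative k-mer matcher (not the study's pipeline):
# assign each short "read" to whichever reference shares the most k-mers with it.
def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Tiny made-up reference sequences standing in for real genomes.
references = {
    "red deer": "ATGGCGTACGTTAGCCGATCGGATACCGTTAGGCTAAC",
    "hazel":    "TTGACCGGTAACGGCTTAGCCAGGATCGTTACCGGATT",
    "trout":    "GGCATTACGGATCCGTTAACGGCTAGGCATTACCGGTA",
}
ref_kmers = {name: kmers(seq) for name, seq in references.items()}

def classify(read, k=8, min_hits=3):
    """Count shared k-mers with each reference; require a minimum overlap to call a match."""
    hits = {name: len(kmers(read, k) & km) for name, km in ref_kmers.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] >= min_hits else "unclassified"

print(classify("GCGTACGTTAGCCGATCGGAT"))  # shares k-mers with the toy "red deer" reference
```

Real ancient-DNA workflows add many complications this sketch ignores, such as damage patterns, short fragment lengths and enormous reference databases, which is where the heavy computation the researchers describe comes in.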

Metagenomics on ancient DNA is an expanding area, but so far only a few studies have examined this type of chewed material.

Read more at Science Daily

Jan 19, 2024

Space solar power project ends first in-space mission with successes and lessons

One year ago, Caltech's Space Solar Power Demonstrator (SSPD-1) launched into space to demonstrate and test three technological innovations that are among those necessary to make space solar power a reality.

The spaceborne testbed demonstrated the ability to beam power wirelessly in space; it measured the efficiency, durability, and function of a variety of different types of solar cells in space; and gave a real-world trial of the design of a lightweight deployable structure to deliver and hold the aforementioned solar cells and power transmitters.

Now, with SSPD-1's mission in space concluded, engineers on Earth are celebrating the testbed's successes and learning important lessons that will help chart the future of space solar power.

"Solar power beamed from space at commercial rates, lighting the globe, is still a future prospect. But this critical mission demonstrated that it should be an achievable future," says Caltech President Thomas F. Rosenbaum, the Sonja and William Davidow Presidential Chair and professor of physics.

SSPD-1 represents a major milestone in a project that has been underway for more than a decade, garnering international attention as a tangible and high-profile step forward for a technology being pursued by multiple nations. It was launched on January 3, 2023, aboard a Momentus Vigoride spacecraft as part of the Caltech Space Solar Power Project (SSPP), led by professors Harry Atwater, Ali Hajimiri, and Sergio Pellegrino. It consists of three main experiments, each testing a different technology:

  •     DOLCE (Deployable on-Orbit ultraLight Composite Experiment): a structure measuring 1.8 meters by 1.8 meters that demonstrates the novel architecture, packaging scheme, and deployment mechanisms of the scalable modular spacecraft that will eventually make up a kilometer-scale constellation to serve as a power station.
  •     ALBA: a collection of 32 different types of photovoltaic (PV) cells to enable an assessment of the types of cells that can withstand punishing space environments.
  •     MAPLE (Microwave Array for Power-transfer Low-orbit Experiment): an array of flexible, lightweight microwave-power transmitters based on custom integrated circuits with precise timing control to focus power selectively on two different receivers to demonstrate wireless power transmission at distance in space.


"It's not that we don't have solar panels in space already. Solar panels are used to power the International Space Station, for example," says Atwater, Otis Booth Leadership Chair of Division of Engineering and Applied Science; Howard Hughes Professor of Applied Physics and Materials Science; director of the Liquid Sunlight Alliance; and one of the principal investigators of SSPP. "But to launch and deploy large enough arrays to provide meaningful power to Earth, SSPP has to design and create solar power energy transfer systems that are ultra-lightweight, cheap, flexible, and deployable."

DOLCE: Deploying the Structure

Though all of the experiments aboard SSPD-1 were ultimately successful, not everything went according to plan. For the scientists and engineers leading this effort, however, that was exactly the point. The authentic test environment for SSPD-1 provided an opportunity to evaluate each of the components and the insights gleaned will have a profound impact on future space solar power array designs.

For example, during the deployment of DOLCE -- which was intended to be a three- to four-day process -- one of the wires connecting the diagonal booms to the corners of the structure, which allowed it to unfurl, became snagged. This stalled the deployment and damaged the connection between one of the booms and the structure.

With the clock ticking, the team used cameras on DOLCE as well as a full-scale working model of DOLCE in Pellegrino's lab to identify and try to solve the problem. They established that the damaged system would deploy better when warmed directly by the Sun and also by solar energy reflected off Earth.

Once the diagonal booms had been deployed and the structure was fully uncoiled, a new complication arose: Part of the structure became jammed under the deployment mechanism, something that had never been seen in laboratory testing. Using images from the DOLCE cameras, the team was able to reproduce this kind of jamming in the lab and developed a strategy to fix it. Ultimately, Pellegrino and his team completed the deployment through a motion of DOLCE's actuators that vibrated the whole structure and worked the jam free. Lessons from the experience, Pellegrino says, will inform the next deployment mechanism.

"The space test has demonstrated the robustness of the basic concept, which has allowed us to achieve a successful deployment in spite of two anomalies," says Pellegrino, Joyce and Kent Kresa Professor of Aerospace and Civil Engineering and co-director of SSPP. "The troubleshooting process has given us many new insights and has sharply focused us on the connection between our modular structure and the diagonal booms. We have developed new ways to counter the effects of self-weight in ultralight deployable structures."

ALBA: Harvesting Solar Energy

Meanwhile, the photovoltaic performance of three entirely new classes of ultralight research-grade solar cells, none of which had ever been tested in orbit before, was measured over the course of more than 240 days of operation by the ALBA team, led by Atwater. Some of the solar cells were custom-fabricated using facilities in the SSPP labs and the Kavli Nanoscience Institute (KNI) at Caltech, which gave the team a reliable and fast way to get small cutting-edge devices quickly ready for flight. In future work, the team plans to test large-area cells made using highly scalable inexpensive manufacturing methods that can dramatically reduce both the mass and the cost of these space solar cells.

Space solar cells presently available commercially are typically 100 times more expensive than the solar cells and modules widely deployed on Earth. This is because their manufacture employs an expensive step called epitaxial growth, in which crystalline films are grown in a specific orientation on a substrate. The SSPP solar cell team achieved low-cost nonepitaxial space cells by using cheap and scalable production processes like those used to make today's silicon solar cells. These processes employ high-performance compound semiconductor materials such as gallium arsenide that are typically used to make high-efficiency space cells today.

The team also tested perovskite cells, which have captured the attention of solar manufacturers because they are cheap and flexible, and luminescent solar concentrators with the potential to be deployed in large flexible polymer sheets.

Over ALBA's lifespan, the team collected enough data to be able to observe changes in the operation of individual cells in response to space weather events like solar flares and geomagnetic activity. They found, for example, tremendous variability in the performance of the perovskite cells, whereas the low-cost gallium arsenide cells consistently performed well overall.

"SSPP gave us a unique opportunity to take solar cells directly from the lab at Caltech into orbit, accelerating the in-space testing that would normally have taken years to be done. This kind of approach has dramatically shortened the innovation-cycle time for space solar technology," says Atwater.

MAPLE: Wireless Power Transfer in Space

Finally, as announced in June, MAPLE demonstrated its ability to transmit power wirelessly in space and to direct a beam to Earth -- a first in the field. MAPLE experiments continued for eight months after the initial demonstrations, and in this subsequent work, the team pushed MAPLE to its limits to expose and understand its potential weaknesses so that lessons learned could be applied to future design.
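The release does not describe MAPLE's circuitry, but focusing power with an array of transmitters rests on the standard phased-array principle: each element is timed so that its waves add constructively in the chosen direction. The sketch below applies the textbook phase formula with assumed values for frequency, element spacing and steering angle; none of these numbers come from the mission.

```python
# Hedged illustration of how a phased array steers a beam by timing control.
# Textbook phased-array math with assumed parameters, not MAPLE's flight design.
import math

freq_hz = 10e9                     # assumed microwave frequency (illustrative)
c = 3.0e8                          # speed of light, m/s
wavelength = c / freq_hz
spacing = wavelength / 2           # typical half-wavelength element spacing
steer_deg = 15                     # desired beam direction off boresight

n_elements = 8
for n in range(n_elements):
    # Each element is phase-shifted so its wave arrives in step along the steered direction.
    phase_rad = -2 * math.pi * spacing * n * math.sin(math.radians(steer_deg)) / wavelength
    delay_ps = (phase_rad / (2 * math.pi * freq_hz)) * 1e12   # negative = fires earlier than element 0
    print(f"element {n}: phase {math.degrees(phase_rad):8.1f} deg, delay {delay_ps:7.2f} ps")
```

The same timing adjustments, applied dynamically, let such an array move its focus between receivers without any moving parts, which is the capability MAPLE demonstrated in orbit.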

The team compared the performance of the array early in the mission with its performance at the end of the mission, when MAPLE was intentionally stressed. A drop in the total transmitted power was observed. Back in the lab on Earth, the group reproduced the power drop, attributing it to the degradation of a few individual transmitting elements in the array as well as some complex electrical-thermal interactions in the system.

"These observations have already led to revisions in the design of various elements of MAPLE to maximize its performance over extended periods of time," says Hajimiri, Bren Professor of Electrical Engineering and Medical Engineering and co-director of SSPP. "Testing in space with SSPD-1 has given us more visibility into our blind spots and more confidence in our abilities."

SSPP: Moving Forward

SSPP began after philanthropist Donald Bren, chairman of Irvine Company and a life member of the Caltech community, first learned about the potential for space-based solar energy manufacturing as a young man in an article in Popular Science magazine. Intrigued by the potential for space solar power, Bren approached Caltech's then-president Jean-Lou Chameau in 2011 to discuss the creation of a space-based solar power research project. In the years to follow, Bren and his wife, Brigitte Bren, a Caltech trustee, agreed to make a series of donations (yielding a total commitment of over $100 million) through the Donald Bren Foundation to fund the project and to endow a number of Caltech professorships.

"The hard work and dedication of the brilliant scientists at Caltech have advanced our dream of providing the world with abundant, reliable, and affordable power for the benefit of all humankind," Donald Bren says.

In addition to the support received from the Brens, Northrop Grumman Corporation provided Caltech with $12.5 million between 2014 and 2017 through a sponsored research agreement that aided technology development and advanced the project's science.

With SSPD-1 winding down its mission, the testbed stopped communications with Earth on November 11. The Vigoride-5 vehicle that hosted SSPD-1 will remain in orbit to support continued testing and demonstration of the vehicle's Microwave Electrothermal Thruster engines that use distilled water as a propellant. It will ultimately deorbit and disintegrate in Earth's atmosphere.

Read more at Science Daily

Butterflies could lose spots as climate warms

Female Meadow Brown butterflies have fewer spots if they develop in warmer weather -- so climate change could make them less spotty, new research shows.

University of Exeter scientists found females that developed at 11°C had six spots on average, while those developing at 15°C had just three.
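Taken at face value, those two means imply a simple rate of change. The snippet below is only a linear interpolation between the reported averages, not the statistical model used in the study.

```python
# Naive linear interpolation between the two reported means (not the study's model):
# how many hindwing spots are "lost" per degree of pupal-stage warming?
t1, spots1 = 11.0, 6.0
t2, spots2 = 15.0, 3.0
slope = (spots2 - spots1) / (t2 - t1)
print(f"{slope:+.2f} spots per degree C")   # -0.75: roughly three fewer spots over a 4 C rise
```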

The findings challenge long-held scientific views about why these butterflies have varying numbers of spots.

"Meadow Browns always have large 'eyespots' on their forewings, probably for startling predators," said Professor Richard ffrench-Constant, from the Centre for Ecology and Conservation on Exeter's Penryn Campus in Cornwall.

"They also have smaller spots on their hindwings, probably useful for camouflage when the butterfly is at rest.

"Our findings show that fewer of these hindwing spots appear when females experience higher temperatures during their pupal stage (in a chrysalis before emerging as a butterfly).

"This suggests the butterflies adapt their camouflage based on the conditions. For example, with fewer spots they may be harder to spot on dry, brown grass that would be more common in hot weather.

"We did not observe such a strong effect in males, possibly because their spots are important for sexual selection (attracting females)."

Since the classic work of biologist EB Ford, eyespot variation in the Meadow Brown butterfly has been used as an example of "genetic polymorphism" (the co-existence of multiple genetic forms in a single population).

However, the new study shows the eyespot variation is caused by thermal plasticity (the ability to react to changing temperatures).

"This is a family story for me, as my father collected butterflies for EB Ford here in Cornwall," Professor ffrench-Constant said.

"In the new study, we looked at current Cornish populations -- collecting males and females from the same field every day throughout the flight season -- and historical collections from Eton and Buckingham."

The researchers predict that spotting will decrease year on year as our climate warms.

Professor ffrench-Constant added: "This is an unexpected consequence of climate change. We tend to think about species moving north, rather than changing appearance."

Read more at Science Daily

Despite intensive scientific analyses, this centaur head remains a mystery

At the National Museum in Copenhagen, Denmark, there is a marble head that was once part of the ancient Greek Parthenon temple on the Acropolis in Athens. The head originally belonged to a centaur figure and was part of a scene depicting the Greek mythological Lapiths' battle against the centaurs (mythical creatures that were half-horse, half-human).

For reasons that have yet to be explained, parts of the centaur head are coated with a thin brown film, as are several other marble fragments from the Parthenon.

The mysterious brown film was first examined by the British Museum in 1830.

Back then, attempts were made to determine if the color originated from ancient paint, but it was eventually concluded that it might be a result of a chemical reaction between the marble and the air, or that the marble contained iron particles that had migrated to the surface, coloring it brown.

Oxalic acid, algae and fungi

"There have been many attempts to explain the peculiar brown film. In 1851, German chemist, Justus von Liebig, performed the first actual scientific investigation and determined that the brown film contained oxalates -- salts of oxalic acid. This has been confirmed by later analyses, but the origin of the oxalates has remained a mystery," says Professor emeritus Kaare Lund Rasmussen, an expert in chemical analyses of historical and archaeological artifacts, Department of Physics, Chemistry and Pharmacy, University of Southern Denmark.

Along with University of Southern Denmark colleagues Frank Kjeldsen and Vladimir Gorshkov from the Department of Biochemistry and Molecular Biology, Bodil Bundgaard Rasmussen, former head of the Antiquities Collection at the National Museum of Denmark, Thomas Delbey from Cranfield University in England, and Ilaria Bonaduce from the University of Pisa, Italy, he has published a scientific article describing the results of their investigations into the brown-colored centaur head from the National Museum.

The article is published in Heritage Science, and you can find it here.

"We especially wanted to examine whether the brown film could have been formed by some biological organism, such as lichen, bacteria, algae, or fungi. This theory had been suggested before, but no specific organism had been identified. The same goes for the theory that it could be remnants of applied paint -- perhaps to protect or tone the marble surface," says Kaare Lund Rasmussen.

For their investigations, the research team was allowed to take five small samples from the back of the centaur head.

These samples underwent various analyses in SDU's laboratories, including protein analysis and so-called Laser Ablation Inductively Coupled Plasma Mass Spectrometry.

"We found no traces of biological matter in the brown layers -- only from our own fingerprints and perhaps a bird egg that broke on the marble in ancient times. This doesn't prove that there never was a biological substance, but it significantly reduces the probability, making the theory of a biological organism less probable now," says Kaare Lund Rasmussen.

Similarly, it is now less probable that the marble surface was painted or preserved, according to the researchers, who also specifically searched for traces of paint.

Ancient paints were typically based on natural products such as eggs, milk, and bones, and no traces of such ingredients were found in the brown stain.

The mystery remains

Through their investigations, the research team also discovered that the brown film consists of two separate layers.

These two layers are approximately equally thick, around 50 micrometers each, and they differ in terms of trace element composition.

However, both layers contain a mixture of the oxalate minerals weddellite and whewellite.

The fact that there are two distinct layers argues against the theory that they were created by the migration of material, such as iron particles, from the interior of the marble.

It also contradicts the theory that they resulted from a reaction with the air.

Air pollution is also unlikely for another reason: the centaur head has been indoors in Copenhagen since before modern industrialization began in the 18th century.

In fact, this makes the heads at the National Museum particularly valuable compared to the marble pieces on the Acropolis, of which some have only recently been brought indoors.

"As there are two different brown layers with different chemical compositions, it is likely that they have different origins. This could suggest that someone applied paint or a conservation treatment, but since we haven't found traces of such substances, the brown color remains a mystery," concludes Kaare Lund Rasmussen.

Read more at Science Daily

Mini-robots modeled on insects may be smallest, lightest, fastest ever developed

Two insect-like robots, a mini-bug and a water strider, developed at Washington State University, are the smallest, lightest and fastest fully functional micro-robots ever known to be created.

Such miniature robots could someday be used for work in areas such as artificial pollination, search and rescue, environmental monitoring, micro-fabrication or robotic-assisted surgery.

As reported in the proceedings of the IEEE Robotics and Automation Society's International Conference on Intelligent Robots and Systems, the mini-bug weighs in at eight milligrams while the water strider weighs 55 milligrams.

Both can move at about six millimeters a second.

"That is fast compared to other micro-robots at this scale although it still lags behind their biological relatives," said Conor Trygstad, a PhD student in the School of Mechanical and Materials Engineering and lead author on the work.

An ant typically weighs up to five milligrams and can move at almost a meter per second.

The key to the tiny robots is their tiny actuators that make the robots move.

Trygstad used a new fabrication technique to miniaturize the actuator down to less than a milligram, the smallest ever known to have been made.

"The actuators are the smallest and fastest ever developed for micro-robotics," said Néstor O. Pérez-Arancibia, Flaherty Associate Professor in Engineering at WSU's School of Mechanical and Materials Engineering who led the project.

The actuator uses a material called a shape memory alloy that is able to change shapes when it's heated.

It is called 'shape memory' because it remembers and then returns to its original shape.

Unlike a typical motor that would move a robot, these alloys don't have any moving parts or spinning components.

"They're very mechanically sound," said Trygstad. "The development of the very lightweight actuator opens up new realms in micro-robotics."

Shape memory alloys are not generally used for large-scale robotic movement because they are too slow.

In the case of the WSU robots, however, the actuators are made of two tiny shape memory alloy wires that are 1/1000 of an inch in diameter.

With a small amount of current, the wires can be heated up and cooled easily, allowing the robots to flap their fins or move their feet at up to 40 times per second.

In preliminary tests, the actuator was also able to lift more than 150 times its own weight.
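Putting the quoted figures together gives a rough sense of scale. The snippet below treats the "less than a milligram" actuator as exactly one milligram, so the result is only an order-of-magnitude check rather than a measured value.

```python
# Back-of-the-envelope check using figures quoted in the article
# (actuator mass is an assumed upper bound, so treat results as rough).
actuator_mass_mg = 1.0            # "less than a milligram" -> use 1 mg as an upper bound
lift_factor = 150                 # lifts more than 150x its own weight
wire_diameter_mm = 25.4 / 1000    # 1/1000 of an inch, converted to millimetres

lift_capacity_mg = actuator_mass_mg * lift_factor
print(f"wire diameter ~{wire_diameter_mm * 1000:.1f} micrometres")
print(f"lift capacity ~{lift_capacity_mg:.0f} mg, vs. 8 mg mini-bug and 55 mg water strider")
```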

Compared to other technologies used to make robots move, the SMA technology also requires only a very small amount of electricity or heat to make them move.

"The SMA system requires a lot less sophisticated systems to power them," said Trygstad.

Trygstad, an avid fly fisherman, has long observed water striders and would like to further study their movements.

While the WSU water strider robot does a flat flapping motion to move itself, the natural insect does a more efficient rowing motion with its legs, which is one of the reasons that the real thing can move much faster.

Read more at Science Daily

Jan 18, 2024

Astronomers detect oldest black hole ever observed

Researchers have discovered the oldest black hole ever observed, dating from the dawn of the universe, and found that it is 'eating' its host galaxy to death.

The international team, led by the University of Cambridge, used the NASA/ESA/CSA James Webb Space Telescope (JWST) to detect the black hole, which dates from 400 million years after the big bang, more than 13 billion years ago. The results, which lead author Professor Roberto Maiolino says are "a giant leap forward," are reported in the journal Nature.

That this surprisingly massive black hole -- a few million times the mass of our Sun -- even exists so early in the universe challenges our assumptions about how black holes form and grow. Astronomers believe that the supermassive black holes found at the centre of galaxies like the Milky Way grew to their current size over billions of years. But the size of this newly-discovered black hole suggests that they might form in other ways: they might be 'born big' or they can eat matter at a rate that's five times higher than had been thought possible.

According to standard models, supermassive black holes form from the remnants of dead stars, which collapse and may form a black hole about a hundred times the mass of the Sun. If it grew in an expected way, this newly-detected black hole would take about a billion years to grow to its observed size. However, the universe was not yet a billion years old when this black hole was detected.
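A back-of-the-envelope version of that growth argument follows from the standard exponential-growth formula for a black hole feeding at its radiation-limited (Eddington) rate. The e-folding time below is a textbook value and the final mass is an assumed "few million" solar masses, so the numbers are indicative only; the article's quoted figure of about a billion years also folds in assumptions, such as how continuously the black hole feeds.

```python
# Rough growth-time estimate for an Eddington-limited black hole.
# The e-folding ("Salpeter") time is a standard textbook value, assumed here,
# not a figure quoted in the article.
import math

seed_mass = 100            # solar masses, from stellar collapse (per the article)
final_mass = 2e6           # "a few million" solar masses (assumed value)
salpeter_time_myr = 45     # e-folding time at the Eddington limit, ~10% radiative efficiency

e_folds = math.log(final_mass / seed_mass)
t_eddington = e_folds * salpeter_time_myr
t_super = t_eddington / 5  # growing five times faster, as the article suggests is needed

print(f"{e_folds:.1f} e-folds of growth required")
print(f"~{t_eddington:.0f} Myr at the Eddington limit, ~{t_super:.0f} Myr at 5x that rate")
# Continuous accretion at the limit is optimistic; with realistic duty cycles the
# Eddington-limited estimate stretches toward the ~1 billion years quoted in the article,
# while only a few hundred million years were available before the black hole was observed.
```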

"It's very early in the universe to see a black hole this massive, so we've got to consider other ways they might form," said Maiolino, from Cambridge's Cavendish Laboratory and Kavli Institute of Cosmology. "Very early galaxies were extremely gas-rich, so they would have been like a buffet for black holes."

Like all black holes, this young black hole is devouring material from its host galaxy to fuel its growth. Yet, this ancient black hole is found to gobble matter much more vigorously than its siblings at later epochs.

The young host galaxy, called GN-z11, glows from such an energetic black hole at its centre. Black holes cannot be directly observed, but instead they are detected by the tell-tale glow of a swirling accretion disc, which forms near the edges of a black hole. The gas in the accretion disc becomes extremely hot and starts to glow and radiate energy in the ultraviolet range. This strong glow is how astronomers are able to detect black holes.

GN-z11 is a compact galaxy, about one hundred times smaller than the Milky Way, but the black hole is likely harming its development. When a black hole consumes too much gas, it pushes the gas away like an ultra-fast wind. This 'wind' could stop the process of star formation, slowly killing the galaxy, but it would also kill the black hole itself, as it would cut off the black hole's source of 'food'.

Maiolino says that the gigantic leap forward provided by JWST makes this the most exciting time in his career. "It's a new era: the giant leap in sensitivity, especially in the infrared, is like upgrading from Galileo's telescope to a modern telescope overnight," he said. "Before Webb came online, I thought maybe the universe isn't so interesting when you go beyond what we could see with the Hubble Space Telescope. But that hasn't been the case at all: the universe has been quite generous in what it's showing us, and this is just the beginning."

Maiolino says that the sensitivity of JWST means that even older black holes may be found in the coming months and years. Maiolino and his team are hoping to use future observations from JWST to try to find smaller 'seeds' of black holes, which may help them untangle the different ways that black holes might form: whether they start out large or they grow fast.

Read more at Science Daily

US air pollution rates on the decline but pockets of inequities remain

Over recent decades, air pollution emissions have decreased substantially; however, the magnitude of the change varies by demographics, according to a new study by Columbia University Mailman School of Public Health. The results indicate there are racial/ethnic and socioeconomic disparities in air pollution emissions reductions, particularly in the industry and energy generation sectors. The findings are published in the journal Nature Communications.

The research provides a national investigation of air pollution emission changes in the 40 years following the enactment of the Clean Air Act (CAA). Until now, studies have primarily focused on evaluating air pollution disparities at a single time point, focusing on pollutant concentrations instead of emissions. A focus on emissions, however, has more direct implications for regulations and policies. In this study, the researchers used county-level data to evaluate racial/ethnic and socioeconomic disparities in air pollution emissions changes in the contiguous U.S. from 1970 to 2010.

"The analyses provide insight on the socio-demographic characteristics of counties that have experienced disproportionate decreases in air pollution emissions over the last forty years," said Yanelli Nunez, PhD, the study's first author, who is a scientist in the Department of Environmental Health Sciences at Columbia Mailman School of Public Health and affiliated with PSE Healthy Energy. Additionally, by analyzing air pollution emissions, the researchers identified specific pollution source sectors that are potentially important contributors to air pollution exposure disparities.

Nunez and colleagues leveraged air pollution emissions data from the Global Burden of Disease Major Air Pollution Sources inventory to analyze air pollutant emissions from six pollution source sectors: industry (sulfur dioxide), energy (sulfur dioxide and nitrogen oxides), agriculture (ammonia), on-road transportation (nitrogen oxides), commercial (nitrogen oxides), and residential (particles of organic carbon).

On average, national U.S. air pollution emissions declined substantially from 1970 to 2010 across all source sectors the researchers considered, except for ammonia emissions from agriculture and organic carbon particle emissions from the residential sector, which the researchers attribute primarily to the use of solid biofuels for indoor heating. The most pronounced emission decreases were observed for sulfur dioxide from industrial and energy generation activities. Nitrogen oxide emissions from transportation, commercial activities, and energy generation decreased moderately.

Despite the overall downward trends for most pollutants, the researchers found that certain populations experienced relatively smaller reductions or even increases in air pollution emissions. For instance, an increase in a county's average Hispanic or American Indian population percentage was associated with a relative increase in sulfur dioxide, nitrogen oxides, and ammonia emissions from the industry, energy generation, and agriculture sectors, respectively. Additionally, an increase in the county median family income was linked with an increase in the magnitude of emissions reductions in every pollution source sector the researchers analyzed, except agriculture.
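The study's statistical models are not spelled out in this summary. The sketch below is a synthetic, purely illustrative county-level regression showing the kind of association being described (larger reductions with higher income, smaller reductions with a higher Hispanic population share); the data are invented and the fitted coefficients mean nothing beyond the toy example.

```python
# Toy sketch of a county-level association analysis (synthetic data, ordinary least
# squares); NOT the study's actual model or data.
import numpy as np

rng = np.random.default_rng(0)
n_counties = 500
pct_hispanic = rng.uniform(0, 60, n_counties)          # % of county population
median_income = rng.uniform(30, 120, n_counties)       # thousands of dollars

# Synthetic "change in SO2 emissions 1970-2010" (negative = reduction), built so that
# higher income -> larger reductions and higher % Hispanic -> smaller reductions.
delta_so2 = (-40 - 0.3 * median_income + 0.2 * pct_hispanic
             + rng.normal(0, 5, n_counties))

X = np.column_stack([np.ones(n_counties), pct_hispanic, median_income])
coef, *_ = np.linalg.lstsq(X, delta_so2, rcond=None)
print(f"intercept {coef[0]:.1f}, per % Hispanic {coef[1]:+.2f}, per $1k income {coef[2]:+.2f}")
```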

"Air pollution emissions do not perfectly capture population air pollution exposure, and we also know that neighborhood-level air pollution inequities are common, which we were not able to analyze in this study given the data at hand," noted Marianthi-Anna Kioumourtzoglou, ScD, associate professor of environmental health sciences at Columbia Mailman School, and senior author. "In this study, we provide information about potential racial/ethnic and socioeconomic inequalities in air pollution reductions nationwide from major air pollution sources, which can inform regulators and complement local-level analysis."

"Policies specifically targeting reductions in overburdened populations could support more just reductions in air pollution and reduce disparities in air pollution exposure," observed Dr. Nunez. "This is an important lesson gained from 53 years of Clean Air Act implementation, which is particularly relevant as we develop policies to transition to renewable energy sources, which will have a collateral impact on air quality and, as a result, on public health."

Read more at Science Daily

Woolly mammoth movements tied to earliest Alaska hunting camps

Researchers have linked the travels of a 14,000-year-old woolly mammoth with the oldest known human settlements in Alaska, providing clues about the relationship between the iconic species and some of the earliest people to travel across the Bering Land Bridge.

Scientists made those connections by using isotope analysis to study the life of a female mammoth, named Élmayųujey'eh by the Healy Lake Village Council.

A tusk from Elma was discovered at the Swan Point archaeological site in Interior Alaska.

Samples from the tusk revealed details about Elma and the roughly 1,000-kilometer journey she took through Alaska and northwestern Canada during her lifetime.

Isotopic data, along with DNA from other mammoths at the site and archaeological evidence, indicates that early Alaskans likely structured their settlements to overlap with areas where mammoths congregated.

Those findings, highlighted in the new issue of the journal Science Advances, provide evidence that mammoths and early hunter-gatherers shared habitat in the region.

The long-term predictable presence of woolly mammoths would have attracted humans to the area.

"She wandered around the densest region of archaeological sites in Alaska," said Audrey Rowe, a University of Alaska Fairbanks Ph.D. student and lead author of the paper.

"It looks like these early people were establishing hunting camps in areas that were frequented by mammoths."

The mammoth tusk was excavated and identified in 2009 by Charles Holmes, affiliate research professor of anthropology at UAF, and François Lanoë, research associate in archaeology at the University of Alaska Museum of the North.

They found Elma's tusk and the remains of two related juvenile mammoths, along with evidence of campfires, the use of stone tools, and butchered remains of other game.

All of this "indicates a pattern consistent with human hunting of mammoths," said Ben Potter, an archaeologist and professor of anthropology at UAF.

Researchers at UAF's Alaska Stable Isotope Facility then analyzed thousands of samples from Elma's tusk to recreate her life and travels.

Isotopes provide chemical markers of an animal's diet and location.

The markers are then recorded in the bones and tissues of animals and remain even after they die.

Mammoth tusks are well-suited to isotopic study because they grew throughout the ancient animals' lives, with clearly visible layers appearing when split lengthwise.

Those growth bands give researchers a way to collect a chronological record of a mammoth's life by studying isotopes in samples along the tusk.
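In outline, the bookkeeping amounts to converting position along the tusk into age and reading an isotope value at each position. The sketch below does this with an invented growth rate and invented strontium isotope ratios, purely to illustrate the idea; the real study sampled thousands of points along the split tusk.

```python
# Illustrative only: turning measurements along a tusk into a life timeline.
# Growth rate and isotope values are invented for demonstration.
growth_cm_per_year = 5.0           # assumed average tusk growth rate
samples = [                         # (distance from tusk tip in cm, strontium isotope ratio)
    (0.0, 0.7092), (12.5, 0.7095), (25.0, 0.7103), (37.5, 0.7110), (50.0, 0.7104),
]

for distance_cm, sr_ratio in samples:
    age_years = distance_cm / growth_cm_per_year   # tip = earliest growth in this toy model
    print(f"age ~{age_years:4.1f} yr: 87Sr/86Sr = {sr_ratio:.4f}")
```

Because strontium isotope ratios vary with local geology, matching each dated value to regional maps is what lets researchers trace where an animal was at each stage of its life.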

Much of Elma's journey overlapped with that of a previously studied male mammoth who lived 3,000 years earlier, demonstrating long-term movement patterns by mammoths over several millennia.

In Elma's case, they also indicated she was a healthy 20-year-old female.

"She was a young adult in the prime of life. Her isotopes showed she was not malnourished and that she died in the same season as the seasonal hunting camp at Swan Point where her tusk was found," said senior author Matthew Wooller, who is director of the Alaska Stable Isotope Facility and a professor at UAF's College of Fisheries and Ocean Sciences.

The era in which Elma lived may have compounded the challenges posed by the relatively recent appearance of humans.

The grass- and shrub-dominated steppe landscape that had been common in Interior Alaska was beginning to shift toward more forested terrain.

"Climate change at the end of the ice age fragmented mammoths' preferred open habitat, potentially decreasing movement and making them more vulnerable to human predation," Potter said.

Read more at Science Daily

Surprisingly simple model explains how brain cells organize and connect

A new study by physicists and neuroscientists from the University of Chicago, Harvard and Yale describes how connectivity among neurons comes about through general principles of networking and self-organization, rather than the biological features of an individual organism.

The research, published on January 17, 2024 in Nature Physics, accurately describes neuronal connectivity in a variety of model organisms and could apply to non-biological networks like social interactions as well.

"When you're building simple models to explain biological data, you expect to get a good rough cut that fits some but not all scenarios," said Stephanie Palmer, PhD, Associate Professor of Physics and Organismal Biology and Anatomy at UChicago and senior author of the paper.

"You don't expect it to work as well when you dig into the minutiae, but when we did that here, it ended up explaining things in a way that was really satisfying."

Understanding how neurons connect

Neurons form an intricate web of connections through their synapses to communicate and interact with each other.

While the vast number of connections may seem random, networks of brain cells tend to be dominated by a small number of connections that are much stronger than most.

This "heavy-tailed" distribution of connections (so-called because of the way it looks when plotted on a graph) forms the backbone of circuitry that allows organisms to think, learn, communicate and move.

Despite the importance of these strong connections, scientists were unsure if this heavy-tailed pattern arises because of biological processes specific to different organisms, or due to basic principles of network organization.

To answer these questions, Palmer and Christopher Lynn, PhD, Assistant Professor of Physics at Yale University, and Caroline Holmes, PhD, a postdoctoral researcher at Harvard University, analyzed connectomes, or maps of brain cell connections.

The connectome data came from several different classic lab animals, including fruit flies, roundworms, marine worms and the mouse retina.

To understand how neurons form connections to one another, they developed a model based on Hebbian dynamics, a term coined by Canadian psychologist Donald Hebb in 1949 that essentially says, "neurons that fire together, wire together." This means the more two neurons activate together, the stronger their connection becomes.
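A minimal toy simulation conveys the flavour of such Hebbian dynamics; it is a sketch of the general principle, not the paper's actual model. Connections that happen to co-activate are strengthened, and occasional random pruning resets a connection, which together push the weight distribution toward a heavy tail.

```python
# Toy sketch of Hebbian "rich get richer" dynamics with random pruning
# (an illustration of the general principle, not the paper's model).
import random

random.seed(1)
n_pairs = 2000       # candidate connections between pairs of neurons
steps = 2000
weights = [1.0] * n_pairs

for _ in range(steps):
    for i in range(n_pairs):
        if random.random() < 0.05:      # this pair happened to fire together
            weights[i] *= 1.05          # Hebbian strengthening
        if random.random() < 0.002:     # random pruning/rewiring resets the connection
            weights[i] = 1.0

weights.sort(reverse=True)
top_share = sum(weights[: n_pairs // 100]) / sum(weights)
print(f"strongest 1% of connections carry {100 * top_share:.0f}% of total weight "
      f"(about 1% would be expected if all connections were equal)")
```

The random resets in this sketch play the role of the noise discussed further below: without them, the strongest connections simply keep compounding and swamp the rest of the network.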

Across the board, the researchers found these Hebbian dynamics produce "heavy-tailed" connection strengths just like they saw in the different organisms.

The results indicate that this kind of organization arises from general principles of networking, rather than something specific to the biology of fruit flies, mice, or worms.

The model also provided an unexpected explanation for another networking phenomenon called clustering, which describes the tendency of cells to link with other cells via connections they share.

A good example of clustering occurs in social situations. If one person introduces a friend to a third person, those two are more likely to become friends with each other than if they had met independently.

"These are mechanisms that everybody agrees are fundamentally going to happen in neuroscience," Holmes said.

"But we see here that if you treat the data carefully and quantitatively, it can give rise to all of these different effects in clustering and distributions, and then you see those things across all of these different organisms."

Accounting for randomness

As Palmer pointed out, though, biology doesn't always fit a neat and tidy explanation, and there is still plenty of randomness and noise involved in brain circuits.

Neurons sometimes disconnect and rewire with each other -- weak connections are pruned, and stronger connections can be formed elsewhere.

This randomness provides a check on the kind of Hebbian organization the researchers found in this data, without which strong connections would grow to dominate the network.

The researchers tweaked their model to account for randomness, which improved its accuracy.

"Without that noise aspect, the model would fail," Lynn said.

"It wouldn't produce anything that worked, which was surprising to us. It turns out you actually need to balance the Hebbian snowball effect with the randomness to get everything to look like real brains."

Since these rules arise from general networking principles, the team hopes they can extend this work beyond the brain.

Read more at Science Daily

Jan 17, 2024

NASA analysis confirms 2023 as warmest year on record

Earth's average surface temperature in 2023 was the warmest on record, according to an analysis by NASA. Global temperatures last year were around 2.1 degrees Fahrenheit (1.2 degrees Celsius) above the average for NASA's baseline period (1951-1980), scientists from NASA's Goddard Institute for Space Studies (GISS) in New York reported.

"NASA and NOAA's global temperature report confirms what billions of people around the world experienced last year; we are facing a climate crisis," said NASA Administrator Bill Nelson. "From extreme heat, to wildfires, to rising sea levels, we can see our Earth is changing. There's still more work to be done, but President Biden and communities across America are taking more action than ever to reduce climate risks and help communities become more resilient -- and NASA will continue to use our vantage point of space to bring critical climate data back down to Earth that is understandable and accessible for all people. NASA and the Biden-Harris Administration are working to protect our home planet and its people, for this generation -- and the next."

In 2023, hundreds of millions of people around the world experienced extreme heat, and each month from June through December set a global record for the respective month. July was the hottest month ever recorded. Overall, Earth was about 2.5 degrees Fahrenheit (or about 1.4 degrees Celsius) warmer in 2023 than the late 19th-century average, when modern record-keeping began.

"The exceptional warming that we're experiencing is not something we've seen before in human history," said Gavin Schmidt, director of GISS. "It's driven primarily by our fossil fuel emissions, and we're seeing the impacts in heat waves, intense rainfall, and coastal flooding."

Though scientists have conclusive evidence that the planet's long-term warming trend is driven by human activity, they still examine other phenomena that can affect yearly or multi-year changes in climate such as El Niño, aerosols and pollution, and volcanic eruptions.

Typically, the largest source of year-to-year variability is the El Niño-Southern Oscillation ocean climate pattern in the Pacific Ocean. The pattern has two phases -- El Niño and La Niña -- when sea surface temperatures along the equator switch between warmer, average, and cooler temperatures. From 2020-2022, the Pacific Ocean saw three consecutive La Niña events, which tend to cool global temperatures. In May 2023, the ocean transitioned from La Niña to El Niño, which often coincides with the hottest years on record.

However, the record temperatures in the second half of 2023 occurred before the peak of the current El Niño event. Scientists expect to see the biggest impacts of El Niño in February, March, and April.

Scientists have also investigated possible impacts from the January 2022 eruption of the Hunga Tonga-Hunga Ha'apai undersea volcano, which blasted water vapor and fine particles, or aerosols, into the stratosphere. A recent study found that the volcanic aerosols -- by reflecting sunlight away from Earth's surface -- led to an overall slight cooling of less than 0.2 degrees Fahrenheit (or about 0.1 degrees Celsius) in the Southern Hemisphere following the eruption.

"Even with occasional cooling factors like volcanoes or aerosols, we will continue to break records as long as greenhouse gas emissions keep going up," Schmidt said. "And, unfortunately, we just set a new record for greenhouse gas emissions again this past year."

"The record-setting year of 2023 underscores the significance of urgent and continued actions to address climate change," said NASA Deputy Administrator Pam Melroy. "Recent legislation has delivered the U.S. government's largest-ever climate investment, including billions to strengthen America's resilience to the increasing impacts of the climate crisis. As an agency focused on studying our changing climate, NASA's fleet of Earth observing satellites will continue to provide critical data of our home planet at scale to help all people make informed decisions."

Open Science in Action

NASA assembles its temperature record using surface air temperature data collected from tens of thousands of meteorological stations, as well as sea surface temperature data acquired by ship- and buoy-based instruments. This data is analyzed using methods that account for the varied spacing of temperature stations around the globe and for urban heating effects that could skew the calculations.
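One ingredient of such an analysis is weighting each measurement by the area it represents, since stations and grid cells are not spread evenly over the globe. The snippet below shows only that latitude-weighting step with invented zonal anomalies; it is not GISS's actual analysis code.

```python
# Minimal sketch of one step in a global temperature analysis: averaging gridded
# anomalies with latitude weighting, because bands near the poles cover less area.
# The anomaly values are invented for illustration.
import math

# (latitude in degrees, temperature anomaly in degrees C) for a toy set of zonal bands
zonal_anomalies = [(-75, 0.8), (-45, 0.9), (-15, 1.0), (15, 1.2), (45, 1.5), (75, 2.4)]

weighted_sum = sum(math.cos(math.radians(lat)) * anom for lat, anom in zonal_anomalies)
weight_total = sum(math.cos(math.radians(lat)) for lat, _ in zonal_anomalies)
print(f"area-weighted global anomaly: {weighted_sum / weight_total:.2f} C")
```

Note how the large high-latitude anomaly contributes less to the global mean than the tropical values, because high-latitude bands cover less of Earth's surface.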

Independent analyses by NOAA and the Hadley Centre (part of the United Kingdom Met Office) concluded the global surface temperatures for 2023 were the highest since modern record-keeping began. These scientists use much of the same temperature data in their analyses but use different methodologies. Although rankings can differ slightly between the records, they are in broad agreement and show the same ongoing long-term warming in recent decades.

Building on a half century of research, observations, and models, the Biden-Harris Administration including NASA and several federal partners recently launched the U.S. Greenhouse Gas Center to make critical climate data readily available to decisionmakers and citizens. The center supports collaboration across U.S. government agencies and the non-profit and private sectors to make air-, ground-, and space-borne data and resources available online.

Read more at Science Daily

Scaling up urban agriculture: Research team outlines roadmap

Urban agriculture has the potential to decentralize food supplies, provide environmental benefits like wildlife habitat, and mitigate environmental footprints, but researchers have identified knowledge gaps regarding both the benefits and risks of urban agriculture and the social processes of growing more food in urban areas.

In a new paper published in Nature Food, an interdisciplinary group of experts, including a researcher from the University of Illinois Urbana-Champaign, survey existing international studies on the benefits and downsides of urban agriculture and propose a framework for scaling it up.

Study co-author Chloe Wardropper, assistant professor in the Department of Natural Resources and Environmental Sciences in the College of Agricultural, Consumer and Environmental Sciences (ACES) at U. of I., says more than two-thirds of the global population is expected to live in urban areas by 2050, and the resilience of these areas may be compromised by their heavy reliance on imported food.

Increasing urban agriculture could reinforce the sustainability and resilience of urban regions in the future, but Wardropper says there are open questions about how best to scale up and what environmental, health, and equity concerns would need to be addressed.

"We propose a framework of three interconnected phases to better understand and shape urban agriculture growth in the future," Wardropper said.

"The first phase of growth would include expanding individuals' interest in, knowledge of, and access to resources to undertake agriculture in urban regions. This phase should be followed by institutionalization, or the transformation of rules and organizational support for urban agriculture. Third, economic and market growth would increasingly support and diversify urban food."

She notes that urban agriculture is not a panacea; urban-rural connections will remain important for global food security and consumption.

Read more at Science Daily

Amnesia caused by head injury reversed in early mouse study

A mouse study designed to shed light on memory loss in people who experience repeated head impacts, such as athletes, suggests the condition could potentially be reversed. The research in mice finds that amnesia and poor memory following head injury are due to inadequate reactivation of neurons involved in forming memories.

The study, conducted by researchers at Georgetown University Medical Center in collaboration with Trinity College Dublin, Ireland, is reported January 16, 2024, in the Journal of Neuroscience.

Importantly for diagnostic and treatment purposes, the researchers found that the memory loss attributed to head injury was not a permanent pathological event driven by a neurodegenerative disease.

Indeed, the researchers could reverse the amnesia to allow the mice to recall the lost memory, potentially allowing cognitive impairment caused by head impact to be clinically reversed.

The Georgetown investigators had previously found that the brain adapts to repeated head impacts by changing the way the synapses in the brain operate.

This can cause trouble in forming new memories and remembering existing memories.

In their new study, investigators were able to trigger mice to remember memories that had been forgotten due to head impacts.

"Our research gives us hope that we can design treatments to return the head-impact brain to its normal condition and recover cognitive function in humans that have poor memory caused by repeated head impacts," says the study's senior investigator, Mark Burns, PhD, a professor and Vice-Chair in Georgetown's Department of Neuroscience and director of the Laboratory for Brain Injury and Dementia.

In the new study, the scientists gave two groups of mice a new memory by training them in a test they had never seen before.

One group was exposed to a high frequency of mild head impacts for one week (similar to contact sport exposure in people) and one group served as controls that didn't receive the impacts.

The impacted mice were unable to recall the new memory a week later.

"Most research in this area has been in human brains with chronic traumatic encephalopathy (CTE), which is a degenerative brain disease found in people with a history of repetitive head impact," said Burns.

"By contrast, our goal was to understand how the brain changes in response to the low-level head impacts that many young football players regularly experience."

Researchers have found that, on average, college football players receive 21 head impacts per week, with defensive ends receiving 41.

The number of head impacts given to the mice in this study was designed to mimic a week of exposure for a college football player, and each individual impact was extraordinarily mild.

Using genetically modified mice allowed the researchers to see the neurons involved in learning new memories, and they found that these memory neurons (the "memory engram") were equally present in both the control mice and the experimental mice.

To understand the physiology underlying these memory changes, the study's first author, Daniel P. Chapman, Ph.D., said, "We are good at associating memories with places, and that's because being in a place, or seeing a photo of a place, causes a reactivation of our memory engrams. This is why we examined the engram neurons to look for the specific signature of an activated neuron. When the mice see the room where they first learned the memory, the control mice are able to activate their memory engram, but the head impact mice were not. This is what was causing the amnesia."

The researchers were able to reverse the amnesia to allow the mice to remember the lost memory using lasers to activate the engram cells.

"We used an invasive technique to reverse memory loss in our mice, and unfortunately this is not translatable to humans," Burns adds.

"We are currently studying a number of non-invasive techniques to try to communicate to the brain that it is no longer in danger, and to open a window of plasticity that can reset the brain to its former state."

Read more at Science Daily

Pacific kelp forests are far older than we thought

The unique underwater kelp forests that line the Pacific Coast support a varied ecosystem that was thought to have evolved along with the kelp over the past 14 million years.

But a new study shows that kelp flourished off the Northwest Coast more than 32 million years ago, long before the appearance of modern groups of marine mammals, sea urchins, birds and bivalves that today call the forests home.

The much greater age of these coastal kelp forests, which today are a rich ecosystem supporting otters, sea lions, seals, and many birds, fish and crustaceans, means that they likely were a main source of food for an ancient, now-extinct mammal called a desmostylian. The hippopotamus-sized grazer is thought to be related to today's sea cows, manatees and their terrestrial relatives, the elephants.

"People initially said, "We don't think the kelps were there before 14 million years ago because the organisms associated with the modern kelp forest were not there yet,'" said paleobotanist Cindy Looy, professor of integrative biology at the University of California, Berkeley. "Now, we show the kelps were there, it's just that all the organisms that you expect to be associated with them were not. Which is not that strange, because you first need the foundation for the whole system before everything else can show up."

Evidence for the greater antiquity of kelp forests, reported this week in the journal Proceedings of the National Academy of Sciences, comes from newly discovered fossils of the kelp's holdfast -- the root-like part of the kelp that anchors it to rocks or rock-bound organisms on the seafloor. The stipe, or stem, attaches to the holdfast and supports the blades, which typically float in the water, thanks to air bladders.

Looy's colleague, Steffen Kiel, dated these fossilized holdfasts, which still grasp clams and envelop barnacles and snails, to 32.1 million years ago, in the middle of the Cenozoic Era, which stretches from 66 million years ago to the present. The oldest previously known kelp fossil, consisting of one air bladder and a blade similar to that of today's bull kelp, dates from 14 million years ago and is in the collection of the University of California Museum of Paleontology (UCMP).

"Our holdfasts provide good evidence for kelp being the food source for an enigmatic group of marine mammals, the desmostylia," said Kiel, lead author of the paper and a senior curator at the Swedish Museum of Natural History in Stockholm. "This is the only order of Cenozoic mammals that actually went extinct during the Cenozoic. Kelp had long been suggested as a food source for these hippo-sized marine mammals, but actual evidence was lacking. Our holdfasts indicate that kelp is a likely candidate."

According to Kiel and Looy, who is the senior author of the paper and UCMP curator of paleobotany, these early kelp forests were likely not as complex as the forests that evolved by about 14 million years ago. Fossils from the late Cenozoic along the Pacific Coast indicate an abundance of bivalves -- clams, oysters and mussels -- birds and sea mammals, including sirenians related to manatees and extinct, bear-like predecessors of the sea otter, called Kolponomos. Such diversity is not found in the fossil record from 32 million years ago.

"Another implication is that the fossil record has, once again, shown that the evolution of life -- in this case, of kelp forests -- was more complex than estimated from biological data alone," Kiel said. "The fossil record shows that numerous animals appeared in, and disappeared from, kelp forests during the past 32 million years, and that the kelp forest ecosystems that we know today have only evolved during the past few million years."

The value of fossil hunting amateurs

The fossils were discovered by James Goedert, an amateur fossil collector who has worked with Kiel in the past. When Goedert broke open four stone nodules he found along the beach near Jansen Creek on the Olympic Peninsula in Washington, he saw what looked like the holdfasts of kelp and other macroalgae common along the coast today.

Kiel, who specializes in invertebrate evolution, agreed and subsequently dated the rocks based on the ratio of strontium isotopes. He also analyzed oxygen isotope levels in the bivalve shells to determine that the holdfasts lived in slightly warmer water than today, at the upper range of temperatures found in modern kelp forests.

Looy reached out to co-author Dula Parkinson, a staff scientist with the Advanced Light Source at Lawrence Berkeley National Laboratory, for help obtaining a 3D X-ray scan of one of the holdfast fossils using Synchrotron Radiation X-ray Tomographic Microscopy (SRXTM). When she reviewed the detailed X-ray slices through the fossil, she was amazed to see a barnacle, a snail, a mussel and tiny, single-celled foraminifera hidden within the holdfast, in addition to the bivalve on which it sat.

Looy noted, however, that the diversity of invertebrates found within the 32-million-year-old fossilized holdfast was not as high as would be found inside a kelp holdfast today.

"The holdfasts are definitely not as rich as they would be if you would go to a kelp ecosystem right now," Looy said. "The diversifying of organisms living in these ecosystems hadn't started yet."

Kiel and Looy plan further studies of the fossils to see what they reveal about the evolution of the kelp ecosystem in the North Pacific and how that relates to changes in the ocean-climate system.

Read more at Science Daily

Jan 16, 2024

NASA's Webb discovers dusty 'cat's tail' in Beta Pictoris System

Beta Pictoris, a young planetary system located just 63 light-years away, continues to intrigue scientists even after decades of in-depth study. It possesses the first dust disk imaged around another star — a disk of debris produced by collisions between asteroids, comets, and planetesimals. Observations from NASA’s Hubble Space Telescope revealed a second debris disk in this system, inclined with respect to the outer disk, which was seen first. Now, a team of astronomers using NASA’s James Webb Space Telescope to image the Beta Pictoris system (Beta Pic) has discovered a new, previously unseen structure.

The team, led by Isabel Rebollido of the Astrobiology Center in Spain, used Webb’s NIRCam (Near-Infrared Camera) and MIRI (Mid-Infrared Instrument) to investigate the composition of Beta Pic’s previously detected main and secondary debris disks. The results exceeded their expectations, revealing a sharply inclined branch of dust, shaped like a cat’s tail, that extends from the southwest portion of the secondary debris disk.

“Beta Pictoris is the debris disk that has it all: It has a really bright, close star that we can study very well, and a complex circumstellar environment with a multi-component disk, exocomets, and two imaged exoplanets,” said Rebollido, lead author of the study. “While there have been previous observations from the ground in this wavelength range, they did not have the sensitivity and the spatial resolution that we now have with Webb, so they didn’t detect this feature.”


A Star’s Portrait Improved with Webb

Even with Webb, peering at Beta Pic in the right wavelength range (in this case, the mid-infrared) was crucial to detect the cat's tail, as it only appeared in the MIRI data. Webb's mid-infrared data also revealed differences in temperature between Beta Pic's two disks, which likely reflect differences in composition.

“We didn’t expect Webb to reveal that there are two different types of material around Beta Pic, but MIRI clearly showed us that the material of the secondary disk and cat’s tail is hotter than the main disk,” said Christopher Stark, a co-author of the study at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “The dust that forms that disk and tail must be very dark, so we don’t easily see it at visible wavelengths — but in the mid-infrared, it’s glowing.”

To explain the hotter temperature, the team deduced that the dust may be highly porous “organic refractory material,” similar to the matter found on the surfaces of comets and asteroids in our solar system. For example, a preliminary analysis of material sampled from asteroid Bennu by NASA’s OSIRIS-REx mission found it to be very dark and carbon-rich, much like what MIRI detected at Beta Pic.


The Tail’s Puzzling Beginning Warrants Future Research

However, a major lingering question remains: What could explain the shape of the cat’s tail, a uniquely curved feature unlike what is seen in disks around other stars?

Rebollido and the team modeled various scenarios in an attempt to emulate the cat’s tail and unravel its origins. Though further research and testing is required, the team presents a strong hypothesis that the cat’s tail is the result of a dust production event that occurred a mere one hundred years ago.

“Something happens — like a collision — and a lot of dust is produced,” shared Marshall Perrin, a co-author of the study at the Space Telescope Science Institute in Baltimore, Maryland. “At first, the dust goes in the same orbital direction as its source, but then it also starts to spread out. The light from the star pushes the smallest, fluffiest dust particles away from the star faster, while the bigger grains do not move as much, creating a long tendril of dust.”
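
The size-sorting Perrin describes follows from the ratio of radiation pressure to stellar gravity, which falls off inversely with grain size: grains whose ratio exceeds roughly 0.5 become unbound and are swept outward quickly, while larger grains stay on near-Keplerian orbits. A minimal back-of-the-envelope sketch in Python using the standard Burns-Lamy-Soter scaling; the stellar luminosity and mass and the radiation-pressure efficiency below are illustrative assumptions, not values from the study:

# Radiation-pressure-to-gravity ratio (beta) for dust grains around a star,
# using the standard scaling beta ~ 0.577 * Q_pr * (L*/Lsun)/(M*/Msun) / (rho * s),
# with grain density rho in g/cm^3 and grain radius s in microns.
# L_star, M_star, and Q_pr are illustrative assumptions for Beta Pic.
def beta(grain_radius_um, density_g_cm3=1.0, L_star=8.7, M_star=1.75, Q_pr=1.0):
    return 0.577 * Q_pr * (L_star / M_star) / (density_g_cm3 * grain_radius_um)

for s in (0.1, 0.5, 1.0, 5.0, 50.0):  # grain radii in microns
    b = beta(s)
    fate = "unbound, pushed out quickly" if b > 0.5 else "stays in a bound orbit"
    print(f"{s:5.1f} micron grain: beta = {b:6.2f} -> {fate}")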

“The cat’s tail feature is highly unusual, and reproducing the curvature with a dynamical model was difficult,” explained Stark. “Our model requires dust that can be pushed out of the system extremely rapidly, which again suggests it’s made of organic refractory material.”

The team’s preferred model explains the sharp angle of the tail away from the disk as a simple optical illusion. Our perspective combined with the curved shape of the tail creates the observed angle of the tail, while in fact, the arc of material is only departing from the disk at a five-degree incline. Taking into consideration the tail’s brightness, the team estimates the amount of dust within the cat’s tail to be equivalent to a large main belt asteroid spread out across 10 billion miles.

A recent dust production event within Beta Pic’s debris disks could also explain a newly-seen asymmetric extension of the inclined inner disk, as shown in the MIRI data and seen only on the side opposite of the tail. Recent collisional dust production could also account for a feature previously spotted by the Atacama Large Millimeter/submillimeter Array in 2014: a clump of carbon monoxide (CO) located near the cat’s tail. Since the star’s radiation should break down CO within roughly one hundred years, this still-present concentration of gas could be lingering evidence of the same event.

“Our research suggests that Beta Pic may be even more active and chaotic than we had previously thought,” said Stark. “JWST continues to surprise us, even when looking at the most well-studied objects. We have a completely new window into these planetary systems.”

Read more at Science Daily

Ancient cities provide key datasets for urban planning, policy and predictions in the Anthropocene

A new study led by authors from the Max Planck Institute of Geoanthropology, published in the first edition of the new journal Nature Cities, shows how state-of-the-art methods and perspectives from archaeology, history, and palaeoecology are shedding new light on 5,500 years of urban life.

Cities play a key role in climate change and biodiversity and are one of the most recognizable features of the Anthropocene.

They also accelerate innovation and shape social networks, while perpetuating and intensifying inequalities.

Today over half of all humanity lives in cities, a threshold which will rise to nearly 70% by the mid-21st century.

Yet despite their importance for the Anthropocene, cities are not a recent phenomenon.

In a new study, an interdisciplinary team of authors from the Max Planck Institute of Geoanthropology argue that the history of urbanism provides an important resource for understanding where our contemporary urban challenges come from, as well as how we might begin to address them.

The paper highlights the ways in which new methodologies are changing our understanding of past cities and providing a reference for urban societies navigating the intensifying climatic extremes of the 21st century.

These methods range from remote sensing techniques like LiDAR, which are documenting cities in places where urban life was once considered impossible, to biomolecular approaches like isotope analysis, which can provide insights into how cities have shaped different organisms and influenced human mobility and connectivity through time.

Meanwhile, the study of sediment cores and historical data can show how cities have placed adaptive pressures on different landscapes and human societies -- as they still do today.

As understanding of humanity's influence on the Earth system grows, urbanism is increasingly considered one of the most impactful forms of land use.

In this new study, the authors also highlight how multidisciplinary approaches, including Earth system modelling, are revealing the impacts that ancient and historical forms of urbanism had on land use, and, critically, how they compare to the impacts of urban areas today.

Throughout, the authors emphasize that the past does not just provide anecdotal insights, but rather numerical datasets of things like road lengths, building types, population sizes, economic output, environmental impacts, and more.

With advances in computational archaeology, this opens up the possibility of quantifying similarities and differences in urban pathways across space and time, directly linking the past to the present.

By reviewing diverse examples from around the world, ranging from medieval Constantinople (now Istanbul) to 9th-century Baghdad, and from Great Zimbabwe to Greater Angkor in Cambodia, this new study highlights the potential of new methodological approaches to reveal historical legacies and predict trajectories of urbanism in the Anthropocene epoch.

Read more at Science Daily

New research sheds light on an old fossil solving an evolutionary mystery

A research paper published in Royal Society's Biology Letters on January 10 has revealed that picrodontids -- an extinct family of placental mammals that lived several million years after the extinction of the dinosaurs -- are not primates as previously believed.

The paper -- co-authored by Jordan Crowell, an Anthropology Ph.D. candidate at the CUNY Graduate Center; Stephen Chester, an Associate Professor of Anthropology at Brooklyn College and the Graduate Center; and John Wible, Curator of Mammals at the Carnegie Museum of Natural History -- is significant in that it settles a paleontological debate that has been brewing for over 100 years while helping to paint a clearer picture of primate evolution.

For the last 50 years, paleontologists have believed picrodontids, which were no larger than a mouse and likely ate foods such as fruit, nectar, and pollen, were primates, based on features of their teeth that they share with living primates.

But by using modern CT scan technology to analyze the only known preserved picrodontid skull in Brooklyn College's Mammalian Evolutionary Morphology Laboratory, Crowell, the lead author on the paper, worked with Chester, the paper's senior author, and Wible to determine they are not closely related to primates at all.

"While picrodontids share features of their teeth with living primates, the bones of the skull, specifically the bone that surrounds the ear, are unlike that of any living primate or close fossil relatives of primates," Crowell said.

"This suggests picrodontids and primates independently evolved similarities of their teeth likely for similar diets. This study also highlights the importance of revisiting old specimens with updated techniques to examine them."

Chester, who serves as Crowell's Ph.D. adviser, has both a professional and personal interest in this research.

It was Chester's colleague and "academic grandfather," Professor Emeritus Frederick Szalay from CUNY's Hunter College and the Graduate Center, who in 1968 first convincingly classified picrodontids as primates based on evidence from fossilized teeth.

Szalay studied the teeth of the only known picrodontid skull, Zanycteris paleocenus, for his research -- the same skull this team examined with the new technology that led to their discovery.

"The Zanycteris cranium was prepared and partially submerged in plaster around 1917, so researchers studying this important specimen at the American Museum of Natural History were not aware of how much cranial anatomy was hidden over the last 100 years" Chester said.

"Micro-CT scanning has revolutionized the field of paleontology and allows researchers to discover so much more about previously studied fossils housed in natural history museum collections."

Read more at Science Daily

'Smart glove' can boost hand mobility of stroke patients

This month, a group of stroke survivors in B.C. will test a new technology designed to aid their recovery, and ultimately restore use of their limbs and hands.

Participants will wear a new groundbreaking "smart glove" capable of tracking their hand and finger movements during rehabilitation exercises supervised by Dr. Janice Eng, a leading stroke rehabilitation specialist and professor of medicine at UBC.

The glove incorporates a sophisticated network of highly sensitive sensor yarns and pressure sensors that are woven into a comfortable stretchy fabric, enabling it to track, capture and wirelessly transmit even the smallest hand and finger movements.

"With this glove, we can monitor patients' hand and finger movements without the need for cameras. We can then analyze and fine-tune their exercise programs for the best possible results, even remotely," says Dr. Eng.

Precision in a wearable device


UBC electrical and computer engineering professor Dr. Peyman Servati, PhD student Arvin Tashakori and their team at their startup, Texavie, created the smart glove for collaboration on the stroke project.

Dr. Servati highlighted a number of breakthroughs, described in a paper published last week in Nature Machine Intelligence.

"This is the most accurate glove we know of that can track hand and finger movement and grasping force without requiring motion-capture cameras. Thanks to machine learning models we developed, the glove can accurately determine the angles of all finger joints and the wrist as they move. The technology is highly precise and fast, capable of detecting small stretches and pressures and predicting movement with at least 99-per-cent accuracy -- matching the performance of costly motion-capture cameras."

Unlike other products in the market, the glove is wireless and comfortable, and can be easily washed after removing the battery.

Dr. Servati and his team have developed advanced methods to manufacture the smart gloves and related apparel at a relatively low cost locally.

Augmented reality and robotics


Dr. Servati envisions a seamless transition of the glove into the consumer market with ongoing improvements, in collaboration with different industrial partners.

The team also sees potential applications in virtual reality and augmented reality, animation and robotics.

Read more at Science Daily

Jan 15, 2024

Study uncovers potential origins of life in ancient hot springs

Newcastle University research turns to ancient hot springs to explore the origins of life on Earth.

The research team, funded by the UK's Natural Environment Research Council, investigated how the first living systems emerged from inert geological materials on the Earth more than 3.5 billion years ago.

Scientists at Newcastle University found that mixing hydrogen, bicarbonate, and iron-rich magnetite under conditions mimicking relatively mild hydrothermal vents produces a spectrum of organic molecules, most notably fatty acids of up to 18 carbon atoms in length.

Published in the journal Communications Earth & Environment, their findings potentially reveal how some key molecules needed to produce life are made from inorganic chemicals, which is essential to understanding a key step in how life formed on the Earth billions of years ago.

Their results suggest a plausible origin for the organic molecules that formed ancient cell membranes and that were perhaps selectively favored by early biochemical processes on the primordial Earth.

Fatty acids in the early stages of life

Fatty acids are long organic molecules with regions that attract water and regions that repel it, so they spontaneously assemble into cell-like compartments in water; it is molecules of this kind that could have formed the first cell membranes.

Yet, despite their importance, it was uncertain where these fatty acids came from in the early stages of life.

One idea is that they formed at hydrothermal vents, where hot, hydrogen-rich fluids rising from beneath the seafloor mixed with seawater containing CO2.

In the laboratory, the group replicated crucial aspects of the chemical environment of early Earth's oceans and of the hot, alkaline water found around certain types of hydrothermal vents.

They found that when hot hydrogen-rich fluids were mixed with carbon dioxide-rich water in the presence of iron-based minerals that were present on the early Earth it created the types of molecules needed to form primitive cell membranes.

Lead author, Dr Graham Purvis, conducted the study at Newcastle University and is currently a Postdoctoral Research Associate at Durham University.

He said: "Central to life's inception are cellular compartments, crucial for isolating internal chemistry from the external environment. These compartments were instrumental in fostering life-sustaining reactions by concentrating chemicals and facilitating energy production, potentially serving as the cornerstone of life's earliest moments."

The results suggest that the convergence of hydrogen-rich fluids from alkaline hydrothermal vents with bicarbonate-rich waters on iron-based minerals could have precipitated the rudimentary membranes of early cells at the very beginning of life.

This process might have engendered a diversity of membrane types, some potentially serving as life's cradle when life first started.

Principal Investigator Dr Jon Telling, Reader in Biogeochemistry at the School of Natural and Environmental Sciences, added:

"We think that this research may provide the first step in how life originated on our planet. Research in our laboratory now continues on determining the second key step; how these organic molecules which are initially 'stuck' to the mineral surfaces can lift off to form spherical membrane-bounded cell-like compartments; the first potential 'protocells' that went on to form the first cellular life."

Read more at Science Daily

Chasing the light: Study finds new clues about warming in the Arctic

The Arctic, Earth's icy crown, is experiencing a climate crisis like no other. It's heating up at a furious pace -- four times faster than the rest of our planet. Researchers at Sandia National Laboratories are pulling back the curtain on the reduction of sunlight reflectivity, or albedo, which is supercharging the Arctic's warming.

The scientists are not armed with parkas and shovels. Instead, they have tapped into data from GPS satellite radiometers, capturing the sunlight bouncing off the Arctic.

This data dive could be the key to cracking the Arctic amplification code.

"The uneven warming in the Arctic is both a scientific curiosity and a pressing concern, leading us to question why this landscape has been changing so dramatically," said Erika Roesler, an atmospheric and climate scientist at Sandia.

Previous studies have suggested that sea-ice albedo feedbacks are likely driving Arctic amplification.

These albedo feedbacks can be broken down into two main areas.

First, there's an overall reduction in sea ice, leading to more exposure of the dark ocean.

This absorbs more sunlight than snow-covered ice and raises temperatures.

The second factor is the reflectivity of the remaining sea ice, or local albedo, which includes ponding water on ice due to melting.

Sandia researchers aimed to gain a better understanding of the reduction in reflectivity in the Arctic.

Senior scientist Phil Dreike collaborated with the U.S. Space Force to obtain permission for Sandia to analyze previously unpublished data from the radiometers on GPS satellites.

"New observational climate datasets are unique," Roesler said.

"To qualify as a climate dataset, observations must span a multitude of years. Small-scale science projects are typically not that long in duration, making this dataset particularly valuable."

Amy Kaczmarowski, an engineer at Sandia, conducted an analysis of the data spanning from 2014 to 2019.

"There have been numerous local measurements and theoretical discussions regarding the effects of water puddling on ice albedo," Kaczmarowski said.

"This study represents one of the first comprehensive examinations of year-to-year effects in the Arctic region. Sandia's data analysis revealed a 20% to 35% decrease in total reflectivity over the Arctic summer. According to microwave sea-ice extent measurements collected during the same period, one-third of this loss of reflectivity is attributed to fully melted ice."

The other two-thirds of the loss in reflectivity is likely caused by the weathering of the remaining sea ice.

"The key discovery here is just how much the weathered ice is reducing reflectivity," Kaczmarowski added.

Weathered ice refers to the remaining sea ice, which can be thinner and may contain melt ponds.
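
The attribution arithmetic behind those figures is simple to reproduce: take the observed drop in summer reflectivity and split it into the share explained by fully melted ice and the remainder assigned to weathered ice. A minimal sketch using the rounded numbers quoted above:

# Split the observed drop in Arctic summer reflectivity into the share explained
# by fully melted ice (from sea-ice extent data) and the remainder attributed to
# weathered ice, using the rounded figures quoted in the article.
melted_ice_share = 1.0 / 3.0
for total_loss in (0.20, 0.35):                  # observed range of reflectivity loss
    from_melted = total_loss * melted_ice_share
    from_weathered = total_loss - from_melted    # thinner ice, melt ponds, etc.
    print(f"total {total_loss:.0%} loss: melted ice {from_melted:.1%}, weathered ice {from_weathered:.1%}")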

The GPS satellites are expected to continue providing data through 2040.

The Sandia team hopes other researchers will consider their findings, recently published in the journal Nature Scientific Reports, and incorporate them into their models for Arctic amplification.

They plan to continue mining the GPS data and are enthusiastic about collaborating with other climate researchers for further analysis.

Read more at Science Daily

Candida evolution disclosed: New insights into fungal infections

Global fungal infections, which affect one billion people and cause 1.5 million deaths each year, are on the rise due to the increasing number of medical treatments that heighten vulnerability. Patients undergoing chemotherapy or immunosuppressive treatments after organ transplant often present compromised immune systems. Given the emergence of resistant strains, the limited variety of current antifungal drugs as well as their cost and side effects, the treatment of these infections is challenging and brings about an urgent need for more effective treatments.

In this context, a team from the Institute for Research in Biomedicine (IRB Barcelona) and the Barcelona Supercomputing Center -- Centro Nacional de Supercomputación (BSC-CNS), led by the ICREA researcher Dr. Toni Gabaldón, has identified hundreds of genes subject to recent, clinically-relevant selection in six species of the fungal pathogen Candida.

"This work highlights how thesepathogens adapted to humans and antifungal drugs and provides valuable knowledge that could lead to better treatments for Candida infections," explains Dr. Gabaldón, head of the Comparative Genomics lab at IRB Barcelona and the BSC.

More than 2,000 genomes from 6 different species

The study delves into the evolutionary landscape of Candida pathogens by analysing approximately 2,000 genomes from clinical samples of six major Candida species.

These genomes are stored in public databases. The researchers compared these genomes to a reference, creating a comprehensive catalogue of genetic variants.

Building on previous work addressing drug-resistant strains, the researchers conducted a Genome-Wide Association Study (GWAS) to identify genetic variants linked to antifungal drug resistance in clinical isolates.

This approach provided insights into both known and novel mechanisms of resistance towards seven antifungal drugs in three Candida species.
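
In its simplest form, a GWAS of this kind tests each genetic variant for a statistical association with the resistance phenotype across isolates. The sketch below runs a Fisher's exact test on one made-up variant; it illustrates the general idea only, not the study's actual pipeline, which must also correct for multiple testing and population structure.

from scipy.stats import fisher_exact

# Minimal sketch of one GWAS-style association test: is the presence of a
# variant associated with antifungal resistance across clinical isolates?
# The counts are invented for illustration; a real analysis repeats a test
# like this for every variant in the catalogue.
#                   resistant  susceptible
variant_present   = [       30,           5]
variant_absent    = [       10,          55]
odds_ratio, p_value = fisher_exact([variant_present, variant_absent])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")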

"Additionally, a concerning finding has arisen from the study: the potential spread of resistance through mating between susceptible and resistant strains, contributing to the prevalence of drug-resistant Candida pathogens," explains Dr. Miquel Àngel Schikora-Tamarit, a postdoctoral researcher in the same lab and first author of the study.

In addition, by focusing on variants acquired recently among clinical strains, the researchers detected shared and species-specific genetic signatures of recent selection, which indicate what adaptations might be needed to thrive and spread in human-related environments.

Beyond the novel insights into the adaptation of Candida, the study provides a valuable resource, namely a comprehensive catalogue of variants, selection signatures, and drivers of drug resistance.

This knowledge not only contributes to our understanding of these infections but also lays the groundwork for future experiments and potential advancements in the development of more effective treatments for Candida infections.

Read more at Science Daily

Study quantifies how aquifer depletion threatens crop yields

Three decades of data have informed a new Nebraska-led study that shows how the depletion of groundwater -- the same that many farmers rely on for irrigation -- can threaten food production amid drought and drier climes.

The study found that, due in part to the challenges of extracting groundwater, an aquifer's depletion can curb crop yields even when it appears saturated enough to continue meeting the demands of irrigation. Those agricultural losses escalate as an aquifer dwindles, the researchers reported, so that its depletion exerts a greater toll on corn and soybean yields when waning from, say, 100 feet thick to 50 than from 200 feet to 150.

That reality should encourage policymakers, resource managers and growers to reconsider the volume of crop-quenching groundwater they have at their disposal, the team said, especially in the face of fiercer, more frequent drought.

"As you draw down an aquifer to the point that it's quite thin, very small changes in the aquifer thickness will then have progressively larger and larger impacts on your crop production and resilience," said Nick Brozović, director of policy at the Daugherty Water for Food Global Institute. "And that's a thing that we don't predict well, because we tend to predict based on the past. So if we base what's going to happen on our past experience, we're always going to underpredict. We're always going to be surprised by how bad things get."

The team came to its conclusions after analyzing yields, weather and groundwater data from the High Plains Aquifer, which, as the largest in the United States, underlies portions of eight states -- including nearly all of Nebraska. Some areas of the aquifer, especially those beneath Texas and Kansas but also the Cornhusker State, have diminished considerably over the past several decades, pumped for the sake of irrigating land that would otherwise stand little chance of sustaining crops.

"In terms of things that let you address food security under extreme conditions -- in particular, drought and climate change -- we really can't do without irrigation," said Brozović, professor of agricultural economics at the University of Nebraska-Lincoln. "If we want to feed the world with high-quality, nutritious food and a stable food supply, we need to irrigate."

Brozović and Husker colleague Taro Mieno had already constructed plenty of models, and run plenty of simulations, on how the High Plains Aquifer responds to drought and dry conditions. But talking with farmers revealed that the models were not addressing their primary concern: well yield, or the amount of groundwater that growers can expect to continuously draw when trying to buffer their crops against drought.

"Everybody's interested in how aquifer depletion affects the resiliency of irrigated agriculture in the region," said Mieno, an associate professor of agricultural economics and lead author of the study, which was published in the journal Nature Water.

So the researchers consulted annual estimates of the High Plains Aquifer's thickness, which date back to 1935, along with county-level yields of corn and soybean from 1985 through 2016. Meteorological data, meanwhile, allowed the team to calculate seasonal water deficits, or the difference between the water gained from precipitation and the amount that crops lost via evaporation and transpiration.
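
The deficit calculation itself is simple arithmetic: crop water loss to evaporation and transpiration minus water gained from precipitation, accumulated over the growing season. A minimal sketch with invented monthly values (the numbers are illustrative, not the study's data):

# Seasonal water deficit = evapotranspiration - precipitation, summed over the
# growing season. Monthly values (mm) below are invented for illustration.
precipitation_mm      = [80, 70, 60, 45, 50]         # May through September rainfall
evapotranspiration_mm = [110, 150, 180, 170, 120]    # crop water loss to evaporation + transpiration

seasonal_deficit = sum(et - p for et, p in zip(evapotranspiration_mm, precipitation_mm))
print(f"seasonal water deficit: {seasonal_deficit} mm")   # 425 mm in this example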

When evaporation and transpiration exceed precipitation, farmers often turn to aquifers for help in making up the difference, the researchers knew. What they didn't know: Under what conditions, and to what extent, would an aquifer's depletion make pumping its water too difficult or expensive to undertake? And how much would the resulting decisions -- to reduce the amount of irrigation per acre, or to cease irrigating certain plots altogether -- influence corn and soybean yields?

Farmers fortunate enough to be growing corn and soybean above the most saturated swaths of the High Plains Aquifer -- roughly 220 to 700 feet thick -- continued to enjoy high irrigated yields even in times of extreme water deficits, the team found. By contrast, those depending on the least saturated areas -- between 30 and 100 feet -- saw their irrigated yields begin trending downward when water deficits reached just 400 millimeters, a common occurrence in Nebraska and other Midwestern states.

In years when the deficit approached or exceeded 700 millimeters, irrigated fields residing above the thickest groundwater yielded markedly more corn than those sitting above the thinnest. The results were starker during a 950-millimeter water deficit, which corresponds with extreme drought: Fields atop the least saturated stretches of aquifer yielded roughly 19.5 fewer bushels per acre.

"Because of the way that aquifers work, even if there's a lot of water there, as they deplete, you actually lose the ability to meet those crop water needs during the driest periods, because well yield tends to decline as you deplete an aquifer," Brozović said. "That has an economic consequence and a resilience consequence."

The study captured another telling link between the water residing underground and that applied at the surface. When atop groundwater roughly 330 feet thick, farmers irrigated 89% of their acres dedicated to growing corn. Where the aquifer was a mere 30 feet thick? Just 70% of those acres received irrigation. That's likely a result of lower well yield driving farmers to irrigate only some of their fields, Mieno said, or even give up on irrigation.

To better understand how that reduced irrigation was contributing to agricultural losses amid dry conditions, the researchers then factored in yields from both irrigated and non-irrigated fields, the latter of which rely on precipitation alone. That analysis pegged yields as even more sensitive to even smaller water deficits, suggesting that the decline in irrigated land was compounding the losses endured on still-irrigated plots.

And it illustrated the runaway threat posed when an aquifer's average thickness drops below certain thresholds. At a water deficit of 950 millimeters, reducing an aquifer's thickness from roughly 330 to 230 feet was estimated to initiate an average loss of about 2.5 corn bushels per acre, what the authors called a "negligible difference." The same absolute decrease, but from 230 to 130 feet, led to an estimated loss of 15 bushels per acre.
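
Put side by side, those two figures show how steeply the marginal cost of drawdown rises once the aquifer thins; a minimal arithmetic sketch using only the rounded values above:

# Yield loss per 100 feet of lost saturated thickness at a 950 mm water deficit,
# using the rounded figures quoted in the article.
loss_330_to_230_ft = 2.5     # corn bushels per acre ("negligible difference")
loss_230_to_130_ft = 15.0    # corn bushels per acre
print(f"the same 100-foot drawdown costs {loss_230_to_130_ft / loss_330_to_230_ft:.0f}x "
      "more yield once the aquifer is thinner")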

"As a consequence, your resilience to climate decreases rapidly," Mieno said. "So when you're operating on an aquifer that is very thick right now, you're relatively safe. But you want to manage it in a way that you don't go past that threshold, because from there, it's all downhill.

"And the importance of aquifers is going to increase as climate change progresses in the future, for sure. As it gets hotter, you typically need more water. That means you need more irrigation, and you're going to deplete the aquifer even faster, and things can get worse and worse."

Nebraska is lucky, Brozović said, in that it sits above such a massive reservoir and has established a governance system designed to conserve it at a local scale. But most regulations focus on mandating how much and when groundwater gets pumped, not safeguarding the aquifer's saturation level or the corresponding ability to extract water from it.

Brozović conceded that convincing policymakers to consider revising those parameters now, when much of the state still boasts sufficient groundwater, is "perhaps a tough sell." He's hopeful that the new study can at least help put that conversation on the table.

"Once you have a problem -- once well yields are already declining and the aquifer's really thin -- even if you put in policies, you still get a lot of the (negative) impacts," he said. "So the time to really put in meaningful policies is before things have gone off the cliff.

"First, you have to understand, you have to measure, you have to educate. You have to understand what you're preserving, and why. The more you can provide the quantitative evidence for why it's worth going to the trouble of doing all of this, and what's at stake," he said, "the easier that conversation is."

Read more at Science Daily