May 6, 2023

Do your homework to prep for the 2023 and 2024 eclipses

This year and next, Americans will have the extraordinary opportunity to witness two solar eclipses as both will be visible throughout the continental U.S. On Oct. 14, 2023, the moon will obscure all but a small annulus of the sun, producing a "ring of fire" eclipse. On April 8, 2024, the eclipse will be total in a band stretching from Texas to Maine.

Both occurrences promise to be remarkable events and teachable moments. But preparation is essential.

In The Physics Teacher, co-published by AIP Publishing and the American Association of Physics Teachers, astronomer Douglas Duncan of the University of Colorado provides a practical playbook to help teachers, students, and the general public prepare for the eclipse events. He also shares ways to fundraise for schools and organizations and guidance for safe eclipse-viewing. The Fiske Planetarium, which Duncan used to direct, is also producing short videos about the upcoming eclipses.

"According to NASA surveys, over 100 million Americans watched the 2017 eclipse in person or via media," said Duncan. "That was when a total eclipse crossed the U.S., with totality viewable in Wyoming, where Motel 6 rooms in the state were going for $800 a night if you didn't book far in advance. A total eclipse is worth traveling to. It is incredible, and people remember it their whole life."

A self-described eclipse-chaser who has himself witnessed 12 eclipses beginning in 1970, Duncan emphasizes the importance of eye protection. He cites two companies that produce inexpensive glasses for viewing the sun and advises event organizers to order them well in advance: Solar Eclipse Glasses and Rainbow Symphony.

Additionally, after observing spectators at previous eclipses using their phones to snap pictures, Duncan developed Solar Snap, a filter and app that enable safe and effective smartphone photography at such events.

With small groups, Duncan suggests using binoculars to project an image of the sun so that viewers can safely observe the spectacle transposed onto a sheet of paper.

Read more at Science Daily

Older people have better mental well-being than 30 years ago

This was observed in a study conducted at the Gerontology Research Center at the Faculty of Sport and Health Sciences, University of Jyväskylä (Finland). The study examined differences in depressive symptoms and life satisfaction between current 75- and 80-year-olds and the same-aged people who lived in the 1990s.

The results showed that 75- and 80-year-old men and women today experience fewer depressive symptoms than those who were 75 and 80 years old in the 1990s. The differences were partly explained by the better perceived health and higher education of those born later.

"In our previous comparisons, we found that older people today have significantly better physical and cognitive functioning at the same age compared to those born earlier," says Professor Taina Rantanen from the Faculty of Sport and Health Sciences. "These new results complement these positive findings in terms of mental well-being."

Today, 75- and 80-year-olds are more satisfied with their lives to date. However, there was no similar difference in satisfaction with their current lives. In fact, 80-year-old men who lived in the 1990s were even more satisfied with their current lives than 80-year-old men are today.

"These men born in 1910 had lived through difficult times, which may explain their satisfaction with their current lives in the 1990s when many things were better than before," says postdoctoral researcher Tiia Kekäläinen.

"Individuals adapt to their situation and living conditions. Both in the 1990s and today, the majority of older adults reported being satisfied with their current lives."

Read more at Science Daily

May 5, 2023

Hubble follows shadow play around planet-forming disk

The young star TW Hydrae is playing "shadow puppets" with scientists observing it with NASA's Hubble Space Telescope.

In 2017, astronomers reported discovering a shadow sweeping across the face of a vast pancake-shaped gas-and-dust disk surrounding the red dwarf star. The shadow isn't from a planet, but from an inner disk slightly inclined relative to the much larger outer disk -- causing it to cast a shadow. One explanation is that an unseen planet's gravity is pulling dust and gas into the planet's inclined orbit.

Now, a second shadow -- playing a game of peek-a-boo -- has emerged in observations, stored in Hubble's MAST archive, taken just a few years apart. This could be from yet another disk nestled inside the system. The two disks are likely evidence of a pair of planets under construction.

TW Hydrae is less than 10 million years old and resides about 200 light-years away. Some 4.6 billion years ago, our infant solar system may have resembled the TW Hydrae system. Because the TW Hydrae system is tilted nearly face-on to our view from Earth, it is an optimum target for getting a bull's-eye view of a planetary construction yard.

The second shadow was discovered in observations obtained on June 6, 2021, as part of a multi-year program designed to track the shadows in circumstellar disks. John Debes of AURA/STScI for the European Space Agency at the Space Telescope Science Institute in Baltimore, Maryland, compared the TW Hydrae disk to Hubble observations made several years ago.

"We found out that the shadow had done something completely different," said Debes, who is principal investigator and lead author of the study published in The Astrophysical Journal. "When I first looked at the data, I thought something had gone wrong with the observation because it wasn't what I was expecting. I was flummoxed at first, and all my collaborators were like: what is going on? We really had to scratch our heads and it took us a while to actually figure out an explanation."

The best solution the team came up with is that there are two misaligned disks casting shadows. In the earlier observation the disks were so closely aligned that their shadows could not be distinguished. Over time the disks have separated and their shadows split in two. "We've never really seen this before on a protoplanetary disk. It makes the system much more complex than we originally thought," he said.

The simplest explanation is that the misaligned disks are likely caused by the gravitational pull of two planets in slightly different orbital planes. Hubble is piecing together a holistic view of the architecture of the system.

The disks may be proxies for planets that are lapping each other as they whirl around the star. It's sort of like spinning two vinyl phonograph records at slightly different speeds. Sometimes labels will match up but then one gets ahead of the other.

"It does suggest that the two planets have to be fairly close to each other. If one was moving much faster than the other, this would have been noticed in earlier observations. It's like two race cars that are close to each other, but one slowly overtakes and laps the other," said Debes.

The suspected planets are located in a region roughly the distance of Jupiter from our Sun. And, the shadows complete one rotation around the star about every 15 years -- the orbital period that would be expected at that distance from the star.
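
The ~15-year figure can be sanity-checked with Kepler's third law, P² = a³/M (P in years, a in AU, M in solar masses). A minimal sketch, assuming a Jupiter-like orbital distance of 5.2 AU and taking TW Hydrae's mass as roughly 0.8 solar masses (the mass is an assumption, not stated above):

```python
import math

def orbital_period_years(a_au: float, star_mass_msun: float) -> float:
    """Kepler's third law: P^2 = a^3 / M, with P in years, a in AU, M in solar masses."""
    return math.sqrt(a_au ** 3 / star_mass_msun)

# Jupiter around the Sun, as a check on the formula:
print(round(orbital_period_years(5.2, 1.0), 1))  # → 11.9 years

# A Jupiter-like distance around a ~0.8-solar-mass star (assumed mass):
print(round(orbital_period_years(5.2, 0.8), 1))  # → 13.3 years
```

The result lands in the right ballpark of the quoted ~15-year shadow rotation, which is the point of the article's consistency argument.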

Also, these two inner disks are inclined about five to seven degrees relative to the plane of the outer disk. This is comparable to the range of orbital inclinations inside our solar system. "This is right in line with typical solar system style architecture," said Debes.

The outer disk that the shadows are falling on may extend as far as several times the radius of our solar system's Kuiper belt. This larger disk has a curious gap at twice Pluto's average distance from the Sun. This might be evidence for a third planet in the system.

Any inner planets would be difficult to detect because their light would be lost in the glare of the star. Also, dust in the system would dim their reflected light. ESA's Gaia space observatory may be able to measure a wobble in the star if Jupiter-mass planets are tugging on it, but this would take years given the long orbital periods.

Read more at Science Daily

New clues about the rise of Earth's continents

Continents are part of what makes Earth uniquely habitable for life among the planets of the solar system, yet surprisingly little is understood about what gave rise to these huge pieces of the planet's crust and their special properties. New research deepens that understanding by testing, and ultimately eliminating, one popular hypothesis about why continental crust is lower in iron and more oxidized than oceanic crust. The work comes from Elizabeth Cottrell, research geologist and curator of rocks at the Smithsonian's National Museum of Natural History, and lead study author Megan Holycross, formerly a Peter Buck Fellow and National Science Foundation Fellow at the museum and now an assistant professor at Cornell University. The iron-poor composition of continental crust is a major reason vast portions of the Earth's surface stand above sea level as dry land, making terrestrial life possible today.

The study, published today in Science, uses laboratory experiments to show that the iron-depleted, oxidized chemistry typical of Earth's continental crust likely did not come from crystallization of the mineral garnet, as a popular explanation proposed in 2018.

The building blocks of new continental crust issue forth from the depths of the Earth at what are known as continental arc volcanoes, which are found at subduction zones where an oceanic plate dives beneath a continental plate. In the garnet explanation for continental crust's iron-depleted and oxidized state, the crystallization of garnet in the magmas beneath these continental arc volcanoes removes non-oxidized (reduced or ferrous, as it is known among scientists) iron from the terrestrial plates, simultaneously depleting the molten magma of iron and leaving it more oxidized.

One of the key consequences of Earth's continental crust's low iron content relative to oceanic crust is that it makes the continents less dense and more buoyant, causing the continental plates to sit higher atop the planet's mantle than oceanic plates. This discrepancy in density and buoyancy is a major reason that the continents feature dry land while oceanic crusts are underwater, as well as why continental plates always come out on top when they meet oceanic plates at subduction zones.

The garnet explanation for the iron depletion and oxidation in continental arc magmas was compelling, but Cottrell said one aspect of it did not sit right with her.

"You need high pressures to make garnet stable, and you find this low-iron magma at places where crust isn't that thick and so the pressure isn't super high," she said.

In 2018, Cottrell and her colleagues set about finding a way to test whether the crystallization of garnet deep beneath these arc volcanoes is indeed essential to the process of creating continental crust as is understood. To accomplish this, Cottrell and Holycross had to find ways to replicate the intense heat and pressure of the Earth's crust in the lab, and then develop techniques sensitive enough to measure not just how much iron was present, but to differentiate whether that iron was oxidized.

To recreate the massive pressure and heat found beneath continental arc volcanoes, the team used what are called piston-cylinder presses in the museum's High-Pressure Laboratory and at Cornell. A hydraulic piston-cylinder press is about the size of a mini fridge and is mostly made of incredibly thick and strong steel and tungsten carbide. Force applied by a large hydraulic ram results in very high pressures on tiny rock samples, about a cubic millimeter in size. The assembly consists of electrical and thermal insulators surrounding the rock sample, as well as a cylindrical furnace. The combination of the piston-cylinder press and heating assembly allows for experiments that can attain the very high pressures and temperatures found under volcanoes.

In 13 different experiments, Cottrell and Holycross grew samples of garnet from molten rock inside the piston-cylinder press under pressures and temperatures designed to simulate conditions inside magma chambers deep in Earth's crust. The pressures used in the experiments ranged from 1.5 to 3 gigapascals -- that is roughly 15,000 to 30,000 Earth atmospheres of pressure or 8,000 times more pressure than inside a can of soda. Temperatures ranged from 950 to 1,230 degrees Celsius, which is hot enough to melt rock.
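
The unit conversions quoted there are easy to verify. A quick sketch (the ~3.7 atm soda-can pressure is an assumed figure, used only to back out the "8,000 times" comparison):

```python
PA_PER_ATM = 101_325  # pascals per standard atmosphere

def gpa_to_atm(gpa: float) -> float:
    """Convert a pressure in gigapascals to standard atmospheres."""
    return gpa * 1e9 / PA_PER_ATM

print(round(gpa_to_atm(1.5)), round(gpa_to_atm(3.0)))  # → 14804 29608

# "8,000 times more pressure than inside a can of soda" implies a can
# pressure near 3.7 atm (assumed): ~30,000 atm / 3.7 atm ≈ 8,000.
print(round(gpa_to_atm(3.0) / 3.7))  # → 8002
```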

Next, the team collected garnets from Smithsonian's National Rock Collection and from other researchers around the world. Crucially, this group of garnets had already been analyzed so their concentrations of oxidized and unoxidized iron were known.

Finally, the study authors took the materials from their experiments and those gathered from collections to the Advanced Photon Source at the U.S. Department of Energy's Argonne National Laboratory in Illinois. There the team used high-energy X-ray beams to conduct X-ray absorption spectroscopy, a technique that can tell scientists about the structure and composition of materials based on how they absorb X-rays. In this case, the researchers were looking into the concentrations of oxidized and unoxidized iron.

The samples with known ratios of oxidized and unoxidized iron provided a way to check and calibrate the team's X-ray absorption spectroscopy measurements and facilitated a comparison with the materials from their experiments.

The results of these tests revealed that the garnets had not incorporated enough unoxidized iron from the rock samples to account for the levels of iron-depletion and oxidation present in the magmas that are the building blocks of Earth's continental crust.

"These results make the garnet crystallization model an extremely unlikely explanation for why magmas from continental arc volcanoes are oxidized and iron depleted," Cottrell said. "It's more likely that conditions in Earth's mantle below continental crust are setting these oxidized conditions."

Like so many results in science, the findings lead to more questions: "What is doing the oxidizing or iron depleting?" Cottrell asked. "If it's not garnet crystallization in the crust and it's something about how the magmas arrive from the mantle, then what is happening in the mantle? How did their compositions get modified?"

Cottrell said that these questions are hard to answer but that now the leading theory is that oxidized sulfur could be oxidizing the iron, something a current Peter Buck Fellow is investigating under her mentorship at the museum.

This study is an example of the kind of research that museum scientists will tackle under the museum's new Our Unique Planet initiative, a public-private partnership, which supports research into some of the most enduring and significant questions about what makes Earth special. Other research will investigate the source of Earth's liquid oceans and how minerals may have served as templates for life.

Read more at Science Daily

New tusk-analysis techniques reveal surging testosterone in male woolly mammoths

Traces of sex hormones extracted from a woolly mammoth's tusk provide the first direct evidence that adult males experienced musth, a testosterone-driven episode of heightened aggression against rival males, according to a new University of Michigan-led study.

In male elephants, elevated testosterone during musth was previously recognized from blood and urine tests. Musth battles in extinct relatives of modern elephants have been inferred from skeletal injuries, broken tusk tips and other indirect lines of evidence.

But the new study, scheduled for online publication May 3 in the journal Nature, is the first to show that testosterone levels are recorded in the growth layers of mammoth and elephant tusks.

The U-M researchers and their international colleagues report annually recurring testosterone surges -- up to 10 times higher than baseline levels -- within a permafrost-preserved woolly mammoth tusk from Siberia. The adult male mammoth lived more than 33,000 years ago.

The testosterone surges seen in the mammoth tusk are consistent with musth-related testosterone peaks the researchers observed in an African bull elephant tusk, according to the study authors. The word "musth" comes from the Hindi and Urdu word for intoxicated.

"Temporal patterns of testosterone preserved in fossil tusks show that, like modern elephants, mature bull mammoths experienced musth," said study lead author Michael Cherney, a research affiliate at the U-M Museum of Paleontology and a research fellow at the U-M Medical School.

The study demonstrates that both modern and ancient tusks hold traces of testosterone and other steroid hormones. These chemical compounds are incorporated into dentin, the mineralized tissue that makes up the interior portion of all teeth (tusks are elongated upper incisor teeth).

"This study establishes dentin as a useful repository for some hormones and sets the stage for further advances in the developing field of paleoendocrinology," Cherney said. "In addition to broad applications in zoology and paleontology, tooth-hormone records could support medical, forensic and archaeological studies."

Hormones are signaling molecules that help regulate physiology and behavior. Testosterone is the main sex hormone in male vertebrates and is part of the steroid group of hormones. It circulates in the bloodstream and accumulates in various tissues.

Scientists have previously analyzed steroid hormones present in human and animal hair, nails, bones and teeth, in both modern and ancient contexts. But the significance and value of such hormone records have been the subject of ongoing scrutiny and debate.

The authors of the new Nature study say their findings should help change that by demonstrating that steroid records in teeth can provide meaningful biological information that sometimes persists for thousands of years.

"Tusks hold particular promise for reconstructing aspects of mammoth life history because they preserve a record of growth in layers of dentin that form throughout an individual's life," said study co-author Daniel Fisher, a curator at the U-M Museum of Paleontology and professor in the Department of Earth and Environmental Sciences.

"Because musth is associated with dramatically elevated testosterone in modern elephants, it provides a starting point for assessing the feasibility of using hormones preserved in tusk growth records to investigate temporal changes in endocrine physiology," said Fisher, who is also a professor in the U-M Department of Ecology and Evolutionary Biology.

For the study, researchers sampled tusks from one adult African bull elephant and two adult woolly mammoths -- a male and a female -- from Siberia. The samples were obtained in accordance with relevant laws and with appropriate permits.

The researchers used CT scans to identify annual growth increments within the tusks. A tiny drill bit, operated under a microscope and moved across a block of dentin using computer-actuated stepper motors, was used to grind contiguous half-millimeter-wide samples representing approximately monthly intervals of dentin growth.

The powder produced during this milling process was collected and chemically analyzed.

The study required new methods, developed in the laboratory of U-M endocrinologist and study co-author Rich Auchus, to extract steroids from tusk dentin for measurement with a mass spectrometer, an instrument that identifies chemical substances by sorting ions according to their mass and charge.

"We had developed steroid mass spectrometry methods for human blood and saliva samples, and we have used them extensively for clinical research studies. But never in a million years did I imagine that we would be using these techniques to explore 'paleoendocrinology,'" said Auchus, professor of internal medicine and pharmacology at the U-M Medical School.

"We did have to modify the method some, because those tusk powders were the dirtiest samples we ever analyzed. When Mike (Cherney) showed me the data from the elephant tusks, I was flabbergasted. Then we saw the same patterns in the mammoth -- wow!"

The African bull elephant is believed to have been 30 to 40 years old when it was killed by a hunter in Botswana in 1963. According to estimates based on growth layers in its tusk, the male woolly mammoth lived to be about 55 years old. Its right tusk was discovered by a diamond-mining company in Siberia in 2007. Radiocarbon dating revealed that the animal lived 33,291 to 38,866 years ago.

The tusk from the female woolly mammoth was discovered on Wrangel Island, which was connected to northeast Siberia during glacial periods of lower sea level but is now separated from it by the Arctic Ocean. Radiocarbon dating showed an age of 5,597 to 5,885 years before present. (Wrangel Island is the last known place where woolly mammoths survived, until around 4,000 years ago.)

In contrast to the male tusks, testosterone levels from the female woolly mammoth tusk showed little variation over time -- as expected -- and the average testosterone level was lower than the lowest values in the male mammoth's tusk records.

"With reliable results for some steroids from samples as small as 5 mg of dentin, these methods could be used to investigate records of organisms with smaller teeth, including humans and other hominids," the authors wrote. "Endocrine records in modern and ancient dentin provide a new approach to investigating reproductive ecology, life history, population dynamics, disease, and behavior in modern and prehistoric contexts."

Read more at Science Daily

Scientists recover an ancient woman's DNA from a 20,000-year-old pendant

Artefacts made of stone, bones or teeth provide important insights into the subsistence strategies of early humans, their behavior and culture. However, until now it has been difficult to attribute these artefacts to specific individuals, since burials and grave goods were very rare in the Palaeolithic. This has limited the possibilities of drawing conclusions about, for example, division of labor or the social roles of individuals during this period.

In order to directly link cultural objects to specific individuals and thus gain deeper insights into Paleolithic societies, an international, interdisciplinary research team, led by the Max Planck Institute for Evolutionary Anthropology in Leipzig, has developed a novel, non-destructive method for DNA isolation from bones and teeth. The scientists focused specifically on artefacts made from skeletal elements: although these are generally rarer than stone tools, they are more porous and therefore more likely to retain DNA present in skin cells, sweat and other body fluids.

A new DNA extraction method

Before the team could work with real artefacts, they first had to ensure that the precious objects would not be damaged. "The surface structure of Paleolithic bone and tooth artefacts provides important information about their production and use. Therefore, preserving the integrity of the artefacts, including microstructures on their surface, was a top priority," says Marie Soressi, an archaeologist from the University of Leiden who supervised the work together with Matthias Meyer, a Max Planck geneticist.

The team tested the influence of various chemicals on the surface structure of archaeological bone and tooth pieces and developed a non-destructive phosphate-based method for DNA extraction. "One could say we have created a washing machine for ancient artifacts within our clean laboratory," explains Elena Essel, the lead author of the study who developed the method. "By washing the artifacts at temperatures of up to 90°C, we are able to extract DNA from the wash waters, while keeping the artifacts intact."

Early setbacks

The team first applied the method to a set of artefacts from the French cave Quinçay excavated back in the 1970s to 1990s. Although in some cases it was possible to identify DNA from the animals from which the artefacts were made, the vast majority of the DNA obtained came from the people who had handled the artefacts during or after excavation. This made it difficult to identify ancient human DNA.

To overcome the problem of modern human contamination, the researchers then focused on material that had been freshly excavated using gloves and face masks and put into clean plastic bags with sediment still attached. Three tooth pendants from Bacho Kiro Cave in Bulgaria, home to the oldest securely dated modern humans in Europe, showed significantly lower levels of modern DNA contamination; however, no ancient human DNA could be identified in these samples.

A pendant from Denisova Cave

The breakthrough was finally enabled by Maxim Kozlikin and Michael Shunkov, archaeologists excavating the famous Denisova Cave in Russia. In 2019, unaware of the new method being developed in Leipzig, they cleanly excavated and set aside an Upper Paleolithic deer tooth pendant. From this, the geneticists in Leipzig isolated not only the DNA from the animal itself, a wapiti deer, but also large quantities of ancient human DNA. "The amount of human DNA we recovered from the pendant was extraordinary," says Elena Essel, "almost as if we had sampled a human tooth."

Based on the analysis of mitochondrial DNA, the small part of the genome that is passed down exclusively from mother to child, the researchers concluded that most of the DNA likely originated from a single human individual. Using the wapiti and human mitochondrial genomes, they were able to estimate the age of the pendant at 19,000 to 25,000 years, without sampling the precious object for C14 dating.

In addition to mitochondrial DNA, the researchers also recovered a substantial fraction of the nuclear genome of its human owner. Based on the number of X chromosomes, they determined that the pendant was made, used or worn by a woman. They also found that this woman was genetically closely related to contemporaneous ancient individuals from further east in Siberia, the so-called 'Ancient North Eurasians', for whom skeletal remains have previously been analyzed. "Forensic scientists will not be surprised that human DNA can be isolated from an object that has been handled a lot," says Matthias Meyer, "but it is amazing that this is still possible after 20,000 years."

Read more at Science Daily

May 4, 2023

Neutron star's X-rays reveal 'photon metamorphosis'

A "beautiful effect" predicted by quantum electrodynamics (QED) can explain the puzzling first observations of polarized X-rays emitted by a magnetar -- a neutron star featuring a powerful magnetic field, according to a Cornell astrophysicist.

The extremely dense and hot remnant of a massive star, boasting a magnetic field 100 trillion times stronger than Earth's, was expected to generate highly polarized X-rays, meaning that the radiation's electromagnetic field did not vibrate randomly but had a preferred direction.

But scientists were surprised when NASA's Imaging X-ray Polarimetry Explorer (IXPE) satellite last year detected that lower- and higher-energy X-rays were polarized differently, with electromagnetic fields oriented at right angles to each other.

The phenomenon can be naturally explained as a result of "photon metamorphosis" -- a transformation of X-ray photons that has been theorized but never directly observed, said Dong Lai, Ph.D. '94, the Benson Jay Simon '59, MBA '62, and Mary Ellen Simon, M.A. '63, Professor of Astrophysics in the College of Arts and Sciences.

"In this observation of radiation from a faraway celestial object, we see a beautiful effect that is a manifestation of intricate, fundamental physics," Lai said. "QED is one of the most successful physics theories, but it had not been tested in such strong magnetic field conditions."

Lai is the author of "IXPE Detection of Polarized X-rays from Magnetars and Photon Mode Conversion at QED Vacuum Resonance," published April 18 in Proceedings of the National Academy of Sciences.

The research builds on calculations Lai and Wynn Ho, Ph.D. '03, published 20 years ago, incorporating observations NASA reported last November of the magnetar 4U 0142+61, located 13,000 light-years away in the Cassiopeia constellation.

Quantum electrodynamics, which describes microscopic interactions between electrons and photons, predicts that as X-ray photons exit the neutron star's thin atmosphere of hot, magnetized gas, or plasma, they pass through a phase called vacuum resonance.

There, Lai said, photons, which have no charge, can temporarily convert into pairs of "virtual" electrons and positrons that are influenced by the magnetar's super-strong magnetic field even in vacuum, a process called "vacuum birefringence." Combined with a related process, plasma birefringence, conditions are created for the polarity of high-energy X-rays to swing 90 degrees relative to low-energy X-rays, according to Lai's analysis.

"You can think about the polarization as two flavors of photons," he said. "A photon suddenly converting from one flavor to another -- you don't usually see this kind of thing. But it's a natural consequence of the physics if you apply the theory under these extreme conditions."

The IXPE mission did not see the polarization swing in observations of another magnetar, called 1RXS J170849.0-400910, with an even stronger magnetic field. Lai said that's consistent with his calculations, which suggest vacuum resonance and photon metamorphosis would occur very deep inside such a neutron star.

Lai said his interpretation of IXPE's observations of the magnetar 4U 0142+61 helped constrain its magnetic field and rotation, and suggested that its atmosphere was likely composed of partially ionized heavy elements.

Read more at Science Daily

Astronomers spot a star swallowing a planet

As a star runs out of fuel, it will billow out to a million times its original size, engulfing any matter -- and planets -- in its wake. Scientists have observed hints of stars just before, and shortly after, the act of consuming entire planets, but they had never caught a star in the act until now.

In a study that will appear in Nature, scientists at MIT, Harvard University, Caltech, and elsewhere report that they have observed, for the first time, a star swallowing a planet.

The planetary demise appears to have taken place in our own galaxy, some 12,000 light-years away, near the eagle-like constellation Aquila. There, astronomers spotted an outburst from a star that became more than 100 times brighter over just 10 days, before quickly fading away. Curiously, this white-hot flash was followed by a colder, longer-lasting signal. This combination, the scientists deduced, could only have been produced by one event: a star engulfing a nearby planet.

"We were seeing the end-stage of the swallowing," says lead author Kishalay De, a postdoc in MIT's Kavli Institute for Astrophysics and Space Research.

What of the planet that perished? The scientists estimate that it was likely a hot, Jupiter-sized world that spiraled close, then was pulled into the dying star's atmosphere, and, finally, into its core.

A similar fate will befall the Earth, though not for another 5 billion years, when the sun is expected to burn out, and burn up the solar system's inner planets.

"We are seeing the future of the Earth," De says. "If some other civilization was observing us from 10,000 light-years away while the sun was engulfing the Earth, they would see the sun suddenly brighten as it ejects some material, then form dust around it, before settling back to what it was."

The study's MIT co-authors include Deepto Chakrabarty, Anna-Christina Eilers, Erin Kara, Robert Simcoe, Richard Teague, and Andrew Vanderburg, along with colleagues from Caltech, the Harvard and Smithsonian Center for Astrophysics, and multiple other institutions.

Hot and cold

The team discovered the outburst in May 2020. But it took another year for the astronomers to piece together an explanation for what the outburst could be.

The initial signal showed up in a search of data taken by the Zwicky Transient Facility (ZTF), run at Caltech's Palomar Observatory in California. The ZTF is a survey that scans the sky for stars that rapidly change in brightness, the pattern of which could be signatures of supernovae, gamma-ray bursts, and other stellar phenomena.

De was looking through ZTF data for signs of eruptions in stellar binaries -- systems in which two stars orbit each other, with one pulling mass from the other every so often and brightening briefly as a result.

"One night, I noticed a star that brightened by a factor of 100 over the course of a week, out of nowhere," De recalls. "It was unlike any stellar outburst I had seen in my life."

Hoping to nail down the source with more data, De looked to observations of the same star taken by the Keck Observatory in Hawaii. The Keck telescopes take spectroscopic measurements of starlight, which scientists can use to discern a star's chemical composition.

But what De found further befuddled him. While most binaries give off stellar material such as hydrogen and helium as one star erodes the other, the new source gave off neither. Instead, what De saw were signs of "peculiar molecules" that can only exist at very cold temperatures.

"These molecules are only seen in stars that are very cold," De says. "And when a star brightens, it usually becomes hotter. So, low temperatures and brightening stars do not go together."

"A happy coincidence"

It was then clear that the signal was not from a stellar binary. De decided to wait for more answers to emerge. About a year after his initial discovery, he and his colleagues analyzed observations of the same star, this time taken with an infrared camera at the Palomar Observatory. Within the infrared band, astronomers can see signals of colder material, in contrast to the white-hot, optical emissions that arise from binaries and other extreme stellar events.

"That infrared data made me fall off my chair," De says. "The source was insanely bright in the near-infrared."

It seemed that, after its initial hot flash, the star continued to throw out colder energy over the next year. That frigid material was likely gas from the star that shot into space and condensed into dust, cold enough to be detected at infrared wavelengths. This data suggested that the star could be merging with another star rather than brightening as a result of a supernova explosion.

But when the team further analyzed the data and paired it with measurements taken by NASA's infrared space telescope, NEOWISE, they came to a much more exciting realization. From the compiled data, they estimated the total amount of energy released by the star since its initial outburst, and found it to be surprisingly small -- about 1/1,000 the magnitude of any stellar merger observed in the past.

"That means that whatever merged with the star has to be 1,000 times smaller than any other star we've seen," De says. "And it's a happy coincidence that the mass of Jupiter is about 1/1,000 the mass of the sun. That's when we realized: This was a planet, crashing into its star."
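The mass-ratio arithmetic behind that deduction can be checked in a couple of lines. The figures below are standard reference values, not numbers from the study itself:

```python
# Standard reference masses in kilograms (not taken from the study).
M_SUN = 1.989e30
M_JUPITER = 1.898e27

# The team's inference: energy ~1/1,000 of a stellar merger implies an
# object ~1,000 times less massive than a star -- i.e. about Jupiter's mass.
ratio = M_JUPITER / M_SUN
print(f"Jupiter-to-Sun mass ratio: {ratio:.2e}")  # roughly 1/1,000
```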

With the pieces in place, the scientists were finally able to explain the initial outburst. The bright, hot flash was likely the final moments of a Jupiter-sized planet being pulled into a dying star's ballooning atmosphere. As the planet fell into the star's core, the outer layers of the star blasted away, settling out as cold dust over the next year.

Read more at Science Daily

'Devastating' fungal infections wiping out crops and threatening global food security, experts warn

Scientists have warned of the "devastating" impact that fungal disease in crops will have on global food supply unless agencies across the world come together to find new ways to combat infection.

Worldwide, growers lose between 10 and 23 per cent of their crops to fungal infection each year, despite widespread use of antifungals. An additional 10-20 per cent is lost post-harvest. In a commentary in Nature, academics predict those figures will worsen as global warming pushes fungal infections steadily polewards, meaning more countries are likely to see a higher prevalence of fungal infections damaging harvests. Growers have already reported wheat stem rust infections -- which normally occur in the tropics -- in Ireland and England. The experts also warn that tolerance to higher temperatures in fungi could increase the likelihood that opportunistic soil-dwelling pathogens hop hosts and infect animals or humans.

Professor Sarah Gurr, Chair in Food Security at the University of Exeter, co-authored the report. She said fungi had recently attracted attention through popular hit TV show The Last of Us, in which fungi take over human brains. She said: "While the storyline is science fiction, we are warning that we could see a global health catastrophe caused by the rapid global spread of fungal infections as they develop increasing resistance in a warming world. The imminent threat here is not about 'zombies,' but about global starvation."

Across the world, food security is expected to encounter unprecedented challenges as rising populations mean more demand. Across the five most important calorie crops of rice, wheat, maize (corn), soya beans and potatoes, infections cause losses which equate to enough food to provide some 600 million to 4 billion people with 2,000 calories every day for one year.
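As a rough sense of scale, the people-fed range quoted above can be converted into total calories lost per year. This is a back-of-envelope sketch using only the article's own figures:

```python
# Back-of-envelope conversion of the commentary's "people fed" figures
# into total calories lost to fungal infection per year.
CALORIES_PER_PERSON_PER_DAY = 2000
DAYS_PER_YEAR = 365

for people in (600e6, 4e9):  # the article's lower and upper estimates
    total = people * CALORIES_PER_PERSON_PER_DAY * DAYS_PER_YEAR
    print(f"{people:.0e} people fed for a year = {total:.2e} calories")
```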

Commentary co-author Eva Stukenbrock, professor and head of the Environmental Genomics group at Christian-Albrechts University of Kiel, Germany, and fellow of the Canadian Institute for Advanced Research (CIFAR), said: "As our global population is projected to soar, humanity is facing unprecedented challenges to food production. We're already seeing massive crop losses to fungal infection, which could sustain millions of people each year. This worrying trend may only worsen as a warming world makes fungal infections more prevalent in European crops, and as they continue to develop resistance to antifungals. This will be catastrophic for developing countries and will have a major impact in the Western world, too."

The commentary highlights a "perfect storm" which is causing fungal infections to spread rapidly. Among the factors is the fact that fungi are incredibly resilient, remaining viable in soil for up to 40 years, with airborne spores that can travel between continents. Added to this, they are extremely adaptable, with "phenomenal" genetic diversity between and among species. Modern farming practices entail vast areas of genetically uniform crops, which provide the ideal feeding and breeding grounds for such a prolific and fast-evolving group of organisms. They are also well equipped to evolve beyond traditional means to control their spread. The increasingly widespread use of antifungal treatments that target a single fungal cellular process means fungi can evolve resistance to these fungicides, so that they are no longer effective. This forces farmers to use ever-higher concentrations of fungicide in a bid to control infection, which can accelerate the pace of resistance developing.

However, there is some cause for hope. In 2020, a team at the University of Exeter discovered a new chemistry which could pave the way for a new type of antifungal that targets several different mechanisms, meaning it is much harder for fungi to develop resistance. The Exeter group found the antifungal to be useful against a range of fungal diseases -- Septoria tritici blotch on wheat, rice blast, corn smut -- and against the fungus which causes Panama disease of bananas.

Farming practices may also hold the key to change, after a study in Denmark showed promise by planting seed mixtures which carry a range of genes which are resistant to fungal infection. Technology may also prove crucial, with AI, citizen science and remote sensing tools such as drones allowing for early detection and control of outbreaks.

Overall, the authors argue that protecting the world's crops from fungal disease will require a far more unified approach, bringing together farmers, the agricultural industry, plant breeders, biologists, governments, policymakers and funders.

Read more at Science Daily

The future of data storage lies in DNA microcapsules

Storing data in DNA sounds like science fiction, yet it lies in the near future. Professor Tom de Greef expects the first DNA data center to be up and running within five to ten years. Data won't be stored as zeros and ones in a hard drive but in the base pairs that make up DNA: AT and CG. Such a data center would take the form of a lab, many times smaller than the ones today. De Greef can already picture it all. In one part of the building, new files will be encoded via DNA synthesis. Another part will contain large fields of capsules, each capsule packed with a file. A robotic arm will remove a capsule, read its contents and place it back.

We're talking about synthetic DNA. In the lab, bases are stuck together in a certain order to form synthetically produced strands of DNA. Files and photos that are currently stored in data centers can then be stored in DNA. For now, the technique is suitable only for archival storage. This is because the reading of stored data is very expensive, so you want to consult the DNA files as little as possible.
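To make the bases-instead-of-bits idea concrete, here is a deliberately naive sketch mapping bytes onto bases at two bits per base. Real DNA storage codecs, including those used in this line of research, add error correction and avoid troublesome sequences such as long runs of the same base, so treat this as illustration only:

```python
# Illustrative only: a naive 2-bits-per-base mapping between bytes and DNA.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: v for v, b in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn each byte into four bases, most-significant bit pair first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(dna: str) -> bytes:
    """Reverse the mapping: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

print(encode(b"Hi"))           # "CAGACGGC" -- each byte becomes four bases
print(decode(encode(b"Hi")))   # round-trips back to b'Hi'
```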

Large, energy-guzzling data centers made obsolete

Data storage in DNA offers many advantages. A DNA file can be stored much more compactly, for instance, and the lifespan of the data is also many times longer. But perhaps most importantly, this new technology renders large, energy-guzzling data centers obsolete. And this is desperately needed, warns De Greef, "because in three years, we will generate so much data worldwide that we won't be able to store half of it."

Together with PhD student Bas Bögels, Microsoft and a group of university partners, De Greef has developed a new technique to make the innovation of data storage with synthetic DNA scalable. The results have been published today in the journal Nature Nanotechnology. De Greef works at the Department of Biomedical Engineering and the Institute for Complex Molecular Systems (ICMS) at TU Eindhoven and serves as a visiting professor at Radboud University.

Scalable

The idea of using strands of DNA for data storage emerged in the 1980s but was far too difficult and expensive at the time. It became technically possible three decades later, when DNA synthesis started to take off. George Church, a geneticist at Harvard Medical School, elaborated on the idea in 2011. Since then, synthesis and the reading of data have become exponentially cheaper, finally bringing the technology to the market.

In recent years, De Greef and his group have looked mainly into reading the stored data. For the time being, this is the biggest problem facing this new technique. The PCR method currently used for this, called 'random access', is highly error-prone. You can therefore only read one file at a time and, in addition, the data quality deteriorates too much each time you read a file. Not exactly scalable.

Here's how it works: PCR (Polymerase Chain Reaction) creates millions of copies of the piece of DNA that you need by adding a primer with the desired DNA code. Coronavirus tests in the lab, for example, are based on this: even a minuscule amount of coronavirus material from your nose is detectable once it has been copied so many times. But if you want to read multiple files simultaneously, you need multiple primer pairs doing their work at the same time, which introduces many errors into the copying process.

Every capsule contains one file

This is where the capsules come into play. De Greef's group developed a microcapsule of proteins and a polymer and then anchored one file per capsule. De Greef: "These capsules have thermal properties that we can use to our advantage." Above 50 degrees Celsius, the capsules seal themselves, allowing the PCR process to take place separately in each capsule, leaving little room for error. De Greef calls this 'thermo-confined PCR'. In the lab, the team has so far managed to read 25 files simultaneously without significant error.

If you then lower the temperature again, the copies detach from the capsule and the anchored original remains, meaning that the quality of your original file does not deteriorate. De Greef: "We currently stand at a loss of 0.3 percent after three reads, compared to 35 percent with the existing method."

Searchable with fluorescence

And that's not all. De Greef has also made the data library even easier to search. Each file is given a fluorescent label and each capsule its own color. A device can then recognize the colors and separate them from one another. This brings us back to the imaginary robotic arm at the beginning of this story, which will neatly select the desired file from the pool of capsules in the future.

Read more at Science Daily

Novel ultrasound uses microbubbles to open blood-brain barrier to treat glioblastoma in humans

A major impediment to treating the deadly brain cancer glioblastoma has been that the most potent chemotherapy can't permeate the blood-brain barrier to reach the aggressive brain tumor.

But now Northwestern Medicine scientists report results of the first in-human clinical trial in which they used a novel, skull-implantable ultrasound device to open the blood-brain barrier and repeatedly permeate large, critical regions of the human brain to deliver chemotherapy that was injected intravenously.

The four-minute procedure to open the blood-brain barrier is performed with the patient awake, and patients go home after a few hours. The results show the treatment is safe and well tolerated, with some patients getting up to six cycles of treatment.

This is the first study to successfully quantify the effect of ultrasound-based blood-brain barrier opening on the concentrations of chemotherapy in the human brain. Opening the blood-brain barrier led to an approximately four- to six-fold increase in drug concentrations in the human brain, the results showed.

Scientists observed this increase with two different powerful chemotherapy drugs, paclitaxel and carboplatin. The drugs are not used to treat these patients because they do not cross the blood-brain barrier in normal circumstances.

In addition, this is the first study to describe how quickly the blood-brain barrier closes after sonication. Most of the blood-brain barrier restoration happens in the first 30 to 60 minutes after sonication, the scientists discovered. The findings will allow optimization of the sequence of drug delivery and ultrasound activation to maximize the drug penetration into the human brain, the authors said.

"This is potentially a huge advance for glioblastoma patients," said lead investigator Dr. Adam Sonabend, an associate professor of neurological surgery at Northwestern University Feinberg School of Medicine and a Northwestern Medicine neurosurgeon.

Temozolomide, the current chemotherapy used for glioblastoma, does cross the blood-brain barrier, but is a weak drug, Sonabend said.

The paper will be published May 2 in The Lancet Oncology.

The blood-brain barrier is a microscopic structure that shields the brain from the vast majority of circulating drugs. As a result, the repertoire of drugs that can be used to treat brain diseases is very limited. Patients with brain cancer cannot be treated with most drugs that are otherwise effective for cancer elsewhere in the body, as these do not cross the blood-brain barrier. Effective repurposing of drugs to treat brain pathology and cancer requires their delivery to the brain.

In the past, studies that injected paclitaxel directly into the brain of patients with these tumors observed promising signs of efficacy, but the direct injection was associated with toxicity such as brain irritation and meningitis, Sonabend said.

Blood-brain barrier recloses after an hour

The scientists discovered that the use of ultrasound and microbubble-based opening of the blood-brain barrier is transient, and most of the blood-brain barrier integrity is restored within one hour after this procedure in humans.

"There is a critical time window after sonication when the brain is permeable to drugs circulating in the bloodstream," Sonabend said.

Previous human studies showed that the blood-brain barrier is completely restored 24 hours after brain sonication, and based on some animal studies, the field assumed that the blood-brain barrier is open for the first six hours or so. The Northwestern study shows that this time window might be shorter.

In another first, the study reports that using a novel skull-implantable grid of nine ultrasound emitters designed by French biotech company Carthera opens the blood-brain barrier in a volume of brain that is nine times larger than the initial device (a small single-ultrasound emitter implant). This is important because to be effective, this approach requires coverage of a large region of the brain adjacent to the cavity that remains in the brain after removal of glioblastoma tumors.

Clinical trial for patients with recurrent glioblastoma

The findings of the study are the basis for an ongoing phase 2 clinical trial the scientists are conducting for patients with recurrent glioblastoma. The objective of the trial -- in which participants receive a combination of paclitaxel and carboplatin delivered to their brain with the ultrasound technique -- is to investigate whether this treatment prolongs survival of these patients. A combination of these two drugs is used in other cancers, which is the basis for combining them in the phase 2 trial.

In the phase 1 clinical trial reported in this paper, patients underwent surgery for resection of their tumors and implantation of the ultrasound device. They started treatment within a few weeks after the implantation.

Scientists escalated the dose of paclitaxel delivered every three weeks with the accompanying ultrasound-based blood-brain barrier opening. In subsets of patients, studies were performed during surgery to investigate the effect of this ultrasound device on drug concentrations. The blood-brain barrier was visualized and mapped in the operating room using a fluorescent dye called fluorescein and by MRI obtained after ultrasound therapy.

"While we have focused on brain cancer (for which there are approximately 30,000 gliomas in the U.S.), this opens the door to investigate novel drug-based treatments for millions of patients who suffer from various brain diseases," Sonabend said.

Read more at Science Daily

May 3, 2023

Astronomers find distant gas clouds with leftovers of the first stars

Using ESO's Very Large Telescope (VLT), researchers have found for the first time the fingerprints left by the explosion of the first stars in the Universe. They detected three distant gas clouds whose chemical composition matches what we expect from the first stellar explosions. These findings bring us one step closer to understanding the nature of the first stars that formed after the Big Bang.

"For the first time ever, we were able to identify the chemical traces of the explosions of the first stars in very distant gas clouds," says Andrea Saccardi, a PhD student at the Observatoire de Paris -- PSL, who led this study during his master's thesis at the University of Florence.

Researchers think that the first stars that formed in the Universe were very different from the ones we see today. When they appeared 13.5 billion years ago, they contained just hydrogen and helium, the simplest chemical elements in nature. These stars, thought to be tens or hundreds of times more massive than our Sun, quickly died in powerful explosions known as supernovae, enriching the surrounding gas with heavier elements for the first time. Later generations of stars were born out of that enriched gas, and in turn ejected heavier elements as they too died. But the very first stars are now long gone, so how can researchers learn more about them? "Primordial stars can be studied indirectly by detecting the chemical elements they dispersed in their environment after their death," says Stefania Salvadori, Associate Professor at the University of Florence and co-author of the study published today in the Astrophysical Journal.

Using data taken with ESO's VLT in Chile, the team found three very distant gas clouds, seen when the Universe was just 10-15% of its current age, and with a chemical fingerprint matching what we expect from the explosions of the first stars. Depending on the mass of these early stars and the energy of their explosions, these first supernovae released different chemical elements such as carbon, oxygen and magnesium, which are present in the outer layers of stars. But some of these explosions were not energetic enough to expel heavier elements like iron, which is found only in the cores of stars. To search for the telltale sign of these very first stars that exploded as low energy supernovae, the team therefore looked for distant gas clouds poor in iron but rich in the other elements. And they found just that: three faraway clouds in the early Universe with very little iron but plenty of carbon and other elements -- the fingerprint of the explosions of the very first stars.

This peculiar chemical composition has also been observed in many old stars in our own galaxy, which researchers consider to be second-generation stars that formed directly from the 'ashes' of the first ones. This new study has found such ashes in the early Universe, thus adding a missing piece to this puzzle. "Our discovery opens new avenues to indirectly study the nature of the first stars, fully complementing studies of stars in our galaxy," explains Salvadori.

To detect and study these distant gas clouds, the team used light beacons known as quasars -- very bright sources powered by supermassive black holes at the centres of faraway galaxies. As the light from a quasar travels through the Universe, it passes through gas clouds where different chemical elements leave an imprint on the light.

To find these chemical imprints, the team analysed data on several quasars observed with the X-shooter instrument on ESO's VLT. X-shooter splits light into an extremely wide range of wavelengths, or colours, which makes it a unique instrument with which to identify many different chemical elements in these distant clouds.

Read more at Science Daily

What would the Earth look like to an alien civilization located light years away?

A team of researchers from Mauritius and the University of Manchester has used crowd-sourced data to simulate radio leakage from mobile towers and predict what an alien civilization might detect from various nearby stars, including Barnard's star, six light years away from Earth. Ramiro Saide, currently an intern at the SETI Institute's Hat Creek Radio Observatory and an M.Phil. student at the University of Mauritius, generated models displaying the radio power that these civilizations would receive as the Earth rotates and the towers rise and set. Saide believes that unless an alien civilization is much more advanced than ours, it would have difficulty detecting the current levels of mobile tower radio leakage from Earth. However, the team suggests that some technical civilizations are likely to have much more sensitive receiving systems than we do, and that the detectability of our mobile systems will increase substantially as we move to much more powerful broadband systems.
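The basic physics behind the detectability question is inverse-square spreading of radio power over interstellar distances. The sketch below assumes a purely hypothetical aggregate transmitter power; the study's actual model is built from crowd-sourced, direction-dependent, per-tower data:

```python
import math

LY_IN_METRES = 9.461e15          # metres per light year
d = 6 * LY_IN_METRES             # distance to Barnard's star

# Hypothetical aggregate radiated power for all mobile towers combined;
# chosen only to illustrate the order of magnitude, not from the paper.
P_WATTS = 4e9

# Isotropic inverse-square spreading: flux = P / (4 * pi * d^2).
flux = P_WATTS / (4 * math.pi * d**2)
print(f"Received flux at 6 light years: {flux:.1e} W/m^2")
```

Even under this generous assumption, the received flux is vanishingly small, which is consistent with Saide's point that only a far more sensitive receiver than ours could detect it.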

Saide is also excited by the fact that his simulations show that the Earth's mobile radio signature includes a substantial contribution from developing countries, including Africa. According to team leader Professor Mike Garrett (University of Manchester, Jodrell Bank Centre for Astrophysics), "the results highlight Africa's success in bypassing the landline stage of development and moving directly into the digital age." Garrett is pleased with the results. "I've heard many colleagues suggest that the Earth has become increasingly radio quiet in recent years -- a claim that I always contested -- although it's true we have fewer powerful TV and radio transmitters today, the proliferation of mobile communication systems around the world is profound. While each system represents relatively low radio powers individually, the integrated spectrum of billions of these devices is substantial."

Dr. Nalini Heeralall-Issur, Saide's supervisor in Mauritius, thinks Saide might be right. "Every day we learn more about the characteristics of exoplanets via space missions like Kepler and TESS, with further insights from the JWST -- I believe that there's every chance advanced civilizations are out there, and some may be capable of observing the human-made radio leakage coming from planet Earth."

The team is eager to extend their research to include other contributors to the Earth's radio leakage signature. The next step is to include powerful civilian and military radars, new digital broadcast systems, Wi-Fi networks, individual mobile handsets and the swarm of satellite constellations now being launched into low Earth orbit, such as Elon Musk's Starlink system. According to Garrett, "Current estimates suggest we will have more than one hundred thousand satellites in low Earth orbit and beyond before the end of the decade. The Earth is already anomalously bright in the radio part of the spectrum; if the trend continues, we could become readily detectable by any advanced civilization with the right technology."

Read more at Science Daily

Researchers model 'link' between improved photosynthesis and increased yield

A team from the University of Illinois has modeled improving photosynthesis through enzyme modification and simulated soybean growth with realistic climate conditions, determining to what extent the improvements in photosynthesis could result in increased yields.

"There's a complex relationship between photosynthesis improvement and actual yield; having higher photosynthesis doesn't necessarily mean you have higher yield. The yield return is highly impacted by seasonal climate conditions," said Yufeng He, a postdoctoral researcher at Illinois, who led this work for a research project called Realizing Increased Photosynthetic Efficiency (RIPE). "This study has created a bridge that links the missing part between photosynthesis improvements and higher yields at field scale."

RIPE, which is led by Illinois, is engineering crops to be more productive by improving photosynthesis, the natural process all plants use to convert sunlight into energy and yields. This RIPE research was supported by the Bill & Melinda Gates Foundation, Foundation for Food & Agriculture Research, and U.K. Foreign, Commonwealth & Development Office.

He and his colleagues in the Matthews Research Group used the BioCro modeling framework to simulate soybeans in Illinois fields under normal and elevated CO2 conditions, paying specific attention to two important parameters that affect the plant canopy's photosynthetic process: Jmax and Vcmax. They wanted to determine the effect of boosting these photosynthetic processes at the canopy level, rather than just at the leaf level, and determine if the effects could lead to higher yields under a range of climate conditions.

The team found that the overall returns in plant photosynthesis and pod biomass (yields) were affected when plants were simulated in a high CO2 environment. They also found that correlations between increased photosynthesis and increased yield were dependent on the climate conditions at different stages of soybean growth. Their findings were recently published in Field Crops Research.

"There has been evidence showing that photosynthesis can be improved by modifying certain enzymes, but most of these studies were either done only looking at the leaf-scale impacts or the impacts from a limited number of field trials and seasonal climate conditions," said Megan Matthews, Assistant Professor in the Department of Civil and Environmental Engineering at Illinois and Principal Investigator on the research. "We studied the impacts of seasonal climate conditions at the field level on the improvements of photosynthesis, using realistic climate inputs to run our models and showing how those improvements would vary with different climates."

Read more at Science Daily

High-throughput experiments might ensure a better diagnosis of hereditary diseases

Researchers at the Department of Biology, University of Copenhagen, have now contributed to solving the problem of interpreting the effects of gene variants for a specific gene called GCK. The study has just been published in Genome Biology.

Figure: GCK gene

Rasmus Hartmann-Petersen, Professor at the Department of Biology, explains:
- “The GCK gene, which codes for the enzyme glucokinase, regulates the secretion of insulin in the pancreas. GCK gene variants can therefore cause a form of hereditary diabetes. Although the connection between GCK and diabetes has been known for several years, we have, until now, only known the effect of a few percent of the possible variants of this gene”.

Together with colleagues at the PRISM centre, UCPH, who are currently studying the effects of genetic variations, the researchers measured the effect of all of the possible variants of GCK.

PhD student Sarah Gersing, who is the first author of the article, explains:
- “We used yeast cells to measure the activity of over 9000 different GCK variants. In this way, we were able to generate a list of the effects — both of already known variants, but also of variants that patients might carry, but that have not yet been discovered. This provides us with a reference for future GCK diagnostics”.

Prof. Kresten Lindorff-Larsen, who heads the PRISM centre, continues:
- “Our results are quite unique; not only have we measured the effect of several thousand variants, but for many of the variants, we can now explain what they do to the glucokinase protein. In our centre, we have gathered researchers working across a range of research fields, bridging from data analysis and biophysics to cell biology and medicine, and it is now clear how this broad approach pays off in explaining how diseases arise”.

Gene variants of GCK can, among other things, cause a form of hereditary diabetes called "GCK maturity onset diabetes of the young" (GCK-MODY).

Professor of genetics, dr. med. Torben Hansen, who is also a member of the PRISM centre, says: - "Although GCK-MODY patients exhibit elevated blood glucose levels, this is often not associated with complications. Hence, unlike other forms of diabetes, most GCK-MODY patients might not need to be treated with medication. However, due to missing or inaccurate genetic data, more than half of GCK-MODY patients are classified as having either type 1 or type 2 diabetes – and are therefore unnecessarily medicated. We estimate that approx. 1% of those who have recently been diagnosed with type 2 diabetes in Denmark have a variant in the GCK gene, meaning that they don’t need treatment, or need to be treated differently. Our new map of GCK variants can hopefully help give these patients a more correct diagnosis.”

The next step for PRISM is to transfer these methods to other genes and diseases.
- "We are already well underway with genes involved in e.g., neurodegenerative diseases, and we are trying to develop precise methods that can provide us with insights on disease mechanisms", says Rasmus Hartmann-Petersen.

Kresten Lindorff-Larsen continues:
- "Our data gives us the opportunity to test and develop computational models for variant effects, which will then be transferable to other genes and diseases."

Read more at Science Daily

May 2, 2023

Webb finds water vapor, but from a rocky planet or its star?

The most common stars in the universe are red dwarf stars, which means that rocky exoplanets are most likely to be found orbiting such a star. Red dwarf stars are cool, so a planet has to hug its star in a tight orbit to stay warm enough to potentially host liquid water (meaning it lies in the habitable zone). Such stars are also active, particularly when they are young, releasing ultraviolet and X-ray radiation that could destroy planetary atmospheres. As a result, one important open question in astronomy is whether a rocky planet could maintain, or reestablish, an atmosphere in such a harsh environment.

To help answer that question, astronomers used NASA's James Webb Space Telescope to study a rocky exoplanet known as GJ 486 b. It is too close to its star to be within the habitable zone, with a surface temperature of about 800 degrees Fahrenheit (430 degrees Celsius). And yet, their observations using Webb's Near-Infrared Spectrograph (NIRSpec) show hints of water vapor. If the water vapor is associated with the planet, that would indicate that it has an atmosphere despite its scorching temperature and proximity to its star. Water vapor has been seen on gaseous exoplanets before, but to date no atmosphere has been definitively detected around a rocky exoplanet. However, the team cautions that the water vapor could be on the star itself -- specifically, in cool starspots -- and not from the planet at all.

"We see a signal, and it's almost certainly due to water. But we can't tell yet if that water is part of the planet's atmosphere, meaning the planet has an atmosphere, or if we're just seeing a water signature coming from the star," said Sarah Moran of the University of Arizona in Tucson, lead author of the study.

"Water vapor in an atmosphere on a hot rocky planet would represent a major breakthrough for exoplanet science. But we must be careful and make sure that the star is not the culprit," added Kevin Stevenson of the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, principal investigator on the program.

GJ 486 b is about 30% larger than Earth and three times as massive, which means it is a rocky world with stronger gravity than Earth. It orbits a red dwarf star in just under 1.5 Earth days. It is expected to be tidally locked, with a permanent day side and a permanent night side.
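The "stronger gravity" claim follows directly from the quoted size and mass: surface gravity scales as mass divided by radius squared. A quick back-of-the-envelope check, using only the ratios given above:

```python
# Surface gravity of GJ 486 b relative to Earth, from the
# approximate figures quoted in the article: ~30% larger in
# radius and ~3x Earth's mass. Newtonian surface gravity
# scales as g ~ M / R^2.

radius_ratio = 1.3   # ~30% larger than Earth
mass_ratio = 3.0     # ~3x Earth's mass

surface_gravity_ratio = mass_ratio / radius_ratio**2
print(f"Surface gravity ≈ {surface_gravity_ratio:.2f} × Earth's")
# prints: Surface gravity ≈ 1.78 × Earth's
```

So a visitor to GJ 486 b would weigh nearly 80% more than on Earth, consistent with the article's characterization of a rocky world with stronger gravity.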

GJ 486 b transits its star, crossing in front of the star from our point of view. If it has an atmosphere, then when it transits, starlight would filter through those gases, imprinting fingerprints in the light that allow astronomers to decode its composition through a technique called transmission spectroscopy.
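The idea behind transmission spectroscopy can be sketched numerically. The transit depth is the fraction of starlight the planet blocks, (R_planet / R_star)^2; if an atmosphere absorbs at some wavelength, the planet's effective radius, and hence the depth, is slightly larger there. The numbers below are illustrative assumptions, not measurements of GJ 486 b:

```python
# Toy transmission-spectroscopy calculation. The stellar radius
# (0.33 solar radii, a typical red dwarf) and the 100 km of
# "opaque atmosphere" are hypothetical values for illustration.

R_SUN_KM = 696_000
R_EARTH_KM = 6_371

r_star = 0.33 * R_SUN_KM      # assumed red-dwarf radius
r_planet = 1.3 * R_EARTH_KM   # rocky planet ~30% larger than Earth

def transit_depth(r_planet_km, r_star_km):
    """Fraction of the star's light blocked during transit."""
    return (r_planet_km / r_star_km) ** 2

base = transit_depth(r_planet, r_star)
# At a wavelength where (say) water vapor absorbs, pretend the
# atmosphere adds ~100 km of opaque height to the planet's silhouette:
with_atmo = transit_depth(r_planet + 100, r_star)

print(f"depth outside absorption band: {base:.6f}")
print(f"depth inside absorption band:  {with_atmo:.6f}")
```

Measuring these tiny wavelength-dependent differences in depth, roughly a tenth of a percent of the star's light in this toy case, is what lets astronomers read off the composition of a transiting planet's atmosphere.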

The team observed two transits, each lasting about an hour. They then used three different methods to analyze the resulting data. The results from all three are consistent in that they show a mostly flat spectrum with an intriguing rise at the shortest infrared wavelengths. The team ran computer models considering a number of different molecules, and concluded that the most likely source of the signal was water vapor.

While the water vapor could potentially indicate the presence of an atmosphere on GJ 486 b, an equally plausible explanation is water vapor from the star. Surprisingly, even in our own Sun, water vapor can sometimes exist in sunspots because these spots are very cool compared to the surrounding surface of the star. GJ 486 b's host star is much cooler than the Sun, so even more water vapor would concentrate within its starspots. As a result, it could create a signal that mimics a planetary atmosphere.

"We didn't observe evidence of the planet crossing any starspots during the transits. But that doesn't mean that there aren't spots elsewhere on the star. And that's exactly the physical scenario that would imprint this water signal into the data and could wind up looking like a planetary atmosphere," explained Ryan MacDonald of the University of Michigan in Ann Arbor, one of the study's co-authors.

A water vapor atmosphere would be expected to gradually erode due to stellar heating and irradiation. As a result, if an atmosphere is present, it would likely have to be constantly replenished by volcanoes ejecting steam from the planet's interior. If the water is indeed in the planet's atmosphere, additional observations are needed to narrow down how much water is present.

Future Webb observations may shed more light on this system. An upcoming Webb program will use the Mid-Infrared Instrument (MIRI) to observe the planet's day side. If the planet has no atmosphere, or only a thin atmosphere, then the hottest part of the day side is expected to be directly under the star. However, if the hottest point is shifted, that would indicate an atmosphere that can circulate heat.

Read more at Science Daily

Ecosystem evolution in Africa

Ohio University's Nancy J. Stevens Ph.D., distinguished professor in the Department of Biomedical Sciences in the Heritage College of Osteopathic Medicine, is coauthor on a paper published in the journal Science and funded by the National Science Foundation that documents the evolution of grassland ecosystems on continental Africa.

Collaborating with an extensive team of geologists and paleoanthropologists from universities around the world, led by researchers from Baylor University and the University of Minnesota, the team synthesized data from nine Early Miocene fossil localities in the East African Rift of Kenya and Uganda. They determined that grassy biomes dominated by grasses using the C4 photosynthetic pathway expanded in Eastern Africa more than 10 million years earlier than previously thought.

According to the paper, previous reconstructions of early Miocene ecosystems (15-20 million years ago) suggested that equatorial Africa was covered by a semi-continuous forest, and that open habitats dominated by warm-season, or C4, grasses remained uncommon until 8-10 million years ago. C4 refers to one of the pathways plants use to capture carbon dioxide during photosynthesis. C4 plants produce a four-carbon molecule as the first product of carbon fixation and are better adapted to warm or hot seasonal conditions in both moist and dry environments.

As the researchers gathered expertise about geological features, isotopes and fossils found at the sites, the paradigm of a continuous forest blanketing equatorial Africa during the early Miocene shifted to a more complex mosaic of habitats that already included open environments with C4 grasses.

The result of this research pushes back the oldest evidence of C4 grass-dominated habitats in Africa -- and globally -- by more than 10 million years, with important implications for primate evolution and the origins of tropical C4 grasslands and savanna ecosystems across the African continent and around the world.

"We suspected that we would find C4 plants at some sites, but we didn't expect to find them at as many sites as we did, and in such high abundance," Daniel Peppe, lead author and associate professor at Baylor University, said.

A critical aspect of this work was that the team combined many different lines of evidence: geology, fossil soils, isotopes and phytoliths (plant silica microfossils) to reach their conclusions.

Read more at Science Daily

The science behind the life and times of the Earth's salt flats

Researchers at the University of Massachusetts Amherst and the University of Alaska Anchorage are the first to characterize two different types of surface water in the hyperarid salars -- or salt flats -- that contain much of the world's lithium deposits. This new characterization represents a leap forward in understanding how water moves through such basins, and will be key to minimizing the environmental impact on such sensitive, critical habitats.

"You can't protect the salars if you don't first understand how they work," says Sarah McKnight, lead author of the research that appeared recently in Water Resources Research. She completed this work as part of her Ph.D. in geosciences at UMass Amherst.

Think of a salar as a giant, shallow depression into which water is constantly flowing, both through surface runoff but also through the much slower flow of subsurface waters. In this depression, there's no outlet for the water, and because the bowl is in an extremely arid region, the rate of evaporation is such that enormous salt flats have developed over millennia. There are different kinds of water in this depression; generally the nearer the lip of the bowl, the fresher the water. Down near the bottom of the depression, where the salt flats occur, the water is incredibly salty. However, the salt flats are occasionally pocketed with pools of brackish water. Many different kinds of valuable metals can be found in the salt flats -- including lithium -- while the pools of brackish water are critical habitat for animals like flamingoes and vicuñas.

One of the challenges of studying these systems is that many salars are relatively inaccessible. The one McKnight studies, the Salar de Atacama in Chile, is sandwiched between the Andes and the Atacama Desert. Furthermore, the hydrogeology is incredibly complex: water comes into the system from Andean runoff, as well as via the subsurface aquifer, but the process governing how exactly snow and groundwater eventually turn into salt flats is difficult to pin down.

Add to this the increased mining pressure in the area and the poorly understood effects it may have on water quality, as well as the mega-storms whose intensity and precipitation have increased markedly due to climate change, and you get a system whose workings are difficult to understand.

However, by combining observations of surface water and groundwater with data from the Sentinel-2 satellite and powerful computer modeling, McKnight and her colleagues were able to see something that has so far remained invisible to other researchers.

It turns out that not all water in the salar is the same. What McKnight and her colleagues call "terminal pools" are brackish ponds of water located in what is called the "transition zone," or the part of the salar where the water is increasingly briny but has not yet reached full concentration. Then there are the "transitional pools," which are located right at the boundary between the briny waters and the salt flats. Water comes into each of these pools from different sources -- some of them quite far away from the pools they feed -- and exits the pools via different pathways.

"It's important to define these two different types of surface waters," says McKnight, "because they behave very differently. After a major storm event, the terminal pools flood quickly, and then quickly recede back to their pre-flood levels. But the transitional pools take a very long time -- from a few months to almost a year -- to recede back to their normal level after a major storm."

Read more at Science Daily

Information 'deleted' from the human genome may be what made us human

What the human genome is lacking compared with the genomes of other primates might have been as crucial to the development of humankind as what has been added during our evolutionary history, according to a new study led by researchers at Yale and the Broad Institute of MIT and Harvard.

The new findings, published April 28 in the journal Science, fill an important gap in what is known about historical changes to the human genome. While a revolution in the capacity to collect data from genomes of different species has allowed scientists to identify additions that are specific to the human genome -- such as a gene that was critical for humans to develop the ability to speak -- less attention has been paid to what's missing in the human genome.

For the new study, researchers took an even deeper genomic dive into primate DNA to show that the loss of about 10,000 bits of genetic information -- most as small as a few base pairs of DNA -- over the course of our evolutionary history differentiates humans from chimpanzees, our closest primate relatives. Some of those "deleted" pieces of genetic information are closely related to genes involved in neuronal and cognitive functions, including one associated with the formation of cells in the developing brain.

These 10,000 missing pieces of DNA -- which are present in the genomes of other mammals -- are common to all humans, the Yale team found.

The fact that these genetic deletions became conserved in all humans, the authors say, attests to their evolutionary importance, suggesting that they conferred some biological advantage.

"Often we think new biological functions must require new pieces of DNA, but this work shows us that deleting genetic code can result in profound consequences for traits that make us unique as a species," said Steven Reilly, an assistant professor of genetics at Yale School of Medicine and senior author of the paper.

The paper was one of several published in Science from the Zoonomia Project, an international research collaboration that is cataloging the diversity in mammalian genomes by comparing DNA sequences from 240 species of mammals that exist today.

In their study, the Yale team found that some genetic sequences found in the genomes of most other mammal species, from mice to whales, vanished in humans. But rather than disrupt human biology, they say, some of these deletions created new genetic encodings that eliminated elements that would normally turn genes off.

The deletion of this genetic information, Reilly said, had an effect that was the equivalent of removing three characters -- "n't" -- from the word "isn't" to create a new word, "is."

"[Such deletions] can tweak the meaning of the instructions of how to make a human slightly, helping explain our bigger brains and complex cognition," he said.

The researchers used a technology called Massively Parallel Reporter Assays (MPRA), which can simultaneously screen and measure the function of thousands of genetic changes among species.

Read more at Science Daily