Apr 23, 2022

Two largest marsquakes to date recorded from planet's far side

The seismometer placed on Mars by NASA's InSight lander has recorded its two largest seismic events to date: a magnitude 4.2 and a magnitude 4.1 marsquake. The pair are the first recorded events to occur on the planet's far side from the lander and are five times stronger than the previous largest event recorded.
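As a rough check on what "five times stronger" means in energy terms: under the standard Gutenberg-Richter scaling, each full magnitude unit corresponds to roughly 32 times more seismic energy. The article does not state the previous record's magnitude, so the sketch below assumes a value near 3.7; with that assumption, the quoted factor of five falls out directly.

```python
# Sketch: how "five times stronger" follows from the magnitude scale.
# Assumes the standard Gutenberg-Richter energy relation log10(E) ~ 1.5*M + const
# and a previous-record magnitude near 3.7 (an assumption, not from the article).

def energy_ratio(m_new: float, m_old: float) -> float:
    """Ratio of seismic energy released by events of the given magnitudes."""
    return 10 ** (1.5 * (m_new - m_old))

print(energy_ratio(4.2, 3.7))  # ~5.6: roughly five times the energy
```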

Seismic wave data from the events could help researchers learn more about the interior layers of Mars, particularly its core-mantle boundary, researchers from InSight's Marsquake Service (MQS) report in The Seismic Record.

Anna Horleston of the University of Bristol and colleagues were able to identify reflected PP and SS waves from the magnitude 4.2 event, called S0976a, and locate its origin in the Valles Marineris, a massive canyon network that is one of Mars' most distinguishing geological features and one of the largest graben systems in the Solar System. Earlier orbital images of cross-cutting faults and landslides suggested the area would be seismically active, but the new event is the first confirmed seismic activity there.

S1000a, the magnitude 4.1 event recorded 24 days later, was characterized by reflected PP and SS waves as well as Pdiff waves, small amplitude waves that have traversed the core-mantle boundary. This is the first time Pdiff waves have been spotted by the InSight mission. The researchers could not definitively pinpoint S1000a's location, but like S0976a it originated on Mars' far side. The seismic energy from S1000a also holds the distinction of being the longest recorded on Mars, lasting 94 minutes.

Both marsquakes occurred in the core shadow zone, a region where P and S waves can't travel directly to InSight's seismometer because they are stopped or bent by the core. PP and SS waves don't follow a direct path, but rather are reflected at least once at the surface before traveling to the seismometer.

"Recording events within the core shadow zone is a real steppingstone for our understanding of Mars. Prior to these two events the majority of the seismicity was within about 40 degrees distance of InSight," said Savas Ceylan, a co-author from ETH Zürich. "Being within the core shadow, the energy traverses parts of Mars we have never been able to seismologically sample before."

The two marsquakes differ in some important ways. S0976a is characterized by only low frequency energy, like many of the quakes identified so far on the planet, while S1000a has a very broad frequency spectrum. "[S1000a] is a clear outlier in our catalog and will be key to our further understanding of Martian seismology," Horleston said.

S0976a is likely to have a much deeper origin than S1000a, she noted. "The latter event has a frequency spectrum much more like a family of events that we observe that have been modeled as shallow, crustal quakes, so this event may have occurred near the surface. S0976a looks like many of the events we have located to Cerberus Fossae -- an area of extensive faulting -- that have depths modeled to be around 50 kilometers or more and it is likely that this event has a similar, deep, source mechanism."

Compared to the rest of the seismic activity detected by InSight, the two new far-side quakes are true outliers, the researchers said.

Read more at Science Daily

New miniature heart could help speed heart disease cures

There's no safe way to get a close-up view of the human heart as it goes about its work: you can't just pop it out, take a look, then slot it back in. Scientists have tried different ways to get around this fundamental problem: they've hooked up cadaver hearts to machines to make them pump again, and attached lab-grown heart tissues to springs to watch them expand and contract. Each approach has its flaws: reanimated hearts can only beat for a few hours, and springs can't replicate the forces at work on the real muscle. But getting a better understanding of this vital organ is urgent: in America, someone dies of heart disease every 36 seconds, according to the Centers for Disease Control and Prevention.

Now, an interdisciplinary team of engineers, biologists, and geneticists has developed a new way of studying the heart: they've built a miniature replica of a heart chamber from a combination of nanoengineered parts and human heart tissue. There are no springs or external power sources -- like the real thing, it just beats by itself, driven by the live heart tissue grown from stem cells. The device could give researchers a more accurate view of how the organ works, allowing them to track how the heart grows in the embryo, study the impact of disease, and test the potential effectiveness and side effects of new treatments -- all at zero risk to patients and without leaving a lab.

The Boston University-led team behind the gadget -- nicknamed miniPUMP, and officially known as the cardiac miniaturized Precision-enabled Unidirectional Microfluidic Pump -- says the technology could also pave the way for building lab-based versions of other organs, from lungs to kidneys. Their findings have been published in Science Advances.

"We can study disease progression in a way that hasn't been possible before," says Alice White, a BU College of Engineering professor and chair of mechanical engineering. "We chose to work on heart tissue because of its particularly complicated mechanics, but we showed that, when you take nanotechnology and marry it with tissue engineering, there's potential for replicating this for multiple organs."

According to the researchers, the device could eventually speed up the drug development process, making it faster and cheaper. Instead of spending millions -- and possibly decades -- moving a medicinal drug through the development pipeline only to see it fall at the final hurdle when tested in people, researchers could use the miniPUMP at the outset to better predict success or failure.

The project is part of CELL-MET, a multi-institutional National Science Foundation Engineering Research Center in Cellular Metamaterials that's led by BU. The center's goal is to regenerate diseased human heart tissue, building a community of scientists and industry experts to test new drugs and create artificial implantable patches for hearts damaged by heart attacks or disease.

"Heart disease is the number one cause of death in the United States, touching all of us," says White, who was chief scientist at Alcatel-Lucent Bell Labs before joining BU in 2013. "Today, there is no cure for a heart attack. The vision of CELL-MET is to change this."

Personalized Medicine

There's a lot that can go wrong with your heart. When it's firing properly on all four cylinders, the heart's two top and two bottom chambers keep your blood flowing so that oxygen-rich blood circulates and feeds your body. But when disease strikes, the arteries that carry blood away from your heart can narrow or become blocked, valves can leak or malfunction, the heart muscle can thin or thicken, or electrical signals can short, causing too many -- or too few -- beats. Unchecked, heart disease can lead to discomfort -- like breathlessness, fatigue, swelling, and chest pain -- and, for many, death.

"The heart experiences complex forces as it pumps blood through our bodies," says Christopher Chen, BU's William F. Warren Distinguished Professor of Biomedical Engineering. "And while we know that heart muscle changes for the worse in response to abnormal forces -- for example, due to high blood pressure or valve disease -- it has been difficult to mimic and study these disease processes. This is why we wanted to build a miniaturized heart chamber."

At just 3 square centimeters, the miniPUMP isn't much bigger than a postage stamp. Built to act like a human heart ventricle -- or muscular lower chamber -- its custom-made components are fitted onto a thin piece of 3D-printed plastic. There are miniature acrylic valves, opening and closing to control the flow of liquid -- water, in this case, rather than blood -- and small tubes, funneling that fluid just like arteries and veins. And beating away in one corner, the muscle cells that make heart tissue contract, cardiomyocytes, made using stem cell technology.

"They're generated using induced pluripotent stem cells," says Christos Michas (ENG'21), a postdoctoral researcher who designed and led the development of the miniPUMP as part of his PhD thesis.

To make the cardiomyocytes, researchers take a cell from an adult -- it could be a skin cell, blood cell, or just about any other cell -- reprogram it into an embryonic-like stem cell, then transform that into a heart cell. In addition to giving the device literal heart, Michas says the cardiomyocytes also give the system enormous potential in helping pioneer personalized medicines. Researchers could place diseased tissue in the device, for instance, then test a drug on that tissue and watch how its pumping ability is affected.

"With this system, if I take cells from you, I can see how the drug would react in you, because these are your cells," says Michas. "This system replicates better some of the function of the heart, but at the same time, gives us the flexibility of having different humans that it replicates. It's a more predictive model to see what would happen in humans -- without actually getting into humans."

According to Michas, that could allow scientists to assess a new heart disease drug's chances of success long before heading into clinical trials. Many drug candidates fail because of their adverse side effects.

"At the very beginning, when we're still playing with cells, we can introduce these devices and have more accurate predictions of what will happen in clinical trials," says Michas. "It will also mean that the drugs might have fewer side effects."

Thinner than a Human Hair

One of the key parts of the miniPUMP is an acrylic scaffold that supports, and moves with, the heart tissue as it contracts. A series of superfine concentric spirals -- thinner than a human hair -- connected by horizontal rings, the scaffold looks like an artsy piston. It's an essential piece of the puzzle, giving structure to the heart cells -- which would just be a formless blob without it -- but not exerting any active force on them.

"We don't think previous methods of studying heart tissue capture the way the muscle would respond in your body," says Chen, who's also director of BU's Biological Design Center and an associate faculty member at Harvard University's Wyss Institute for Biologically Inspired Engineering. "This gives us the first opportunity to build something that mechanically is more similar to what we think the heart is actually experiencing -- it's a big step forward."

To print each of the tiny components, the team used a process called two-photon direct laser writing -- a more precise version of 3D printing. When light is beamed into a liquid resin, the areas it touches turn solid; because the light can be aimed with such accuracy -- focused to a tiny spot -- many of the components in the miniPUMP are measured in microns, smaller than a dust particle.

The decision to make the pump so small, rather than life-size or larger, was deliberate and is crucial to its functioning.

"The structural elements are so fine that things that would ordinarily be stiff are flexible," says White. "By analogy, think about optical fiber: a glass window is very stiff, but you can wrap a glass optical fiber around your finger. Acrylic can be very stiff, but at the scale involved in the miniPUMP, the acrylic scaffold is able to be compressed by the beating cardiomyocytes."

Chen says that the pump's scale shows "that with finer printing architectures, you might be able to create more complex organizations of cells than we thought was possible before." At the moment, when researchers try to create cells, he says, whether heart cells or liver cells, they're all disorganized -- "to get structure, you have to cross your fingers and hope the cells create something." That means the tissue scaffolding pioneered in the miniPUMP has big potential implications beyond the heart, laying the foundation for other organs-on-a-chip, from kidneys to lungs.

Refining the Technology

According to White, the breakthrough is possible because of the range of experts on CELL-MET's research team, which included not just mechanical, biomedical, and materials engineers like her, Chen, and Arvind Agarwal of Florida International University, but also geneticist Jonathan G. Seidman of Harvard Medical School and cardiovascular medicine specialist Christine E. Seidman of Harvard Medical School and Brigham and Women's Hospital. It's a breadth of experience that's benefited not just the project, but Michas. An electrical and computer engineering student as an undergraduate, he says he'd "never seen cells in my life before starting this project." Now, he's preparing to start a new position with Seattle-based biotech Curi Bio, a company that combines stem cell technology, tissue biosystems, and artificial intelligence to power the development of drugs and therapeutics.

"Christos is someone who understands the biology," says White, "can do the cell differentiation and tissue manipulation, but also understands nanotechnology and what's required, in an engineering way, to fabricate the structure."

Read more at Science Daily

Apr 22, 2022

Deepest sediment core collected in the Atlantic Ocean

A team of scientists, engineers, and ship's crew on the research vessel Neil Armstrong, operated by the Woods Hole Oceanographic Institution (WHOI), recently collected a 38-foot-long cylindrical sediment sample from the deepest part of the Puerto Rico Trench, nearly 5 miles below the surface. The core sets a record as the deepest ever collected in the Atlantic Ocean, and is possibly the deepest core collected in any ocean.

The collection took place during a collaborative research cruise off Puerto Rico between February and March 2022. The group responsible for the core collection was led by Prof. Steven D'Hondt and Dr. Robert Pockalny from the University of Rhode Island's Graduate School of Oceanography and included researchers and technicians from WHOI, University of Rhode Island, University of California San Diego, Oregon State University, University of Washington, University of Puerto Rico Mayagüez, and University of Munich.

Long sediment cores are generally collected by allowing a core pipe with a lead weight on top to fall through the water and into soft sediment that collects on the seafloor over long periods of time. When the pipe is pulled out of the seafloor and back up to the ship, the recovered sediment inside can be used to study Earth's environmental conditions and climate dating back tens or hundreds of thousands, or even millions, of years ago.

Scientists are also interested in understanding genetic traits that enable microscopic organisms to survive within seafloor sediments. The main objective of this expedition was to better understand how microbes at different depths below the seafloor have adapted to vastly different environmental conditions present across the entire depth range of the trench. Over the course of three weeks at sea, the team collected cores from a water depth of about 50 meters (165 feet) to the trench's maximum depth of about 8,385 meters (27,510 feet).

"We took these cores to learn how microbes that live beneath the seafloor respond to pressure," said D'Hondt. "Our ultimate objective is to improve understanding of how organisms in extreme environments engage in the world around them. "Our team's success in extracting this core from the deepest part of the Atlantic Ocean will enable us to make a tremendous advance in our understanding of this little-known part of life on Earth."

The core collections were made possible by the long core system originally developed at WHOI in 2007 by then-research specialist Jim Broda for the research vessel Knorr. After the ship's retirement, the system was adapted to fit the slightly shorter vessel Neil Armstrong. After this expedition, the long corer will be transferred to the OSU Marine Sediment Sampling Group, which is funded by the U.S. National Science Foundation and supports coring operations throughout the U.S. academic research fleet, so it can be made available to the entire oceanographic community.

Read more at Science Daily

Indiana Jones was right all along: Research shows the smaller the scorpion, the deadlier

Researchers at NUI Galway have shown, for the first time, that smaller species of scorpions, with smaller pincers, have more potent venoms than larger species with robust claws.

The scientists tested the theory from Indiana Jones and the Kingdom of the Crystal Skull, which warned of the dangers of small scorpions, and that "when it comes to scorpions, the bigger the better."

While this may have simply been a throwaway movie line from the adventurous archaeologist Indiana Jones, the research shows there is truth to it.

The team of scientists at NUI Galway's Ryan Institute put the quip to the test by analysing 36 species of scorpions to show that larger scorpions have less potent venoms and really are better in terms of avoiding a nasty sting.

The results of the research have been published in the international journal Toxins.

It shows the smallest scorpions in the analysis, like the Brazilian yellow scorpion, were over 100 times more potent than the largest species studied, such as the rock scorpion.

The potency pattern was not just about body size but also pincer size: venoms in species with the smallest pincers, such as the South African thick-tail scorpion, were more than 10 times more potent than those of species with the largest and most robust pincers, such as the Israeli gold scorpion.
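Potency in studies like this is typically reported as an LD50, the dose that kills half of a test population, so "more potent" means a lower LD50. The sketch below shows that bookkeeping; the LD50 values are hypothetical placeholders chosen only to mirror the ratios quoted above, not figures from the paper.

```python
# Sketch: comparing venom potency via LD50 (a LOWER LD50 = a MORE potent venom).
# The values below are hypothetical placeholders, not data from the study.

ld50_mg_per_kg = {
    "Brazilian yellow scorpion (small)": 0.4,   # hypothetical
    "rock scorpion (large)": 45.0,              # hypothetical
}

def potency_ratio(ld50_a: float, ld50_b: float) -> float:
    """How many times more potent venom A is than venom B."""
    return ld50_b / ld50_a

small, large = ld50_mg_per_kg.values()
print(potency_ratio(small, large))  # >100x, matching the reported pattern
```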

Dr Kevin Healy, Lecturer in Zoology at NUI Galway and senior author of the study, said: "Outside of entertaining movie trivia there are good evolutionary reasons to expect the results and important medical implications for such patterns."

The researchers highlighted that while scorpions use both their venomous sting and their pincers to capture prey and to defend themselves, there is an evolutionary trade-off between these two weapons. Energy spent growing bigger pincers means less energy available for the chemical arsenal. As a result, larger scorpions, which can rely on their physical size, are less dependent on venom, while smaller species have evolved more potent venoms.

Dr Healy added: "When we look at the most potent, and dangerous, scorpion venoms we find they tend to be associated with species such as the deathstalker which are relatively small. In contrast, the biggest species such as rock scorpions have venoms that are likely to only cause slight pain."

Alannah Forde, an NUI Galway graduate student and lead author of the study, said: "Not only did we find that bigger is better -- when it comes to people being stung -- we also found that bigger pincers are better when it comes to assessing the danger level of a scorpion. While species such as the large-clawed scorpion might be small to medium in size, they mainly rely on their large pincers instead of their relatively weak venom."

Scorpion stings are a global health problem, with more than 1 million cases and thousands of deaths every year. Identifying the species involved in a sting is vital for treatment, hence general rules such as "bigger is better" are often used to guide it.

The team now aims to apply these evolutionary rules to the question of what makes some species more potent than others, to help develop better medical approaches to scorpion stings.

Read more at Science Daily

Evidence suggests cancer is not as purely genetic as once thought

While cancer is a genetic disease, the genetic component is just one piece of the puzzle -- and researchers need to consider environmental and metabolic factors as well, according to a research review by a leading expert at the University of Alberta.

Nearly all the theories about the causes of cancer that have emerged over the past several centuries can be sorted into three larger groups, said David Wishart, professor in the departments of biological sciences and computing science. The first is cancer as a genetic disease, focusing on the genome, or the set of genetic instructions that you are born with. The second is cancer as an environmental disease, focusing on the exposome, which includes everything your body is exposed to throughout your life. The third is cancer as a metabolic disease, focusing on the metabolome, all the chemical byproducts of the process of metabolism.

The metabolic perspective hasn't had much research until now, but it's gaining the interest of more scientists, who are beginning to understand the metabolome's role in cancer.

The genome, exposome and metabolome operate together in a feedback loop as cancer develops and spreads.

According to the data, heritable cancers account for just five to 10 per cent of all cancers, Wishart said. The other 90 to 95 per cent are initiated by factors in the exposome, which in turn trigger genetic mutations.

"That's an important thing to consider, because it says that cancer isn't inevitable."

The metabolome is critical to the process, as those genetically mutated cancer cells are sustained by the cancer-specific metabolome.

"Cancer is genetic, but often the mutation itself isn't enough," said Wishart. As cancer develops and spreads in the body, it creates its own environment and introduces certain metabolites. "It becomes a self-fuelled disease. And that's where cancer as a metabolic disorder becomes really important."

The multi-omics perspective, in which the genome, exposome and metabolome are all considered in unison when thinking about cancer, is showing promise for finding treatments and for overcoming the limitations of looking at only one of these factors.

For example, Wishart explained, researchers who focus only on the genetic perspective are looking to address particular mutations. The problem is, there are around 1,000 genes that can become cancerous when mutated, and it typically takes at least two different mutations within these cells for cancer to grow. That means there are a million potential mutation pairs, and "it becomes hopeless" to narrow down the possibilities when seeking new treatments.
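The arithmetic behind that "million" is easy to verify with the article's round figure of 1,000 genes; even counting only unordered pairs of distinct genes still leaves roughly half a million combinations to screen.

```python
# Sketch: the combinatorics behind "a million potential mutation pairs",
# using the article's round figure of ~1,000 cancer-associated genes.
from math import comb

n_genes = 1_000
print(n_genes * n_genes)   # 1,000,000 ordered gene pairs
print(comb(n_genes, 2))    # 499,500 unordered pairs of distinct genes
```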

But when considering cancer from the metabolic perspective, there are just four major metabolic types, said Wishart. Rather than trying to find a treatment plan for one specific mutation combination amongst a million, determining the patient's cancer metabolic type can immediately guide doctors in deciding on the best treatment for their specific cancer.

"It really doesn't make a difference where the cancer is -- it's something you've got to get rid of. It's how it thrives or grows that matters," said Wishart. "It becomes a question of, 'What's the fuel that powers this engine?'"

Wishart cautioned that health-care providers still need a mix of therapeutics for cancer, and noted that a deeper understanding of the metabolome and its role in the cancer feedback loop is also critical to preventing cancer.

"If we understand the causes of cancer, then we can start highlighting the known causes, the lifestyle issues that introduce or increase our risk," he said.

Read more at Science Daily

Earliest geochemical evidence of plate tectonics found in 3.8-billion-year-old crystal

A handful of ancient zircon crystals found in South Africa hold the oldest evidence of subduction, a key element of plate tectonics, according to a new study published today in AGU Advances, AGU's journal for high-impact, open-access research and commentary across the Earth and space sciences.

These rare time capsules from Earth's youth point to a transition around 3.8 billion years ago from a long-lived, stable rock surface to the active processes that shape our planet today, providing a new clue in a hot debate about when plate tectonics was set in motion.

Earth's crust and the top layer of mantle just under it are broken up into rigid plates that move slowly on top of viscous but mobile lower layers of mantle rock. Heat from Earth's core drives this slow but inexorable motion, responsible for volcanoes, earthquakes, and the uplift of mountain ranges.

Estimates for when this process revved up and modern crust formed range from over 4 billion years ago to just 800 million years ago. Uncertainty arises because the geologic record from Earth's youth is sparse, due to the surface recycling effect of plate tectonics itself. Almost nothing remains from the Hadean Eon, Earth's first 500 million years.

"The Hadean Earth is this big mystery box," said Nadja Drabon, a geologist at Harvard University and the lead author of the new study.

Tiny time capsules

In an exciting step forward in solving this mystery, in 2018 Drabon and her colleagues unearthed a chronological series of 33 microscopic zircon crystals from a rare, ancient block of crust in the Barberton Greenstone Belt in South Africa. The crystals formed at different times over a critical 800-million-year span, from 4.15 to 3.3 billion years ago.

Zircon is a relatively common accessory mineral in Earth's crust, but ancient representatives from the Hadean Eon, 4 to 4.56 billion years ago, are exceedingly rare, found in only 12 places on Earth, and usually in numbers fewer than three at each location.

Hafnium isotopes and trace elements preserved in the Greenstone Belt zircons told a story about the conditions on Earth at the time they crystallized. Zircons 3.8 billion years old and younger appeared to have formed in rock experiencing pressures and melting similar to modern subduction zones, suggesting the crust may have started moving.

"When I say plate tectonics, I'm specifically referring to an arc setting, when one plate goes under another and you have all that volcanism -- think of the Andes, for example, and the Ring of Fire," Drabon said, describing a classic example of subduction.

"At 3.8 billion years there is a dramatic shift where the crust is destabilized, we have new rocks forming and we see geochemical signatures becoming more and more similar to what we see in modern plate tectonics," Drabon said.

In contrast, the older zircons preserved evidence of a global cap of "protocrust" derived from remelting mantle rock that had remained stable for 600 million years, the study found.

Signs of global change

The new study found a similar transition to conditions resembling modern subduction in zircons from other locations around the world, dating to within about 200 million years of the South African zircons.

"We see evidence for a significant change on the Earth around 3.8 to 3.6 billion years ago and evolution toward plate tectonics is one clear possibility." Drabon said.

While not conclusive, the results suggest a global change may have begun, Drabon said, possibly starting and stopping in scattered locations before settling into the efficient global engine of constantly moving plates we see today.

Plate tectonics shapes Earth's atmosphere as well as its surface. Release of volcanic gases and production of new silicate rock, which consumes large amounts of carbon dioxide from the atmosphere, temper large temperature swings from too much or too little greenhouse gas.

"Without all of the recycling and new crust forming, we might be going back and forth between boiling hot and freezing cold," Drabon said. "It's kind of like a thermostat for the climate."

Plate tectonics has, so far, only been observed on Earth, and may be essential to making a planet livable, Drabon said, which makes the origins of plate motions of interest in research into the early development of life.

Read more at Science Daily

Apr 21, 2022

Dying stars' cocoons might explain fast blue optical transients

Ever since they were discovered in 2018, fast blue optical transients (FBOTs) have utterly surprised and completely confounded both observational and theoretical astrophysicists.

So hot that they glow blue, these mysterious objects are the brightest known optical phenomenon in the universe. But with only a few discovered so far, FBOTs' origins have remained elusive.

Now a Northwestern University astrophysics team presents a bold new explanation for the origin of these curious anomalies. Using a new model, the astrophysicists believe FBOTs could result from the actively cooling cocoons that surround jets launched by dying stars. It marks the first astrophysics model that is fully consistent with all observations related to FBOTs.

The research was published April 11 in the Monthly Notices of the Royal Astronomical Society.

As a massive star collapses, it can launch outflows of debris at rates near the speed of light. These outflows, or jets, collide into collapsing layers of the dying star to form a "cocoon" around the jet. The new model shows that as the jet pushes the cocoon outward -- away from the core of the collapsing star -- it cools, releasing heat as an observed FBOT emission.

"A jet starts deep inside of a star and then drills its way out to escape," said Northwestern's Ore Gottlieb, who led the study. "As the jet moves through the star, it forms an extended structure, known as the cocoon. The cocoon envelopes the jet, and it continues to do so even after the jet escapes the star, this cocoon escapes with the jet. When we calculated how much energy the cocoon has, it turned out to be as powerful as an FBOT."

Gottlieb is a Rothschild Fellow in Northwestern's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA). He coauthored the paper with CIERA member Sasha Tchekhovskoy, an assistant professor of physics and astronomy in Northwestern's Weinberg College of Arts and Sciences.

The hydrogen problem

FBOTs (pronounced F-bot) are a type of cosmic explosion initially detected in the optical wavelength. As their name implies, the transients fade almost as quickly as they appear. FBOTs reach peak brightness within a matter of days and then quickly fade -- much faster than standard supernovae rise and decay.

After FBOTs were first discovered in 2018, astrophysicists wondered if the mysterious events were related to another transient class: gamma ray bursts (GRBs). The strongest and brightest explosions across all wavelengths, GRBs also are associated with dying stars. When a massive star exhausts its fuel and collapses into a black hole, it launches jets to produce a powerful gamma ray emission.

"The reason why we think GRBs and FBOTs might be related is because both are very fast -- moving at close to the speed of light -- and both are asymmetrically shaped, breaking the spherical shape of the star," Gottlieb said. "But there was a problem. Stars that produce GRBs lack hydrogen. We don't see any signs of hydrogen in GRBs, whereas in FBOTs, we see hydrogen everywhere. So, it could not be the same phenomenon."

Using their new model, Gottlieb and his coauthors think they might have found an answer to this problem. Hydrogen-rich stars tend to house hydrogen in their outermost layer -- a layer too thick for a jet to penetrate.

"Basically, the star would be too massive for the jet to pierce through," Gottlieb said. "So the jet will never make it out of the star, and that's why it fails to produce a GRB. However, in these stars, the dying jet transfers all its energy to the cocoon, which is the only component to escape the star. The cocoon will emit FBOT emissions, which will include hydrogen. This is another area where our model is fully consistent with all FBOT observations."

Putting the picture together

Although FBOTs glow bright in optical wavelengths, they also emit radio waves and X-rays. Gottlieb's model explains these too.

When the cocoon interacts with the dense gas surrounding the star, this interaction heats up stellar material to release a radio emission. And when the cocoon expands far enough away from the black hole (formed from the collapsed star), X-rays can leak out from the black hole. The X-rays join radio and optical light to form a full picture of the FBOT event.

While Gottlieb is encouraged by his team's findings, he says more observations and models are needed before we can definitively understand FBOTs' mysterious origins.

"This is a new class of transients, and we know so little about them," Gottlieb said. "We need to detect more of them earlier in their evolution before we can fully understand these explosions. But our model is able to draw a line among supernovae, GRBs and FBOTs, which I think is very elegant."

"This study paves the way for more advanced simulations of FBOTs," Tchekovskoy said. "This next-generation model will allow us to directly connect the physics of the central black hole to the observables, enabling us to reveal otherwise hidden physics of the FBOT central engine."

Read more at Science Daily

Scientists resurrect ancient enzymes to improve photosynthesis

A Cornell University study describes a breakthrough in the quest to improve photosynthesis in certain crops, a step toward adapting plants to rapid climate changes and increasing yields to feed a projected 9 billion people by 2050.

The study, "Improving the Efficiency of Rubisco by Resurrecting Its Ancestors in the Family Solanaceae," published April 15 in Science Advances. The senior author is Maureen Hanson, the Liberty Hyde Bailey Professor of Plant Molecular Biology in the College of Agriculture and Life Sciences. First author Myat Lin is a postdoctoral research associate in Hanson's lab.

The authors developed a computational technique to predict favorable gene sequences that make Rubisco, a key plant enzyme for photosynthesis. The technique allowed the scientists to identify promising candidate enzymes that could be engineered into modern crops and, ultimately, make photosynthesis more efficient and increase crop yields.

Their method relied on evolutionary history, where the researchers predicted Rubisco genes from 20-30 million years ago, when Earth's carbon dioxide (CO2) levels were higher than they are today and the Rubisco enzymes in plants were adapted to those levels.

By resurrecting ancient Rubisco, early results show promise for developing faster, more efficient Rubisco enzymes to incorporate into crops and help them adapt to hot, dry future conditions, as human activities are increasing heat-trapping CO2 gas concentrations in Earth's atmosphere.

The study describes predictions of 98 Rubisco enzymes at key moments in the evolutionary history of plants in the Solanaceae family, which include tomato, pepper, potato, eggplant and tobacco. Researchers use tobacco as the experimental model for their studies of Rubisco.

"We were able to identify predicted ancestral enzymes that do have superior qualities compared to current-day enzymes," Hanson said. Lin developed the new technique for identifying predicted ancient Rubisco enzymes.

Scientists have known that they can increase crop yields by accelerating photosynthesis, where plants convert CO2, water and light into oxygen and sugars that plants use for energy and for building new tissues.

For many years, researchers have focused on Rubisco, a slow enzyme that pulls (or fixes) carbon from CO2 to create sugars. Aside from being slow, Rubisco also sometimes catalyzes a reaction with oxygen in the air; by so doing, it creates a toxic byproduct, wastes energy and makes photosynthesis inefficient.
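The competition between those two reactions is commonly summarized by the relation v_c / v_o = S × [CO2] / [O2], where S is the enzyme's specificity factor. The sketch below plugs in typical textbook values (assumptions, not measurements from this study) to show why a higher-CO2 atmosphere makes Rubisco waste less energy on oxygenation.

```python
# Sketch: relative carboxylation vs. oxygenation rates, v_c/v_o = S*[CO2]/[O2].
# S and the concentrations are typical textbook estimates for an illuminated
# chloroplast, not values from this study.

S = 90.0        # specificity factor for a typical plant Rubisco (assumed)
O2_uM = 250.0   # dissolved O2 in the chloroplast stroma, micromolar (assumed)

for co2_uM in (8.0, 16.0):  # roughly today's level vs. a high-CO2 ancient air
    ratio = S * co2_uM / O2_uM
    print(f"[CO2] = {co2_uM} uM -> ~{ratio:.1f} carboxylations per oxygenation")
```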

Hanson's lab had previously tried to use Rubisco from cyanobacteria (blue-green algae), which is faster but also reacts readily with oxygen, forcing the researchers to try to create micro-compartments to protect the enzyme from oxygen, with mixed results. Other researchers have tried to engineer more optimal Rubisco by making changes in the enzyme's amino acids, though little was known about which changes would lead to desired results.

In this study, Lin reconstructed a phylogeny -- a tree-like diagram showing evolutionary relatedness among groups of organisms -- of Rubisco, using Solanaceae plants.

"By getting a lot of [genetic] sequences of Rubisco in existing plants, a phylogenetic tree could be constructed to figure out which Rubiscos likely existed 20 to 30 million years ago," Hanson said.

The advantage of identifying potential ancient Rubisco sequences is that carbon dioxide levels were possibly as high as 500 to 800 parts per million (ppm) in the atmosphere 25 million to 50 million years ago. Today, heat-trapping CO2 levels are rising sharply due to many human activities, with current measurements at around 420 ppm, after staying relatively constant under 300 ppm for hundreds of millennia until the 1950s.

Lin, Hanson and colleagues then used an experimental system developed for tobacco in Hanson's lab, and described in a 2020 Nature Plants paper, which employs E. coli bacteria to test in a single day the efficacy of different versions of Rubisco. Similar tests done in plants take months to verify.

The team found that ancient Rubisco enzymes predicted from modern-day Solanaceae plants showed real promise for being more efficient.

"For the next step, we want to replace the genes for the existing Rubisco enzyme in tobacco with these ancestral sequences using CRISPR [gene-editing] technology, and then measure how it affects the production of biomass," Hanson said. "We certainly hope that our experiments will show that by adapting Rubisco to present day conditions, we will have plants that will give greater yields."

Read more at Science Daily

Pacific Northwest wildfires alter air pollution patterns across North America

Increasingly large and intense wildfires in the Pacific Northwest are altering the seasonal pattern of air pollution and causing a spike in unhealthy pollutants in August, new research finds. The smoke is undermining clean air gains, posing potential risks to the health of millions of people, according to the study.

The research, led by scientists at the National Center for Atmospheric Research (NCAR), found that levels of carbon monoxide -- a gas that indicates the presence of other air pollutants -- have increased sharply as wildfires spread in August. Carbon monoxide levels are normally lower in the summer because of chemical reactions in the atmosphere related to changes in sunlight, and the finding that their levels have jumped indicates the extent of the smoke's impacts.

"Wildfire emissions have increased so substantially that they're changing the annual pattern of air quality across North America," said NCAR scientist Rebecca Buchholz, the lead author. "It's quite clear that there is a new peak of air pollution in August that didn't used to exist."

Although carbon monoxide generally is not a significant health concern outdoors, the gas indicates the presence of more harmful pollutants, including aerosols (airborne particulates) and ground-level ozone that tends to form on hot summer days.

The research team used satellite-based observations of atmospheric chemistry and global inventories of fires to track wildfire emissions during most of the past two decades, as well as computer modeling to analyze the potential impacts of the smoke. They focused on three North American regions: the Pacific Northwest, the central United States, and the Northeast.

Buchholz said the findings were particularly striking because carbon monoxide levels have been otherwise decreasing, both globally and across North America, due to improvements in pollution-control technologies.

The study was published this week in Nature Communications. The research was funded in part by the U.S. National Science Foundation, NCAR's sponsor. The paper was co-authored by researchers from the University of Colorado, Boulder; Columbia University; NASA; Tsinghua University; and Colorado State University.

Increasing impacts on air pollution

Wildfires have been increasing in the Pacific Northwest and other regions of North America, due to a combination of climate change, increased development, and land use policies. The fires are becoming a larger factor in air pollution, especially as emissions from human activities are diminishing because of more efficient combustion processes in motor vehicles and industrial facilities.

To analyze the impacts of fires, Buchholz and her collaborators used data from two instruments on the NASA Terra satellite: MOPITT (Measurements of Pollution in the Troposphere), which has tracked carbon monoxide continually since 2002; and MODIS (Moderate Resolution Imaging Spectroradiometer), which detects fires and provides information on aerosols. They also studied four inventories of wildfire emissions, which rely on MODIS data.

The scientists focused on the period from 2002, the beginning of a consistent and long-term record of MOPITT data, to 2018, the last year for which complete observations were available at the time when they began their study.

The results showed an increase in carbon monoxide levels across North America in August, which corresponded with the peak burning season of the Pacific Northwest. The trend was especially pronounced from 2012 to 2018, when the Pacific Northwest fire season became much more active, according to the emissions inventories. Data from the MODIS instrument revealed that aerosols also showed an upward trend in August.
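One simple way to expose an emerging August peak in a monthly record is to compare each month's average between an early and a late period. The sketch below does this on synthetic data; it is purely illustrative and is not the study's actual MOPITT/MODIS trend analysis.

```python
# Sketch: spotting a late-period August rise in a monthly time series by
# differencing early vs. late monthly means. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2002, 2019)
co = 100 + rng.normal(0, 2, (years.size, 12))  # fake CO index per year/month
co[years >= 2012, 7] += 8                      # inject an August (month 8) rise

early = co[years < 2012].mean(axis=0)
late = co[years >= 2012].mean(axis=0)
for m in np.argsort(late - early)[::-1][:3]:
    print(f"month {m + 1}: +{late[m] - early[m]:.1f}")  # month 8 tops the list
```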

To determine whether the higher pollution levels were caused by the fires, the scientists eliminated other potential emission sources. They found that carbon monoxide levels upwind of the Pacific Northwest, over the Pacific Ocean, were much lower in August -- a sign that the pollution was not blowing in from Asia. They also found that fire season in the central U.S. and the Northeast did not coincide with the August increase in pollution, which meant that local fires in those regions were not responsible. In addition, they studied a pair of fossil fuel emission inventories, which showed that carbon monoxide emissions from human activities did not increase in any of the three study regions from 2012 to 2018.

"Multiple lines of evidence point to the worsening wildfires in the Pacific Northwest as the cause of degraded air quality," Buchholz said. "It's particularly unfortunate that these fires are undermining the gains that society has made in reducing pollution overall."

Risks to human health

The findings have implications for human health because wildfire smoke has been linked to significant respiratory problems, and it may also affect the cardiovascular system and worsen pregnancy outcomes.

Buchholz and her co-authors used an NCAR-based computer model, the Community Atmosphere Model with a chemistry component, to simulate the movement of emissions from the Pacific Northwest fires and their impact on carbon monoxide, ozone, and fine particulate matter. They ran the simulations on the Cheyenne supercomputer at the NCAR-Wyoming Supercomputing Center. The results showed the pollutants could affect more than 130 million people, including about 34 million in the Pacific Northwest, 23 million in the Central U.S., and 72 million in the Northeast.

Although the study did not delve deeply into the health implications of the emissions, the authors looked at respiratory death rates in Colorado for the month of August from 2002 to 2011, compared with the same month in 2012 to 2018. They chose Colorado, located in the central U.S. region of the study, because respiratory death rates in the state were readily obtainable.

They found that Colorado respiratory deaths in August increased significantly during the 2012-2018 period, when fires in the Pacific Northwest -- but not in Colorado -- produced more emissions in August.

Read more at Science Daily

Pterosaur discovery solves ancient feather mystery

An international team of palaeontologists has discovered remarkable new evidence that pterosaurs, the flying relatives of dinosaurs, were able to control the colour of their feathers using melanin pigments.

The study, published in the journal Nature, was led by University College Cork (UCC) palaeontologists Dr Aude Cincotta and Prof. Maria McNamara and Dr Pascal Godefroit from the Royal Belgian Institute of Natural Sciences, with an international team of scientists from Brazil and Belgium.

The new study is based on analyses of a new 115-million-year-old fossilized headcrest of the pterosaur Tupandactylus imperator from north-eastern Brazil. Pterosaurs lived side by side with dinosaurs, 230 to 66 million years ago.

This species of pterosaur is famous for its bizarre huge headcrest. The team discovered that the bottom of the crest had a fuzzy rim of feathers, with short wiry hair-like feathers and fluffy branched feathers.

"We didn't expect to see this at all," said Dr Cincotta. "For decades palaeontologists have argued about whether pterosaurs had feathers. The feathers in our specimen close off that debate for good as they are very clearly branched all the way along their length, just like birds today."

The team then studied the feathers with high-powered electron microscopes and found preserved melanosomes -- granules of the pigment melanin. Unexpectedly, the new study shows that the melanosomes in different feather types have different shapes.

"In birds today, feather colour is strongly linked to melanosome shape." said Prof. McNamara. "Since the pterosaur feather types had different melanosome shapes, these animals must have had the genetic machinery to control the colours of their feathers. This feature is essential for colour patterning and shows that coloration was a critical feature of even the very earliest feathers."

Read more at Science Daily

Apr 20, 2022

Astronomers discover micronovae, a new kind of stellar explosion

A team of astronomers, with the help of the European Southern Observatory's Very Large Telescope (ESO's VLT), have observed a new type of stellar explosion -- a micronova. These outbursts happen on the surface of certain stars, and can each burn through around 3.5 billion Great Pyramids of Giza of stellar material in only a few hours.

"We have discovered and identified for the first time what we are calling a micronova," explains Simone Scaringi, an astronomer at Durham University in the UK who led the study on these explosions published today in Nature. "The phenomenon challenges our understanding of how thermonuclear explosions in stars occur. We thought we knew this, but this discovery proposes a totally new way to achieve them," he adds.

Micronovae are extremely powerful events, but are small on astronomical scales; they are much less energetic than the stellar explosions known as novae, which astronomers have known about for centuries. Both types of explosions occur on white dwarfs, dead stars with a mass about that of our Sun, but as small as Earth.

A white dwarf in a two-star system can steal material, mostly hydrogen, from its companion star if they are close enough together. As this gas falls onto the very hot surface of the white dwarf star, it triggers the hydrogen atoms to fuse into helium explosively. In novae, these thermonuclear explosions occur over the entire stellar surface. "Such detonations make the entire surface of the white dwarf burn and shine brightly for several weeks," explains co-author Nathalie Degenaar, an astronomer at the University of Amsterdam, the Netherlands.

Micronovae are similar explosions that are smaller in scale and faster, lasting just several hours. They occur on some white dwarfs with strong magnetic fields, which funnel material towards the star's magnetic poles. "For the first time, we have now seen that hydrogen fusion can also happen in a localised way. The hydrogen fuel can be contained at the base of the magnetic poles of some white dwarfs, so that fusion only happens at these magnetic poles," says Paul Groot, an astronomer at Radboud University in the Netherlands and co-author of the study.

"This leads to micro-fusion bombs going off, which have about one millionth of the strength of a nova explosion, hence the name micronova," Groot continues. Although 'micro' may imply these events are small, do not be mistaken: just one of these outbursts can burn through about 20,000,000 trillion kg, or about 3.5 billion Great Pyramids of Giza, of material.*

These new micronovae challenge astronomers' understanding of stellar explosions and may be more abundant than previously thought. "It just goes to show how dynamic the Universe is. These events may actually be quite common, but because they are so fast they are difficult to catch in action," Scaringi explains.

The team first came across these mysterious micro-explosions when analysing data from NASA's Transiting Exoplanet Survey Satellite (TESS). "Looking through astronomical data collected by NASA's TESS, we discovered something unusual: a bright flash of optical light lasting for a few hours. Searching further, we found several similar signals," says Degenaar.

The team observed three micronovae with TESS: two were from known white dwarfs, but the third required further observations with the X-shooter instrument on ESO's VLT to confirm its white dwarf status.

"With help from ESO's Very Large Telescope, we found that all these optical flashes were produced by white dwarfs," says Degenaar. "This observation was crucial in interpreting our result and for the discovery of micronovae," Scaringi adds.

Read more at Science Daily

Researchers take step toward developing 'electric eye'

Georgia State University researchers have successfully designed a new type of artificial vision device that incorporates a novel vertical stacking architecture and allows for greater depth of color recognition and scalability on a micro-level. The new research is published in the top journal ACS Nano.

"This work is the first step toward our final destination-to develop a micro-scale camera for microrobots," says assistant professor of Physics Sidong Lei, who led the research. "We illustrate the fundamental principle and feasibility to construct this new type of image sensor with emphasis on miniaturization."

Lei's team was able to lay the groundwork for the biomimetic artificial vision device -- which uses synthetic methods to mimic biochemical processes -- with nanotechnology.

"It is well-known that more than 80 percent of the information is captured by vision in research, industry, medication, and our daily life," he says. "The ultimate purpose of our research is to develop a micro-scale camera for microrobots that can enter narrow spaces that are intangible by current means, and open up new horizons in medical diagnosis, environmental study, manufacturing, archaeology, and more."

This biomimetic "electric eye" advances color recognition, the most critical vision function, which has been missing from previous research because of the difficulty of downscaling the prevailing color-sensing devices. Conventional color sensors typically adopt a lateral color-sensing channel layout, which consumes a large amount of physical space and offers less accurate color detection.

The researchers developed a unique stacking technique that offers a novel approach to the hardware design. Lei says the van der Waals semiconductor-empowered vertical color-sensing structure offers precise color recognition capability, which can simplify the design of the optical lens system and aid the downscaling of artificial vision systems.

Ningxin Li, a graduate student in Dr. Lei's Functional Materials Studio who was part of the research team, says recent advancements in technology make the new design possible.

"The new functionality achieved in our image sensor architecture all depends on the rapid progress of van der Waals semiconductors during recent years," says Li. "Compared with conventional semiconductors, such as silicon, we can precisely control the van der Waals material band structure, thickness, and other critical parameters to sense the red, green, and blue colors."

Van der Waals semiconductors (vdW-Ss) represent a newly emerged class of materials in which individual atomic layers are bonded by weak van der Waals forces. They constitute one of the most prominent platforms for discovering new physics and designing next-generation devices.

"The ultra-thinness, mechanical flexibility, and chemical stability of these new semiconductor materials allow us to stack them in arbitrary orders. So, we are actually introducing a three-dimensional integration strategy in contrast to the current planar micro-electronics layout. The higher integration density is the main reason why our device architecture can accelerate the downscaling of cameras," Li says.

The technology currently is patent pending with Georgia State's Office of Technology Transfer & Commercialization (OTTC). OTTC anticipates this new design will be of high interest to certain industry partners. "This technology has the potential to overcome some of the key drawbacks seen with current sensors," says OTTC Director Cliff Michaels. "As nanotechnology advances and devices become more compact, these smaller, highly sensitive color sensors will be incredibly useful."

Researchers believe the discovery could even spawn advancements to help the vision-impaired one day.

"This technology is crucial for the development of biomimetic electronic eyes and also other neuromorphic prosthetic devices," says Li. "High-quality color sensing and image recognition function may bring new possibilities of colorful item perception for the visually impaired in the future."

Read more at Science Daily

Impact of family background on children's education unchanged in a century, research reveals

The family background of UK children still influences their educational achievements at primary school as much as it did nearly one hundred years ago, a major new study has revealed.

The study, by the University of York, looked at data from 92,000 individuals born between 1921 and 2011 and revealed that the achievement gap between children from impoverished family backgrounds and their more privileged peers has remained stagnant.

This gap accounted for half a grade difference at primary school level, and the impact of family background persists and grows throughout the school years. Previous research suggests that by GCSE year the effect of family background on school performance is more than three times as large, accounting for a 1.75-grade difference.

The enduring impact of family background on success in education perpetuates social and economic inequalities across generations, the researchers say. They are calling for educational policies which prioritise equality in learning outcomes for children over equality in opportunities.

Lead author of the study, Professor Sophie von Stumm from the Department of Education at the University of York, said: "Our study shows for the first time that despite the efforts of policy makers and educators, children from impoverished backgrounds, whether born in 1921 or the modern day, face the same prospect of earning lower grades and obtaining fewer educational opportunities than children from wealthier backgrounds.

"We are calling for educational interventions that ensure the weakest students get the most support, as policies promoting equal learning opportunities only work if all children are equally well prepared to take advantage of them.

"For example, we know that children from low socioeconomic family backgrounds tend to start school with poorer language skills than their better-off peers. This early disadvantage makes it more difficult for them to utilise the learning opportunities that that school offers. In turn, children from impoverished families earn lower grades in primary and secondary school, and ultimately, they earn fewer educational qualifications than children from wealthier backgrounds."

The study looked at data provided by large cohort studies up until 2016. The researchers caution that the pandemic is likely to have intensified the link between family socioeconomic status and children's school performance because it increased inequality in families' access to resources.

Co-author of the study, Professor Paul Wakeling from the Department of Education at the University of York, said: "There was rightly much public scrutiny of inequalities in GCSE grades during the pandemic. However, our findings highlight how important it is to consider inequalities in earlier years of schooling. The impacts could be felt for years to come."

Professor von Stumm added: "Children growing up in low socioeconomic family homes during the pandemic were disproportionately affected by school closures, with a lack of access to online learning and suitable learning environments."

Read more at Science Daily

In the race to solve Alzheimer's disease, scientists find more needles in the haystack

21 million. That's the number of genetic variations in the human genome that researchers are sifting through to identify patterns predisposing people to Alzheimer's disease.

It's a huge haystack, and Alzheimer's-related genetic variations, like needles, are minuscule in comparison. Sudha Seshadri, MD, and other faculty at The University of Texas Health Science Center at San Antonio (UT Health San Antonio) readily attest to the deep gulf between what is known about Alzheimer's genetics and what is yet to be discovered.

Dr. Seshadri, Habil Zare, PhD, and colleagues at the university's Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases are investigators on a global project to answer the many Alzheimer's riddles. Dr. Seshadri is a founding principal investigator of the International Genomics of Alzheimer's Project, commonly called IGAP. Glenn Biggs Institute faculty contributed data for the newest research from IGAP, published April 4 in Nature Genetics, and helped craft the discussion on implications of the findings, Dr. Seshadri said.

Large sample

Genomic data of half a million people were used in this latest IGAP study, including 30,000 people with confirmed Alzheimer's disease and 47,000 people categorized as proxies. Researchers could not be sure that proxy participants had Alzheimer's clinically, but they were included based on conversations with their children.

"In Alzheimer's disease research you need many samples, because some of these variants are very rare, and if you want to detect them, you need to study many, many people," said Dr. Zare, assistant professor of cell systems and anatomy in the Joe R. and Teresa Lozano Long School of Medicine and an expert in computational biology and bioinformatics. "The only way to get there is through collaboration between centers and consortia, and IGAP was established for such kind of collaboration."

IGAP conducts genome-wide association studies. These studies reveal areas of the genome, the encyclopedia of human genes, that vary between people who have Alzheimer's disease and people who don't.

"We are looking for the genetic basis so as to better understand all the different types of biology that may be responsible for Alzheimer's disease," said Dr. Seshadri, founding director of the Biggs Institute and professor of neurology in the Long School of Medicine. "As we include data from more and more people, we are able to find variants that are fairly rare, that are only seen in about 1% of the population."

Sea change

In 2009, when the first genome-wide association studies of Alzheimer's disease were published, researchers knew of one gene, called APOE, associated with late-onset Alzheimer's disease. Before the April 4 journal publication, researchers had a list of 40 such genes. The new paper confirmed 33 of them in a larger population sample and added 42 genetic variants not previously described.

"We've doubled the number of genes that we know are associated with Alzheimer's disease," Dr. Seshadri said. "Each of these genetic variants is a route to understanding the biology and a potential target for treatment."

Emerging pathways of Alzheimer's biology suggest the involvement of inflammation, cell senescence, central nervous system cells called microglia, and many others. Finding genetic variations will shed light on these pathways.

"A certain percentage of them are what are called druggable targets," Dr. Zare said. "Some are considered more likely to yield drugs."

Diversity needed

The study published in Nature Genetics is confined to certain population groups, which makes it impossible to generalize the gene variants worldwide.

"One of the challenges with this paper, as well, is that it is largely in persons of European ancestry," Dr. Seshadri said. "So, we hope to bring, over the next few years, a much larger sample of Hispanic and other minority populations to further improve gene discovery."

The South Texas Alzheimer's Disease Research Center (ADRC), a collaboration of the Glenn Biggs Institute, UT Health San Antonio and The University of Texas Rio Grande Valley, is on a mission to bring the region's sizable Hispanic population into genetic studies and other initiatives such as clinical trials. ADRCs are National Institute on Aging Centers of Excellence.

Older Hispanic adults are estimated to be at 1.5 times greater risk of Alzheimer's and other dementias than non-Hispanic whites. Dementia is costing individuals, caregivers, families and the nation an estimated $321 billion in 2022, according to the Alzheimer's Association.

"Our South Texas ADRC is here to treat people and make discoveries that lead to better treatments," Dr. Seshadri said.

The needles in the haystack are being located, and the search is yielding results.

"We are part of this international team and are finding a lot of needles in this huge haystack of 21 million variants," Dr. Zare said.

Partners are crucial

Dr. Seshadri said a gene called SP1 is being considered for drug development by industry. SP1 was identified in an earlier study conducted by IGAP.

"That was a clue discovered years ago and now we have more clues, and hopefully we will have more promising targets in the near future," Dr. Zare said.

As the quest to end the suffering endured by individuals and families continues, the researchers acknowledge the partners who play significant roles.

"We would like to thank each of the collaborators within IGAP, and all the patients and families that join such studies, and the National Institute on Aging, which is our funder," Dr. Seshadri said.

Read more at Science Daily

Apr 19, 2022

Explanation for formation of abundant features on Europa bodes well for search for extraterrestrial life

Europa is a prime candidate for life in our solar system, and its deep saltwater ocean has captivated scientists for decades. But it's enclosed by an icy shell that could be miles to tens of miles thick, making sampling it a daunting prospect. Now, increasing evidence reveals the ice shell may be less of a barrier and more of a dynamic system -- and site of potential habitability in its own right.

Ice-penetrating radar observations that captured the formation of a "double ridge" feature in Greenland suggest the ice shell of Europa may have an abundance of water pockets beneath similar features that are common on the surface. The findings, which appear in Nature Communications April 19, may be promising for detecting potentially habitable environments within the icy shell of the Jovian moon.

"Because it's closer to the surface, where you get interesting chemicals from space, other moons and the volcanoes of Io, there's a possibility that life has a shot if there are pockets of water in the shell," said study senior author Dustin Schroeder, an associate professor of geophysics at Stanford University's School of Earth, Energy & Environmental Sciences (Stanford Earth). "If the mechanism we see in Greenland is how these things happen on Europa, it suggests there's water everywhere."

A terrestrial analog

On Earth, researchers analyze polar regions using airborne geophysical instruments to understand how the growth and retreat of ice sheets might impact sea-level rise. Much of that study area occurs on land, where the flow of ice sheets is subject to complex hydrology -- such as dynamic subglacial lakes, surface melt ponds and seasonal drainage conduits -- that contributes to uncertainty in sea-level predictions.

Because a land-based subsurface is so different from Europa's subsurface ocean of liquid water, the study co-authors were surprised when, during a lab group presentation about Europa, they noticed that formations that streak the icy moon looked extremely similar to a minor feature on the surface of the Greenland ice sheet -- an ice sheet that the group has studied in detail.

"We were working on something totally different related to climate change and its impact on the surface of Greenland when we saw these tiny double ridges -- and we were able to see the ridges go from 'not formed' to 'formed,'?" Schroeder said.

Upon further examination, they found that the "M"-shaped crest in Greenland known as a double ridge could be a miniature version of the most prominent feature on Europa.

Prominent and prevalent

Double ridges on Europa appear as dramatic gashes across the moon's icy surface, with crests reaching nearly 1,000 feet, separated by valleys about a half-mile wide. Scientists have known about the features since the moon's surface was photographed by the Galileo spacecraft in the 1990s but have not been able to produce a definitive explanation of how they formed.

Through analyses of surface elevation data and ice-penetrating radar collected from 2015 to 2017 by NASA's Operation IceBridge, the researchers revealed how the double ridge on northwest Greenland was produced when the ice fractured around a pocket of pressurized liquid water that was refreezing inside of the ice sheet, causing two peaks to rise into the distinct shape.

"In Greenland, this double ridge formed in a place where water from surface lakes and streams frequently drains into the near-surface and refreezes," said lead study author Riley Culberg, a PhD student in electrical engineering at Stanford. "One way that similar shallow water pockets could form on Europa might be through water from the subsurface ocean being forced up into the ice shell through fractures -- and that would suggest there could be a reasonable amount of exchange happening inside of the ice shell."

Snowballing complexity

Rather than behaving like a block of inert ice, the shell of Europa seems to undergo a variety of geological and hydrological processes -- an idea supported by this study and others, including evidence of water plumes that erupt to the surface. A dynamic ice shell supports habitability because it facilitates exchange between the subsurface ocean and the nutrients from neighboring celestial bodies that accumulate on the surface.

"People have been studying these double ridges for over 20 years now, but this is the first time we were actually able to watch something similar on Earth and see nature work out its magic," said study co-author Gregor Steinbrügge, a planetary scientist at NASA's Jet Propulsion Laboratory (JPL) who started working on the project as a postdoctoral researcher at Stanford. "We are making a much bigger step into the direction of understanding what processes actually dominate the physics and the dynamics of Europa's ice shell."

The co-authors said their explanation for how the double ridges form is so complex, they couldn't have conceived it without the analog on Earth.

"The mechanism we put forward in this paper would have been almost too audacious and complicated to propose without seeing it happen in Greenland," Schroeder said.

The findings equip researchers with a radar signature for quickly detecting this process of double ridge formation using ice-penetrating radar, which is among the instruments currently planned for exploring Europa from space.

Read more at Science Daily

No glacial fertilization effect in the Antarctic Ocean

Changes in the concentration of atmospheric carbon dioxide (CO2) are considered to be the main cause of past and future climate change. A long-standing debate centers on whether the roughly 30 percent lower CO2 content of the ice-age atmosphere was caused by iron fertilization. The argument is that iron-rich dust carried into the ocean by wind and water stimulates the growth of algae, which absorb more CO2. As the algae die and sink permanently into the depths of the ocean, the CO2 remains trapped there with them. Although there is clear evidence that dust input increased during the ice ages, the fertilization effect is controversial, at least for the Antarctic Ocean.

In a recent study, an international team of 38 researchers from 13 countries led by Dr. Michael Weber from the Institute for Geosciences at the University of Bonn investigated this question. As part of the International Ocean Discovery Program (IODP), the team traveled to the Scotia Sea on the drillship "JOIDES Resolution" and spent two months in 2019 bringing up cores from the seafloor at depths of 3,000 to 4,000 meters. Weber: "We collected the highest-resolution and longest climate archive ever obtained near Antarctica and its main dust source, Patagonia."

1.5 million years of climate history

In the 200-meter-long deep-sea core U1537, the climate history of the last 1.5 million years was recorded in detail. This nearly doubles the length of the reconstructed dust-input record, since Antarctic ice cores cover only the last 800,000 years. The new deep-ocean records show that dust deposition during the ice ages was five to 15 times higher, which is also reflected in the ice cores.

However, the researchers found no evidence of a fertilization effect from dust in the Antarctic Ocean during the ice ages. Rather, algal productivity, and thus CO2 sequestration, was high only during warm periods, when dust input into the Scotia Sea was low. This means that during cold periods other processes prevented the CO2 captured in the ocean from escaping into the atmosphere and triggering warming. The main factors are much more extensive sea ice cover, stronger stratification of the ocean, and weaker current systems, which together contributed to the reduced CO2 content of the atmosphere during cold periods.

The opposing trends in dust deposition and oceanic productivity during the ice ages and interglacials of the Pleistocene were accompanied by long-term, gradual changes in the climate system of the southern polar region. Bioproductivity was particularly high during the interglacials of the last 400,000 years, but during the mid-Pleistocene transition, 1.2 million to 700,000 years ago, it differed little from that of cold periods. As the transition progressed, dust input covered ever larger areas of the Southern Hemisphere. Further abrupt changes occurred 900,000 years ago, indicating greater glaciation of Antarctica.

Read more at Science Daily

How air pollution alters lung tissue, increasing cancer susceptibility

Scientists have identified a mechanism that explains how fine air pollution particles might cause lung cancer, according to a study published today in eLife.

The findings could lead to new approaches for preventing or treating the initial lung changes that lead to the disease.

Tiny, inhalable fine particulate matter (FPM) found in air pollutants has been recognised as a Group 1 carcinogen and a substantial threat to global health. However, the cancer-causing mechanism of FPM remains unclear.

"Despite its potential to cause mutations, recent research suggests that FPM does not directly promote -- and may even inhibit -- the growth of lung cancer cells," explains first author Zhenzhen Wang, an associate researcher at Nanjing University (NJU), Nanjing, China, who carried out the study between labs at NJU and the University of Macau where she was sponsored by a University of Macau Fellowship. "This suggests that FPM might lead to cancer through indirect means that support tumour growth. For example, some studies suggest FPM can prevent immune cells from moving to where they are needed."

To explore this possibility, Wang and the team collected FPM from seven locations in China and studied its effects on the main immune cells that defend against tumour growth -- called cytotoxic T-cells (CTLs). In mice given lung cancer cells but whose lungs were not exposed to FPM, CTLs were recruited to the lung to destroy the tumour cells. By contrast, in mice whose lungs were exposed to FPM, the infiltration of CTLs was delayed -- potentially allowing the tumour cells to establish themselves in lung tissue.

To investigate why the CTLs did not enter FPM-exposed lungs as quickly, the team studied both the CTLs themselves and the lung tissue structure. They found that CTLs exposed to FPM retained their migratory ability, but that FPM exposure dramatically compressed the lung tissue structure and the spaces that immune cells move through. There were also much higher levels of collagen -- a protein that provides biomechanical support for cells and tissues. When the team studied the movement of CTLs in the mice, those in lung tissue exposed to FPM struggled to move, whereas those in untreated tissue moved freely.

Further analysis of the tissue showed that the structural changes were caused by increases in a collagen subtype called collagen IV, but the team still did not know how FPM triggered this. They found the answer when they looked more closely at the structural changes to collagen IV and the enzyme responsible for making them -- called peroxidasin. This enzyme drives a specific type of cross-linking that exposure to FPM was found to cause and aggravate in the lung tissue.

"The most surprising find was the mechanism by which this process occurred," Wang says. "The peroxidasin enzyme stuck to the FPM in the lung, which increased its activity. Taken together, this means that wherever FPM lands in the lung, increased peroxidasin activity leads to structural changes in the lung tissue that can keep immune cells out and away from growing tumour cells."

Read more at Science Daily

Bioengineers visualize fat storage in fruit flies

For the first time, researchers have visually monitored, in high resolution, the timing and location of fat storage within the intact cells of fruit flies. The new optical imaging tool from the lab of bioengineering professor Lingyan Shi at the University of California San Diego is already being used to untangle often-discussed, yet mysterious, links between diet and things like obesity, diabetes and aging. The work from bioengineers at the UC San Diego Jacobs School of Engineering is published in the journal Aging Cell.

The optical microscopy platform developed by the UC San Diego bioengineers is unique. It allows the researchers to visually track, in high resolution within fat cells, how specific dietary changes affect the way flies turn the energy from their food into fat. The tool also allows the researchers to monitor the reverse process of changing fat back into energy. In addition, the researchers can now visually monitor changes in size in individual fat-storage "containers" within the class of fruit fly cells that is analogous to mammalian fat (adipose) cells.

In the new paper in Aging Cell, the researchers demonstrated the ability to visually track changes in fat (lipid) metabolism in flies after they were put on a wide range of different diets. The diets included calorie-restricted diets, high protein diets, and diets with twice, four-times, and ten-times the sugar of a standard diet.

"With our new optical microscopy system, we can see both where and when fats are being put into storage and taken out of storage," said Shi, the bioengineering professor at UC San Diego who is the corresponding senior author on the new paper. "This is the first imaging technology that can visualize fat metabolism at high resolution in both space and time within individual fat cells. We have demonstrated that we can see both where and when lipid metabolism changes within individual fruit fly fat body cells in response to dietary changes."

"Interest in optimizing the human diet is intense," Shi continued. "People want answers to questions like, 'What are the best diets to slow aging? What are the best diets for losing weight? What are the best diets for extending health span?' I don't yet have answers to these questions, but in my lab, we develop new technologies that are getting us closer to answering some of the big dietary questions out there."

In the new work in Aging Cell, for example, the researchers report a new way to answer questions like:

How much does a specific diet, such as a high-protein diet, or a high-sugar diet, or a calorie-restricted diet, alter a fruit fly's process of turning energy from food into fat? And how much do these same diets affect a fruit fly's process of turning fat back into energy?

"We developed this tool to help us untangle the relationships between diet and phenomena like obesity, diabetes, aging, and longevity," said Shi.

Tracking the size of fat droplets within intact fruit fly cells is one example of what's possible with the new visualization platform.

"Droplet size is a way to track how much of the stored fat is 'turning over' or getting converted back into energy. This is an important aspect of lipid metabolism, and we now have a tool that allows us to track changes in the size of specific lipid droplets within individual cells of fruit flies," said Yajuan Li, MD. PhD, who is a postdoctoral researcher in the Shi lab at UC San Diego and the first author on the paper in Aging Cell.

Heavy water

The new visualization platform builds on some of Shi's earlier work using a variation on regular water called heavy water (D2O). Heavy water is, literally, heavier than regular water. Heavy water molecules contain one oxygen atom, like regular water. But in place of the pair of hydrogen atoms -- the "H2" in "H2O" -- heavy water contains a pair of heavier deuterium atoms.

Like "regular" water, heavy water is freely incorporated into cells in living organisms. So when the researchers provide heavy water to a fruit fly, and then that fruit fly begins to convert energy from its food into fat molecules to be stored, some of those fat molecules contain deuterium. In this way, the prevalence of deuterium atoms in lipids stored within the fat cells of fruit flies provides a way to measure how much fat that fly has stored.

By changing a fly's diet at the same time that you introduce heavy water, you have a way to monitor how the diet changes lipid turnover. More details on how the system works are in this 2021 profile, in which Shi said, "When we are developing a new technology, a new tool, it will definitely inspire us to ask new biological questions."

Read more at Science Daily

Apr 18, 2022

With dwindling water supplies, the timing of rainfall matters

A new UC Riverside study shows it's not how much extra water you give your plants, but when you give it that counts.

This is especially true near Palm Springs, where the research team created artificial rainfall to examine the effects on plants over the course of two years. This region has both winter and summer growing seasons, both of which are increasingly impacted by drought and, occasionally, extreme rain events.

Normally, some desert wildflowers and grasses begin growing in December, and are dead by June. A second community of plants sprouts in July and flowers in August. These include the wildflowers that make for an extremely popular tourist attraction in "super bloom" years.

"We wanted to understand whether one season is more sensitive to climate change than another," said Marko Spasojevic, UCR plant ecologist and lead study author. "If we see an increase or decrease in summer rains, or winter rains, how does that affect the ecosystem?"

The team observed that in summer, plants grow more when given extra water, in addition to any natural rainfall. However, the same was not true in winter.

"Essentially, adding water in summer gets us more bang for our buck," Spasojevic said.

Their findings are described in a paper published in the University of California journal Elementa.

Over the course of the study, the team observed 24 plots of land at the Boyd Deep Canyon Desert Research Center, in the Palm Desert area. Some of the plots got whatever rain naturally fell. Others were covered and allowed to receive rain only in one season. A third group of plots received additional collected rainwater.

While adding water in summer resulted in higher plant biomass, it generally did not increase the diversity of plants that grew, the researchers noted. Decreasing rainfall, in contrast, had negative effects on plants across both summer and winter, but may lead to some increased growth in the following off-seasons.
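The design lends itself to a straightforward statistical comparison. As an illustration only -- the biomass numbers below are invented, and the paper's actual analysis is richer, crossing treatment with season -- a one-way comparison of the three plot types might look like this in Python:

```python
# Illustrative comparison of summer biomass across the three plot types.
# All values are made-up placeholders, not data from the Elementa paper.
from scipy.stats import f_oneway

ambient   = [4.1, 3.8, 4.5, 4.0]   # natural rainfall only
reduced   = [2.9, 3.1, 2.7, 3.0]   # rain excluded in one season
augmented = [5.6, 6.0, 5.8, 6.2]   # extra collected rainwater added

f_stat, p_value = f_oneway(ambient, reduced, augmented)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
# A small p-value would indicate biomass differs among watering treatments.
```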

Implications of the work extend beyond learning when additional water resources might be applied simply to help plants grow. Whole communities of animals depend on these plants. They are critical for pollinators such as bees and butterflies, and they play a big role in controlling erosion and movement of soils by wind.

"Studies like this one are critical for understanding the complex effects of climate change to dryland ecosystems," said Darrel Jenerette, UCR landscape ecologist and study co-author.

Desert plants also play an important role in removing carbon dioxide and nitrogen from the atmosphere to use as fuel for growth. Microbes that live in the soil can use the carbon and nitrogen released by plant roots, then send it back into the atmosphere where it can affect the climate.

"Drylands cover roughly a third of the land surface, so even small changes in the way they take in and emit carbon or nitrogen could have a big impact on our atmosphere," said Peter Homyak, UCR environmental scientist and study co-author.

As the team continues this research over the next few years, they expect to see changes in soil carbon and nitrogen cycling, given that plants are already being affected by changes in seasonal rainfall, as this study shows.

Read more at Science Daily

Neural network model helps predict site-specific impacts of earthquakes

In disaster mitigation planning for future large earthquakes, seismic ground motion predictions are a crucial part of early warning systems and seismic hazard mapping. How the ground moves depends on how the soil layers amplify the seismic waves, described mathematically by a site "amplification factor." However, the geophysical surveys needed to characterize soil conditions are costly, which has limited the characterization of site amplification factors to date.

A new study by researchers from Hiroshima University published on April 5 in the Bulletin of the Seismological Society of America introduced a novel artificial intelligence (AI)-based technique for estimating site amplification factors from data on ambient vibrations or microtremors of the ground.

Subsurface soil conditions, which determine how earthquakes affect a site, vary substantially. Softer soils, for example, tend to amplify ground motion from an earthquake, while hard substrates may dampen it. Ambient vibrations of the ground, or microtremors, occur all over the Earth's surface, caused by human activity or atmospheric disturbances, and can be used to investigate soil conditions. Measuring microtremors provides valuable information about the amplification factor (AF) of a site, and thus its vulnerability to damage from earthquakes.

The recent study from Hiroshima University researchers introduced a new way to estimate site effects from microtremor data. "The proposed method would contribute to more accurate and more detailed seismic ground motion predictions for future earthquakes," says lead author and associate professor Hiroyuki Miura in the Graduate School of Advanced Science and Engineering. The study investigated the relationship between microtremor data and site amplification factors using a deep neural network with the goal of developing a model that could be applied at any site worldwide.

The researchers examined a common method based on the microtremor horizontal-to-vertical spectral ratio (MHVR), which is usually used to estimate the resonant frequency of the ground at a site. The MHVR is generated from microtremor data: ambient seismic vibrations are analyzed in three dimensions to determine the resonant frequency of the sediment layers vibrating on top of bedrock. Previous research has shown, however, that the MHVR cannot reliably be used directly as the site amplification factor. So this study proposed a deep neural network model for estimating site amplification factors from the MHVR data.

The study used 2012-2020 microtremor data from 105 sites in the Chugoku district of western Japan. The sites are part of Japan's national seismograph network that contains about 1700 observation stations distributed in a uniform grid at 20 km intervals across Japan. Using a generalized spectral inversion technique, which separates out the parameters of source, propagation, and site, the researchers analyzed site-specific amplifications.

Data from each site were divided into a training set, a validation set, and a test set. The training set was used to teach the deep neural network. The validation set was used in the network's iterative optimization of a model describing the relationship between the microtremor MHVRs and the site amplification factors. The test data were a completely unknown set used to evaluate the performance of the model.
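In outline, this is standard supervised learning. The sketch below shows the general shape of such a model in Python with scikit-learn; the array sizes, network architecture, and log-space transform are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: learn a mapping from microtremor H/V spectral
# ratios (MHVR) to site amplification factors. Shapes and hyperparameters
# are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_sites, n_freqs = 105, 50                    # e.g., 105 sites, 50 frequency bins
X = rng.lognormal(size=(n_sites, n_freqs))    # stand-in MHVR spectra
y = rng.lognormal(size=(n_sites, n_freqs))    # stand-in amplification factors

# Hold out a test set the model never sees, as in the study
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Spectra are often modeled in log space; a small feedforward net suffices here
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(np.log(X_train), np.log(y_train))

# On real MHVR/AF pairs a skillful model pushes R^2 toward 1; on this
# random stand-in data the score will hover near zero or below.
print("held-out R^2:", model.score(np.log(X_test), np.log(y_test)))
```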

The model performed well on the test data, demonstrating its potential as a predictive tool for characterizing site amplification factors from microtremor data. However, notes Miura, "the number of training samples analyzed in this study (80 sites) is still limited," and should be expanded before assuming that the neural network model applies nationwide or globally. The researchers hope to further optimize the model with a larger dataset.

Rapid and cost-effective techniques like this are needed for more accurate seismic ground motion prediction. Explains Miura, "By applying the proposed method, site amplification factors can be automatically and accurately estimated from microtremor data observed at an arbitrary site." Going forward, the study authors aim to continue refining advanced AI techniques to evaluate the nonlinear responses of the ground to earthquakes, since the ground's response to strong shaking is not always linear.

Read more at Science Daily

Extract from a common kitchen spice could be key to greener, more efficient fuel cells

Turmeric, a spice found in most kitchens, has an extract that could lead to safer, more efficient fuel cells.

Researchers at the Clemson Nanomaterials Institute (CNI) and their collaborators from the Sri Sathya Sai Institute of Higher Learning (SSSIHL) in India discovered a novel way to combine curcumin -- the substance in turmeric -- and gold nanoparticles to create an electrode that requires 100 times less energy to efficiently convert ethanol into electricity.

While the research team must do more testing, the discovery brings the replacement of hydrogen as a fuel cell feedstock one step closer.

"Of all the catalysts for alcohol oxidation in alkaline medium, the one we prepared is the best so far," said Apparao Rao, CNI's founding director and the R. A. Bowen Professor of Physics in the College of Science's.

Fuel cells generate electricity through a chemical reaction instead of combustion. They are used to power vehicles, buildings, portable electronic devices and backup power systems.

Hydrogen fuel cells are highly efficient and do not produce greenhouse gases. But while hydrogen is the most common chemical element in the universe, it occurs naturally on Earth only in compound form with other elements in liquids, gases or solids, so it must be derived from substances such as natural gas and other fossil fuels. The necessary extraction adds to hydrogen fuel cells' cost and environmental impact.

In addition, hydrogen used in fuel cells is a compressed gas, creating challenges for storage and transportation. Ethanol, an alcohol made from corn or other agricultural-based feeds, is safer and easier to transport than hydrogen because it is a liquid.

"To make it a commercial product where we can fill our tanks with ethanol, the electrodes have to be highly efficient," said Lakshman Ventrapragada, a former student of Rao's who worked as a research assistant at the CNI and is an alumnus of SSSIHL. "At the same time, we don't want very expensive electrodes or synthetic polymeric substrates that are not eco-friendly because that defeats the whole purpose. We wanted to look at something green for the fuel cell generation process and making the fuel cell itself."

The researchers focused on the fuel cell's anode, where the ethanol or other feed source is oxidized.

Fuel cells widely use platinum as a catalyst. But platinum suffers from poisoning because of reaction intermediates such as carbon monoxide, Ventrapragada said. It is also costly.

The researchers used gold as a catalyst instead. And rather than depositing the gold on the electrode surface with conducting polymers, metal-organic frameworks, or other complex materials, they used curcumin because of its structural uniqueness. The curcumin decorates and stabilizes the gold nanoparticles, forming a porous network around them. The researchers deposited the curcumin-stabilized gold nanoparticles on the electrode surface at an electric current 100 times lower than in previous studies.

Without the curcumin coating, the gold nanoparticles agglomerate, cutting down on the surface area exposed to the chemical reaction, Ventrapragada said.

"Without this curcumin coating, the performance is poor," Rao said. "We need this coating to stabilize and create a porous environment around the nanoparticles, and then they do a super job with alcohol oxidation.

"There's a big push in the industry for alcohol oxidation. This discovery is an excellent enabler for that. The next step is to scale the process up and work with an industrial collaborator who can actually make the fuel cells and build stacks of fuel cells for the real application," he continued.

But the research could have broader implications than improved fuel cells. The electrode's unique properties could lend themselves to future applications in sensors, supercapacitors and more, Ventrapragada said.

In collaboration with the SSSIHL research team, Rao's team is testing the electrode as a sensor that could help identify changes in the level of dopamine. Dopamine has been implicated in disorders such as Parkinson's disease and attention deficit hyperactivity disorder. When members of the research team tested urine samples from healthy volunteers, they could measure dopamine within the approved clinical range with this electrode, using a method more cost-effective than the standard ones in use today, Rao said.

Read more at Science Daily

Tumors partially destroyed with sound don't come back

Noninvasive sound technology developed at the University of Michigan breaks down liver tumors in rats, kills cancer cells and spurs the immune system to prevent further spread -- an advance that could lead to improved cancer outcomes in humans.

With only 50% to 75% of the liver tumor volume destroyed, the rats' immune systems were able to clear away the rest, with no evidence of recurrence or metastases in more than 80% of the animals.

"Even if we don't target the entire tumor, we can still cause the tumor to regress and also reduce the risk of future metastasis," said Zhen Xu, professor of biomedical engineering at U-M and corresponding author of the study in Cancers.

Results also showed the treatment stimulated the rats' immune responses, possibly contributing to the eventual regression of the untargeted portion of the tumor and preventing further spread of the cancer.

The treatment, called histotripsy, noninvasively focuses ultrasound waves to mechanically destroy target tissue with millimeter precision. The relatively new technique is currently being used in a human liver cancer trial in the United States and Europe.

In many clinical situations, the entirety of a cancerous tumor cannot be targeted directly in treatments for reasons that include the mass' size, location or stage. To investigate the effects of partially destroying tumors with sound, this latest study targeted only a portion of each mass, leaving behind a viable intact tumor. It also allowed the team, including researchers at Michigan Medicine and the Ann Arbor VA Hospital, to show the approach's effectiveness under less than optimal conditions.

"Histotripsy is a promising option that can overcome the limitations of currently available ablation modalities and provide safe and effective noninvasive liver tumor ablation," said Tejaswi Worlikar, a doctoral student in biomedical engineering. "We hope that our learnings from this study will motivate future preclinical and clinical histotripsy investigations toward the ultimate goal of clinical adoption of histotripsy treatment for liver cancer patients."

Liver cancer ranks among the top 10 causes of cancer-related deaths worldwide and in the U.S. Even with multiple treatment options, the prognosis remains poor, with five-year survival rates below 18% in the U.S. The high prevalence of tumor recurrence and metastasis after initial treatment highlights the clinical need to improve outcomes for liver cancer.

Where a typical ultrasound uses sound waves to produce images of the body's interior, U-M engineers have pioneered the use of those waves for treatment. And their technique works without the harmful side effects of current approaches such as radiation and chemotherapy.

"Our transducer, designed and built at U-M, delivers high amplitude microsecond-length ultrasound pulses -- acoustic cavitation -- to focus on the tumor specifically to break it up," Xu said. "Traditional ultrasound devices use lower amplitude pulses for imaging."

The microsecond-long pulses from U-M's transducer generate microbubbles within the targeted tissues -- bubbles that rapidly expand and collapse. These violent but extremely localized mechanical stresses kill cancer cells and break up the tumor's structure.

Since 2001, Xu's laboratory at U-M has pioneered the use of histotripsy in the fight against cancer, leading to the clinical trial #HOPE4LIVER sponsored by HistoSonics, a U-M spinoff company. More recently, the group's research has produced promising results on histotripsy for brain therapy and immunotherapy.

Read more at Science Daily

Apr 17, 2022

Changes in vegetation shaped global temperatures over last 10,000 years

Follow the pollen. Records from past plant life tell the real story of global temperatures, according to research from a climate scientist at Washington University in St. Louis.

Warmer temperatures brought plants -- and then came even warmer temperatures, according to new model simulations published April 15 in Science Advances.

Alexander Thompson, a postdoctoral research associate in earth and planetary sciences in Arts & Sciences, updated simulations from an important climate model to reflect the role of changing vegetation as a key driver of global temperatures over the last 10,000 years.

Thompson had long been troubled by a problem with models of Earth's atmospheric temperatures since the last ice age. Too many of these simulations showed temperatures warming consistently over time.

But climate proxy records tell a different story. Many of those sources indicate a marked peak in global temperatures that occurred between 6,000 and 9,000 years ago.

Thompson had a hunch that the models could be overlooking the role of changes in vegetation in favor of impacts from atmospheric carbon dioxide concentrations or ice cover.

"Pollen records suggest a large expansion of vegetation during that time," Thompson said.

"But previous models only show a limited amount of vegetation growth," he said. "So, even though some of these other simulations have included dynamic vegetation, it wasn't nearly enough of a vegetation shift to account for what the pollen records suggest."

In reality, the changes to vegetative cover were significant.

Early in the Holocene, the current geological epoch, the Sahara Desert in Africa grew greener than today -- it was more of a grassland. Other Northern Hemisphere vegetation including the coniferous and deciduous forests in the mid-latitudes and the Arctic also thrived.

Thompson took evidence from pollen records and designed a set of experiments with a climate model known as the Community Earth System Model (CESM), one of the best-regarded models in a wide-ranging class of such models. He ran simulations to account for a range of changes in vegetation that had not been previously considered.

"Expanded vegetation during the Holocene warmed the globe by as much as 1.5 degrees Fahrenheit," Thompson said. "Our new simulations align closely with paleoclimate proxies. So this is exciting that we can point to Northern Hemisphere vegetation as one potential factor that allows us to resolve the controversial Holocene temperature conundrum."

Understanding the scale and timing of temperature change throughout the Holocene is important because it is a period of recent history, geologically speaking. The rise of human agriculture and civilization occurred during this time, so many scientists and historians from different disciplines are interested in understanding how early and mid-Holocene climate differed from the present day.

Read more at Science Daily