Oct 9, 2021

Dragonflies likely migrate across the Indian Ocean

Can dragonflies migrate thousands of miles across the Indian Ocean, from India via the Maldives to Africa, and back again? An international research team led by Lund University in Sweden has used models and simulations to find out if the hypothesis could be true.

In 2009, marine biologist Charles Anderson put forward a hypothesis after observing globe skimmer dragonflies (Pantala flavescens) in the Maldives that had flown in from what he assumed was India. When they flew off again, it was towards East Africa. Now, 12 years later, a group of researchers decided to investigate his claim.

Globe skimmer dragonflies are too small to be fitted with transmitters. Instead, the researchers examined the species' physiology and calculated how long a globe skimmer dragonfly could stay airborne on the energy it can store in its body. In addition, the researchers used meteorological wind models to determine whether there are winds that could facilitate the migration in both directions.

"Our study shows that this migration from India to East Africa is actually possible. However, the globe skimmer dragonfly can't manage it using only the fat it can store in its body. It also requires favourable winds and these are present during certain periods of the year," says Johanna Hedlund, a biology researcher at Lund University.

According to the simulated migration experiments using wind models, about 15 per cent of the dragonflies could manage the migration from India to Africa in the spring. In the autumn, 40 per cent could make the same journey in the opposite direction.

Johanna Hedlund and her colleagues consider it impressive that dragonflies can do this at all. Even more impressive is the fact that the globe skimmer dragonfly migration across the Indian Ocean is the longest in the animal kingdom in relation to an animal's size.

"We have got a lot closer to solving the mystery of how a tiny dragonfly, which only weighs 300 milligrams, can cross 2,000 kilometres of open sea," says Johanna Hedlund.

Other animals also rely on favourable wind conditions when they migrate. Two examples are the Amur falcon and the Jacobin cuckoo, which also fly across the Indian Ocean. The researchers behind the study warn that climate change may affect the prospects of these birds and the globe skimmer dragonfly in the future. There is a risk that wind patterns will change as the water surface warms.

Read more at Science Daily

Neuroscientists roll out first comprehensive atlas of brain cells

When you clicked to read this story, a band of cells across the top of your brain sent signals down your spine and out to your hand to tell the muscles in your index finger to press down with just the right amount of pressure to activate your mouse or track pad.

A slew of new studies now shows that the area of the brain responsible for initiating this action -- the primary motor cortex, which controls movement -- has as many as 116 different types of cells that work together to make this happen.

The 17 studies, appearing online Oct. 6 in the journal Nature, are the result of five years of work by a huge consortium of researchers supported by the National Institutes of Health's Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative to identify the myriad of different cell types in one portion of the brain. It is the first step in a long-term project to generate an atlas of the entire brain to help understand how the neural networks in our head control our body and mind and how they are disrupted in cases of mental and physical problems.

"If you think of the brain as an extremely complex machine, how could we understand it without first breaking it down and knowing the parts?" asked cellular neuroscientist Helen Bateup, a University of California, Berkeley, associate professor of molecular and cell biology and co-author of the flagship paper that synthesizes the results of the other papers. "The first page of any manual of how the brain works should read: Here are all the cellular components, this is how many of them there are, here is where they are located and who they connect to."

Individual researchers have previously identified dozens of cell types based on their shape, size, electrical properties and which genes are expressed in them. The new studies identify about five times more cell types, though many are subtypes of well-known cell types. For example, cells that release specific neurotransmitters, like gamma-aminobutyric acid (GABA) or glutamate, each have more than a dozen subtypes distinguishable from one another by their gene expression and electrical firing patterns.

While the current papers address only the motor cortex, the BRAIN Initiative Cell Census Network (BICCN) -- created in 2017 -- endeavors to map all the different cell types throughout the brain, which consists of more than 160 billion individual cells, both neurons and support cells called glia. The BRAIN Initiative was launched in 2013 by then-President Barack Obama.

"Once we have all those parts defined, we can then go up a level and start to understand how those parts work together, how they form a functional circuit, how that ultimately gives rise to perceptions and behavior and much more complex things," Bateup said.

Together with former UC Berkeley professor John Ngai, Bateup and UC Berkeley colleague Dirk Hockemeyer have already used CRISPR-Cas9 to create mice in which a specific cell type is labeled with a fluorescent marker, allowing them to track the connections these cells make throughout the brain. For the flagship journal paper, the Berkeley team created two strains of "knock-in" reporter mice that provided novel tools for illuminating the connections of the newly identified cell types, she said.

"One of our many limitations in developing effective therapies for human brain disorders is that we just don't know enough about which cells and connections are being affected by a particular disease and therefore can't pinpoint with precision what and where we need to target," said Ngai, who led UC Berkeley's Brain Initiative efforts before being tapped last year to direct the entire national initiative. "Detailed information about the types of cells that make up the brain and their properties will ultimately enable the development of new therapies for neurologic and neuropsychiatric diseases."

Ngai is one of 13 corresponding authors of the flagship paper, which has more than 250 co-authors in all.

Bateup, Hockemeyer and Ngai collaborated on an earlier study to profile all the active genes in single dopamine-producing cells in the mouse's midbrain, which has structures similar to those in the human brain. This same profiling technique, which involves identifying all the specific messenger RNA molecules and their levels in each cell, was employed by other BICCN researchers to profile cells in the motor cortex. This type of analysis, using a technique called single-cell RNA sequencing, or scRNA-seq, is referred to as transcriptomics.
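As a concrete, heavily simplified illustration of what a transcriptomic workflow looks like in practice -- this is a generic sketch, not the BICCN pipeline, and it uses a public blood-cell dataset bundled with the scanpy library as a stand-in for motor cortex data -- cells can be clustered by their RNA profiles roughly as follows:

```python
# Minimal sketch of transcriptomic cell typing (not the BICCN pipeline).
# Assumes the scanpy and leidenalg packages are installed; the public
# PBMC dataset stands in for motor-cortex data.
import scanpy as sc

adata = sc.datasets.pbmc3k()                       # cells x genes count matrix

sc.pp.filter_cells(adata, min_genes=200)           # drop empty or damaged cells
sc.pp.filter_genes(adata, min_cells=3)             # drop genes seen in too few cells
sc.pp.normalize_total(adata, target_sum=1e4)       # library-size normalization
sc.pp.log1p(adata)                                 # variance-stabilizing transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

sc.tl.pca(adata, n_comps=50)                       # reduce dimensionality
sc.pp.neighbors(adata, n_neighbors=15)             # cell-cell similarity graph
sc.tl.leiden(adata, resolution=1.0)                # graph clustering -> putative cell types

print(adata.obs["leiden"].value_counts())          # number of cells per cluster
```

Each resulting cluster is a candidate cell type whose marker genes can then be compared against morphology, electrophysiology and epigenomic data.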

The scRNA-seq technique was one of nearly a dozen separate experimental methods used by the BICCN team to characterize the different cell types in three different mammals: mice, marmosets and humans. Four of these involved different ways of identifying gene expression levels and determining the genome's chromatin architecture and DNA methylation status, which is called the epigenome. Other techniques included classical electrophysiological patch clamp recordings to distinguish cells by how they fire action potentials, categorizing cells by shape, determining their connectivity, and looking at where the cells are spatially located within the brain. Several of these used machine learning or artificial intelligence to distinguish cell types.

"This was the most comprehensive description of these cell types, and with high resolution and different methodologies," Hockemeyer said. "The conclusion of the paper is that there's remarkable overlap and consistency in determining cell types with these different methods."

A team of statisticians combined data from all these experimental methods to determine how best to classify or cluster cells into different types and, presumably, different functions based on the observed differences in expression and epigenetic profiles among these cells. While there are many statistical algorithms for analyzing such data and identifying clusters, the challenge was to determine which clusters were truly different from one another -- truly different cell types -- said Sandrine Dudoit, a UC Berkeley professor and chair of the Department of Statistics. She and biostatistician Elizabeth Purdom, UC Berkeley associate professor of statistics, were key members of the statistical team and co-authors of the flagship paper.

"The idea is not to create yet another new clustering method, but to find ways of leveraging the strengths of different methods and combining methods and to assess the stability of the results, the reproducibility of the clusters you get," Dudoit said. "That's really a key message about all these studies that look for novel cell types or novel categories of cells: No matter what algorithm you try, you'll get clusters, so it is key to really have confidence in your results."

Bateup noted that the number of individual cell types identified in the new study depended on the technique used and ranged from dozens to 116. One finding, for example, was that humans have about twice as many different types of inhibitory neurons as excitatory neurons in this region of the brain, while mice have five times as many.

"Before, we had something like 10 or 20 different cell types that had been defined, but we had no idea if the cells we were defining by their patterns of gene expression were the same ones as those defined based on their electrophysiological properties, or the same as the neuron types defined by their morphology," Bateup said.

"The big advance by the BICCN is that we combined many different ways of defining a cell type and integrated them to come up with a consensus taxonomy that's not just based on gene expression or on physiology or morphology, but takes all of those properties into account," Hockemeyer said. "So, now we can say this particular cell type expresses these genes, has this morphology, has these physiological properties, and is located in this particular region of the cortex. So, you have a much deeper, granular understanding of what that cell type is and its basic properties."

Dudoit cautioned that future studies could show that the number of cell types identified in the motor cortex is an overestimate, but the current studies are a good start in assembling a cell atlas of the whole brain.

"Even among biologists, there are vastly different opinions as to how much resolution you should have for these systems, whether there is this very, very fine clustering structure or whether you really have higher level cell types that are more stable," she said. "Nevertheless, these results show the power of collaboration and pulling together efforts across different groups. We're starting with a biological question, but a biologist alone could not have solved that problem. To address a big challenging problem like that, you want a team of experts in a bunch of different disciplines that are able to communicate well and work well with each other."

Read more at Science Daily

Oct 8, 2021

Chang'e-5 samples reveal key age of moon rocks

A lunar probe launched by the Chinese space agency recently brought back the first fresh samples of rock and debris from the moon in more than 40 years. Now an international team of scientists -- including an expert from Washington University in St. Louis -- has determined the age of these moon rocks at close to 1.97 billion years old.

"It is the perfect sample to close a 2-billion-year gap," said Brad Jolliff, the Scott Rudolph Professor of Earth and Planetary Sciences in Arts & Sciences and director of the university's McDonnell Center for the Space Sciences. Jolliff is a U.S.-based co-author of an analysis of the new moon rocks led by the Chinese Academy of Geological Sciences, published Oct. 7 in the journal Science.

The age determination is among the first scientific results reported from the successful Chang'e-5 mission, which was designed to collect and return to Earth rocks from some of the youngest volcanic surfaces on the moon.

"Of course, 'young' is relative," Jolliff said. "All of the volcanic rocks collected by Apollo were older than 3 billion years. And all of the young impact craters whose ages have been determined from the analysis of samples are younger than 1 billion years. So the Chang'e-5 samples fill a critical gap."

The gap that Jolliff references is important not only for studying the moon, but also for studying other rocky planets in the solar system.

As a planetary body, the moon itself is about 4.5 billion years old, almost as old as the Earth. But unlike the Earth, the moon doesn't have the erosive or mountain-building processes that tend to erase craters over the years. Scientists have taken advantage of the moon's enduring craters to develop methods of estimating the ages of different regions on its surface, based in part on how pocked by craters the area appears to be.

This study shows that the moon rocks returned by Chang'e-5 are only about 2 billion years old. Knowing the age of these rocks with certainty, scientists are now able to more accurately calibrate their important chronology tools, Jolliff said.

"Planetary scientists know that the more craters on a surface, the older it is; the fewer craters, the younger the surface. That's a nice relative determination," Jolliff said. "But to put absolute age dates on that, one has to have samples from those surfaces."

"The Apollo samples gave us a number of surfaces that we were able to date and correlate with crater densities," Jolliff explained. "This cratering chronology has been extended to other planets -- for example, for Mercury and Mars -- to say that surfaces with a certain density of craters have a certain age."

"In this study, we got a very precise age right around 2 billion years, plus or minus 50 million years," Jolliff said. "It's a phenomenal result. In terms of planetary time, that's a very precise determination. And that's good enough to distinguish between the different formulations of the chronology."

Other interesting findings from the study relate to the composition of basalts in the returned samples and what that means for the moon's volcanic history, Jolliff noted.

The results presented in the Science paper are just the tip of the iceberg, so to speak. Jolliff and colleagues are now sifting through the regolith samples for keys to other significant lunar science issues, such as finding bits and pieces tossed into the Chang'e-5 collection site from distant, young impact craters such as Aristarchus, and possibly determining the ages of these small rocks and the nature of the materials at those other impact sites.

Jolliff has worked with the scientists at the Sensitive High Resolution Ion MicroProbe (SHRIMP) Center in Beijing that led this study, including study co-author Dunyi Liu, for over 15 years. This long-term relationship is possible through a special collaboration agreement that includes Washington University and its Department of Earth and Planetary Sciences, and Shandong University in Weihai, China, with support from Washington University's McDonnell Center for the Space Sciences.

"The lab in Beijing where the new analyses were done is among the best in the world, and they did a phenomenal job in characterizing and analyzing the volcanic rock samples," Jolliff said.

"The consortium includes members from China, Australia, the U.S., the U.K. and Sweden," Jolliff continued. "This is science done in the ideal way: an international collaboration, with free sharing of data and knowledge -- and all done in the most collegial way possible. This is diplomacy by science."

Jolliff is a specialist in mineralogy and provided his expertise for this study of the Chang'e-5 samples. His personal research background is focused on the moon and Mars, the materials that make up their surfaces and what they tell about the planets' history.

Read more at Science Daily

Unprecedented rise of heat and rainfall extremes in observational data

Scientists have found in observational data a 90-fold increase in the frequency of monthly heat extremes over the past ten years compared with 1951-1980. Their analysis reveals that so-called 3-sigma heat events, which deviate strongly from what is normal in a given region, now affect on average about 9 percent of all land area at any time. Record daily rainfall events also increased in a non-linear way -- on average, 1 in 4 rainfall records in the last decade can be attributed to climate change. Extreme events linked to human-caused climate change are already at unprecedented levels, the scientists say, and they must be expected to increase further.

"For extreme extremes, what we call 4-sigma-events that have been virtually absent before, we even see a roughly 1000-fold increase compared to the reference period. They affected about 3 percent of global land area in 2011-20 in any month," says lead-author Alexander Robinson from Complutense University of Madrid, Spain, and Potsdam Institute for Climate Impact Research, Germany. "This confirms previous findings, yet with ever-increasing numbers. We are seeing extremes now which are virtually impossible without the influence of global warming caused by greenhouse gas emissions from burning fossil fuels." The term 'sigma' refers to what scientists call a standard deviation.

For example, 2020 brought prolonged heat waves to both Siberia and Australia, contributing to the emergence of devastating wildfires in both regions. Both events led to the declaration of a local state of emergency. Temperatures at life-threatening levels have hit parts of the US and Canada in 2021, reaching almost 50°C. Globally, the record-breaking heat extremes increased most in tropical regions, since these normally have a low variability of monthly temperatures. As temperatures continue to rise, however, record-breaking heat will also become much more common in mid- and high-latitude regions.

1 in 4 rainfall records is attributable to climate change

Daily rainfall records have also increased. Compared to what would be expected in a climate without global warming, the number of wet records increased by about 30 percent. This implies that 1 in 4 records is attributable to human-caused climate change. The physical basis for this is the Clausius-Clapeyron relation, which states that air can hold about 7 percent more moisture per degree Celsius of warming.
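Quantitatively, the Clausius-Clapeyron relation gives the fractional increase of saturation vapour pressure with temperature; the numbers below are a standard textbook estimate, not values taken from the study:

```latex
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L_v}{R_v T^{2}}
\approx \frac{2.5\times10^{6}\ \mathrm{J\,kg^{-1}}}{461\ \mathrm{J\,kg^{-1}\,K^{-1}}\times(288\ \mathrm{K})^{2}}
\approx 0.065\ \mathrm{K^{-1}}
```

That is, roughly 6 to 7 percent more water vapour per degree Celsius of warming at typical surface temperatures.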

Importantly, already-dry regions such as western North America and South Africa have seen a reduction in rainfall records, while wet regions such as central and northern Europe have seen a strong increase. Generally, increasing rainfall extremes do not help to alleviate drought problems.

Small temperature increase, disproportionately big consequences

Compared with the already quite extreme previous decade of 2000-2010, the data show that the land area affected by heat extremes of the 3-sigma category roughly doubled. The 4-sigma events, deviations so strong that they were previously essentially absent, have newly emerged in the observations. Rainfall records increased by a further 5 percentage points in the last decade. The seemingly small amount of warming over the past ten years, just 0.25°C, has thus pushed up climate extremes substantially.

Read more at Science Daily

Colorblind fish show experts how vision evolved

After decades of studying color vision in mice, new research in zebrafish has allowed experts at the University of Tokyo to uncover how some animals regulate their ability to see blue light. The results, published in Science Advances, allow researchers to better understand the evolutionary history and current control mechanisms of color vision.

"In 1989 when I began studying the evolution of vision, the textbooks said that light sensitivity and color differentiation all came from the same protein. Since then, our group identified color-sensitive proteins, mapped their evolution between species, and now understand their regulation," said Emeritus Professor Yoshitaka Fukada from the University of Tokyo Graduate School of Science.

As new color-sensitive cone cells grow in the eye, controlled patterns of gene activity mean that each cell differentiates and produces one type of protein specialized to detect a specific range of light wavelengths. The ancestor of all animals with a backbone could differentiate four different color wavelengths of light: near-ultraviolet, blue, green and red.

Over millennia, some ancestor species lost the genes responsible for one or two of those color-detecting proteins. Sometimes, a descendant species eventually recreated a color-specific protein by duplicating, then mutating a remaining gene.

Genome sequencing allows researchers to study the evolution of color vision genes while gene editing tools can reveal how those genes are regulated. Studying mice has allowed experts to understand how violet- and red-wavelength sensitivity are regulated, but mice evolved without the ability to differentiate the blue and green wavelengths. Lack of convenient gene editing tools meant regulation of blue and green color sensitivity remained unknown.

In 2019, Fukada's research team, now led by Lecturer Daisuke Kojima, combined relatively new gene editing tools with color vision studies in zebrafish, a species with all four color-sensitive proteins. Microscope images of normal zebrafish retinas, the light-sensitive membranes lining their eyeballs and connected to their brains by the optic nerves, show a vibrant arrangement of fluorescently labeled cone cells in a distinct pattern of violet-, green-, red-, blue-, red-, green- and violet-detecting cells.

Researchers first identified three genes -- six6b, six7, and foxq2 -- common only in species with all four color vision proteins. Then, they genetically modified zebrafish to reduce the activity of those genes.

Previously, the UTokyo researchers observed that reducing expression of six6b and six7 -- either in combination or individually -- eliminated both blue and green vision in zebrafish. Zebrafish without blue and green vision had difficulty finding food, indicating the importance of full-color vision for their survival.

Their most recently published results reveal how blue and green sensitivities are distinguished by differing foxq2 activity. In cone cells that will detect blue light, six6b and six7 activate foxq2. Then foxq2 activates gene expression of the blue-sensitive protein and blocks expression of green-sensitive proteins. Retinas of zebrafish lacking normal foxq2 gene expression do not have cone cells sensitive to blue light, instead packing together a shorter pattern of violet-, green-, then two red-, green- and violet-detecting cone cells.

The combination of molecular genetic studies in single species with comparative genomic studies of multiple species gives researchers additional confidence in their map of color vision regulation.

Read more at Science Daily

A 'cousin' of Viagra reduces obesity by stimulating cells to burn fat

Researchers at Johns Hopkins Medicine have found that a drug first developed to treat Alzheimer's disease, schizophrenia and sickle cell disease reduces obesity and fatty liver in mice and improves their heart function -- without changes in food intake or daily activity.

These findings, published online Oct. 7 in the Journal of Clinical Investigation, reveal that a chemical inhibitor of the enzyme PDE9 stimulates cells to burn more fat. This occurred in male mice and in female mice whose sex hormones were reduced by removing their ovaries, thus mimicking menopause. Postmenopausal women are well known to be at increased risk for obesity around their waist as well as at risk for cardiovascular and metabolic disease.

Inhibiting PDE9 did not cause these changes in female mice that had their ovaries, so female sex hormone status was important in the study.

"Currently, there isn't a pill that has been proven effective for treating severe obesity, yet such obesity is a global health problem that increases the risk of many other diseases," says senior investigator David Kass, M.D., Abraham and Virginia Weiss Professor of Cardiology at the Johns Hopkins University School of Medicine. "What makes our findings exciting is that we found an oral medication that activates fat-burning in mice to reduce obesity and fat buildup in organs like the liver and heart that contribute to disease; this is new."

This study follows work reported by the same laboratory in 2015 that first showed the PDE9 enzyme is present in the heart and contributes to heart disease triggered by high blood pressure. Blocking PDE9 increases the amount of a small molecule known as cyclic GMP, which in turn controls many aspects of cell function throughout the body. PDE9 is the enzyme cousin of another protein called PDE5, which also controls cyclic GMP and is blocked by drugs such as Viagra. Inhibitors of PDE9 are experimental, so there is no drug name yet.

Based on these results, the investigators suspected PDE9 inhibition might improve cardiometabolic syndrome (CMS), a constellation of common conditions including high blood pressure; high blood sugar, cholesterol and triglycerides; and excess body fat, particularly around the waist. CMS is considered a pandemic by medical experts and a major risk factor for heart disease, stroke, type 2 diabetes, cancers and COVID-19.

While PDE9 inhibitors remain experimental, they have been developed by several pharmaceutical companies and tested in humans for diseases such as Alzheimer's and sickle cell. The current mouse study used a PDE9 inhibitor made by Pfizer Inc. (PF-04447943) that was first tested for Alzheimer's disease, though eventually abandoned for this use. Between the two reported clinical trials, over 100 subjects received this drug, and it was found to be well tolerated with no serious adverse side effects. A different PDE9 inhibitor is now being tested for human heart failure.

To test the effects of a PDE9 inhibitor on obesity and cardiometabolic syndrome, the researchers put mice on a high-fat diet that led to doubling their body weight, high blood lipids and diabetes after four months. A group of female mice had their ovaries surgically removed, and most of the mice also had a pressure stress applied to the heart to better mimic cardiometabolic syndrome. The mice were then assigned to receive either the PDE9 inhibitor or a placebo by mouth over the next six to eight weeks.

In female mice without their ovaries (a model of postmenopause), the difference in median percent weight change between the drug and placebo groups was -27.5%, and in males it was -19.5%. Lean body mass was not altered in either group, nor was daily food consumption or physical activity. The PDE9 inhibitor lowered blood cholesterol and triglycerides, and reduced fat in the liver to levels found in mice fed a normal diet. The heart also improved with PDE9 inhibition, with ejection fraction (which measures the percentage of blood leaving the heart each time it contracts) relatively higher by 7%-15% and heart mass (hypertrophy) rising 70% less compared with the placebo. An increase in heart mass is evidence of abnormal heart stress. However, having this lowered by the inhibitor indicates stress on the heart was reduced.
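For reference, ejection fraction is defined from the ventricular volumes at the end of filling (end-diastolic volume, EDV) and at the end of contraction (end-systolic volume, ESV); this is the standard clinical definition, not a formula specific to this study:

```latex
\mathrm{EF} = \frac{\mathrm{EDV}-\mathrm{ESV}}{\mathrm{EDV}}\times 100\%
```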

The investigators found that PDE9 inhibition produces these effects by activating a master regulator of fat metabolism known as PPARa. Stimulating PPARa broadly increases the expression of genes for proteins that control the uptake of fat into cells and its use as fuel. When PPARa was blocked in cells or in the whole animal, the effects of PDE9 inhibition on obesity and fat-burning were lost. The team found that estrogen normally plays this fat-regulating role in females, but when estrogen levels fall, as they do after menopause, PPARa becomes more important for regulating fat, so PDE9 inhibition has a greater effect.

"The finding that the experimental drug did not benefit female mice that had their ovaries shows that these sex hormones, particularly estrogen, had already achieved what inhibiting PDE9 does to stimulate fat-burning," notes Sumita Mishra, the research associate who performed much of the work. "Menopause reduces sex hormone levels, and their control over fat metabolism then shifts to the protein regulated by PDE9, so the drug treatment is now effective."

According to the U.S. Centers for Disease Control and Prevention, more than 40% of people living in the U.S. are obese, and 43% of American women over the age of 60 -- long past menopause -- are considered obese.

Kass notes that if his lab's findings in mice apply to people, someone weighing 250 pounds could lose about 50 pounds with an oral PDE9 inhibitor without changing eating or exercise habits.

"I'm not suggesting to be a couch potato and take a pill, but I suspect that combined with diet and exercise, the effects from PDE9 inhibition may be even greater," says Kass. The next step would be testing in humans to see if PDE9 inhibitors produce similar effects in men and postmenopausal women.

Read more at Science Daily

Oct 7, 2021

Dwarf planet Vesta a window to the early solar system

The dwarf planet Vesta is helping scientists better understand the earliest era in the formation of our solar system. Two recent papers involving scientists from the University of California, Davis, use data from meteorites derived from Vesta to resolve the "missing mantle problem" and push back our knowledge of the solar system to just a couple of million years after it began to form. The papers were published in Nature Communications Sept. 14 and Nature Astronomy Sept. 30.

Vesta is the second-largest body in the asteroid belt at 500 kilometers across. It's big enough to have evolved in the same way as rocky, terrestrial bodies like the Earth, moon and Mars. Early on, these were balls of molten rock heated by collisions. Iron and the siderophile, or 'iron-loving,' elements -- such as rhenium, osmium, iridium, platinum and palladium -- sank to the center to form a metallic core, leaving the mantle poor in these elements. As the planet cooled, a thin solid crust formed over the mantle. Later, meteorites brought iron and other elements to the crust.

Most of the bulk of a planet like Earth is mantle. But mantle-type rocks are rare among asteroids and meteorites.

"If we look at meteorites, we have core material, we have crust, but we don't see mantle," said Qing-Zhu Yin, professor of earth and planetary sciences in the UC Davis College of Letters and Science. Planetary scientists have called this the "missing mantle problem."

In the recent Nature Communications paper, Yin and UC Davis graduate students Supratim Dey and Audrey Miller worked with first author Zoltan Vaci at the University of New Mexico to describe three recently discovered meteorites that do include mantle rock: ultramafic rocks in which the mineral olivine is a major component. The UC Davis team contributed precise isotope analyses, creating a fingerprint that allowed them to identify the meteorites as coming from Vesta or a very similar body.

"This is the first time we've been able to sample the mantle of Vesta," Yin said. NASA's Dawn mission remotely observed rocks from the largest south pole impact crater on Vesta in 2011 but did not find mantle rock.

Probing the early solar system


Because it is so small, Vesta formed a solid crust long before larger bodies like the Earth, moon and Mars. So the siderophile elements that accumulated in its crust and mantle form a record of the very early solar system after core formation. Over time, collisions have broken pieces off Vesta that sometimes fall to Earth as meteorites.

Yin's lab at UC Davis had previously collaborated with an international team looking at elements in lunar crust to probe the early solar system. In the second paper, published in Nature Astronomy, Meng-Hua Zhu at the Macau University of Science and Technology, Yin and colleagues extended this work using Vesta.

"Because Vesta formed very early, it's a good template to look at the entire history of the Solar System," Yin said. "This pushes us back to two million years after the beginning of solar system formation."

It had been thought that Vesta and the larger inner planets could have got much of their material from the asteroid belt. But a key finding from the study was that the inner planets (Mercury, Venus, Earth and moon, Mars and inner dwarf planets) got most of their mass from colliding and merging with other large, molten bodies early in the solar system. The asteroid belt itself represents the leftover material of planet formation, but did not contribute much to the larger worlds.

Read more at Science Daily

Extinction and origination patterns change after mass extinctions

Scientists at Stanford University have discovered a surprising pattern in how life reemerges from cataclysm. Research published Oct. 6 in Proceedings of the Royal Society B shows the usual rules of body size evolution change not only during mass extinction, but also during subsequent recovery.

Since the 1980s, evolutionary biologists have debated whether mass extinctions and the recoveries that follow them intensify the selection criteria of normal times -- or fundamentally shift the set of traits that mark groups of species for destruction. The new study finds evidence for the latter in a sweeping analysis of marine fossils from most of the past half-billion years.

Whether and how evolutionary dynamics shift in the wake of global annihilation has "profound implications not only for understanding the origins of the modern biosphere but also for predicting the consequences of the current biodiversity crisis," the authors write.

"Ultimately, we want to be able to look at the fossil record and use it to predict what will go extinct, and more importantly, what comes back," said lead author Pedro Monarrez, a postdoctoral scholar in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "When we look closely at 485 million years of extinctions and recoveries in the world's oceans, there does appear to be a pattern in what comes back based on body size in some groups."

Build back smaller?

The study builds on recent Stanford research that looked at body size and extinction risk among marine animals in groupings known as genera, one taxonomic level above species. That study found that smaller-bodied genera are, on average, equally or more likely than their larger relatives to go extinct.

The new study found this pattern holds true across 10 classes of marine animals for the long stretches of time between mass extinctions. But mass extinctions shake up the rules in unpredictable ways, with extinction risks becoming even greater for smaller genera in some classes, and larger genera losing out in others.

The results show smaller genera in a class known as crinoids -- sometimes called sea lilies or fairy money -- were substantially more likely to be wiped out during mass extinction events. In contrast, no detectable size differences between victims and survivors turned up during "background" intervals. Among trilobites, a diverse group distantly related to modern horseshoe crabs, the chances of extinction decreased very slightly with body size during background intervals -- but increased about eightfold with each doubling of body length during mass extinction.

When they looked beyond the marine genera that died out to consider those that were the first of their kind, the authors found an even more dramatic shift in body size patterns before and after extinctions. During background times, newly evolved genera tend to be slightly larger than those that came before. During recovery from mass extinction, the pattern flips, and it becomes more common for originators in most classes to be tiny compared to holdover species who survived the cataclysm.

Gastropod genera including sea snails are among a few exceptions to the build-back-smaller pattern. Gastropod genera that originated during recovery intervals tended to be larger than the survivors of the preceding catastrophe. Nearly across the board, the authors write, "selectivity on body size is more pronounced, regardless of direction, during mass extinction events and their recovery intervals than during background times."

Think of this as the biosphere's version of choosing starters and benchwarmers based on height and weight more than skill after losing a big match. There may well be a logic to this game plan in the arc of evolution. "Our next challenge is to identify the reasons why so many originators after mass extinction are small," said senior author Jonathan Payne, the Dorrell William Kirby Professor at Stanford Earth.

Scientists don't yet know whether those reasons might relate to global environmental conditions, such as low oxygen levels or rising temperatures, or to factors related to interactions between organisms and their local surroundings, like food scarcity or a dearth of predators. According to Payne, "Identifying the causes of these patterns may help us not only to understand how our current world came to be but also to project the long-term evolutionary response to the current extinction crisis."

Fossil data

This is the latest in a series of papers from Payne's research group that harness statistical analyses and computer simulations to uncover evolutionary dynamics in body size data from marine fossil records. In 2015, the team recruited high school interns and undergraduates to help calculate the body size and volume of thousands of marine genera from photographs and illustrations. The resulting dataset included most fossil invertebrate animal genera known to science and was at least 10 times larger than any previous compilation of fossil animal body sizes.

The group has since expanded the dataset and plumbed it for patterns. Among other results, they've found that larger body size has become one of the biggest determinants of extinction risk for ocean animals for the first time in the history of life on Earth.

For the new study, Monarrez, Payne and co-author Noel Heim of Tufts University used body size data from marine fossil records to estimate the probability of extinction and origination as a function of body size across most of the past 485 million years. By pairing their body size data with occurrence records from the public Paleobiology Database, they were able to analyze 284,308 fossil occurrences for ocean animals belonging to 10,203 genera. "This dataset allowed us to document, in different groups of animals, how evolutionary patterns change when a mass extinction comes along," said Payne.
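The paper's exact statistical model is not reproduced here, but the kind of selectivity estimate quoted earlier -- an eightfold increase in extinction odds per doubling of body length -- can be illustrated with a logistic regression on synthetic data, where the predictor is log2 body size and the exponentiated coefficient is the odds ratio per doubling (all numbers below are made up for illustration):

```python
# Sketch: extinction probability as a function of body size (synthetic data).
# The odds ratio per doubling of size is exp(coefficient on log2 size).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
log2_size = rng.normal(loc=1.0, scale=1.5, size=5000)   # hypothetical log2 body lengths

true_or_per_doubling = 8.0                               # assumed effect, as in the trilobite example
logit = -3.0 + np.log(true_or_per_doubling) * log2_size
extinct = rng.random(5000) < 1 / (1 + np.exp(-logit))    # simulated extinction outcomes

X = sm.add_constant(log2_size)
fit = sm.Logit(extinct.astype(int), X).fit(disp=0)
print("estimated odds ratio per doubling:", np.exp(fit.params[1]))
```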

Future recovery

Other paleontologists have observed that smaller-bodied animals become more common in the fossil record following mass extinctions -- often calling it the "Lilliput Effect," after the kingdom of tiny people in Jonathan Swift's 18th-century novel Gulliver's Travels.

Findings in the new study suggest animal physiology offers a plausible explanation for this pattern. The authors found the classic shrinking pattern in most classes of marine animals with low activity levels and slower metabolism. Species in these groups that first evolved right after a mass extinction tended to have smaller bodies than those that originated during background intervals. In contrast, when new species evolved in groups of more active marine animals with faster metabolism, they tended to have larger bodies in the wake of extinction and smaller bodies during normal times.

The results highlight mass extinction as a drama in two acts. "The extinction part changes the world by removing not just a lot of organisms or a lot of species, but by removing them in various selective patterns. Then, recovery isn't just equal for everyone who survives. A new set of biases go into the recovery pattern," Payne said. "It's only by combining those two that you can really understand the world that we get five or 10 million years after an extinction event."

Read more at Science Daily

Think a census of humans is hard? Try counting their brain cells!

In 2013, the U.S. government began investing $100 million to decipher how the human brain works in a collaborative project called the BRAIN Initiative. Cold Spring Harbor Laboratory (CSHL) and other researchers built tools and set standards for describing all the cells in the brain. On October 7, 2021, the initiative reached a major milestone, publishing a comprehensive census of cell types in the mouse, monkey, and human primary motor cortex in Nature.

The BRAIN Initiative Cell Census Network (BICCN) is the consortium of neuroscientists, computational scientists, physicists, geneticists, and instrument makers within the BRAIN Initiative tasked with counting and mapping all the cells in the brain.

Z. Josh Huang, an adjunct professor at CSHL, leads one branch of the BICCN that includes five principal investigators from CSHL and researchers from other institutions. His lab outlined ways to classify new cell subtypes within the mouse forebrain based on their shapes, connections, and the genes they use.

CSHL Professor Partha Mitra and other CSHL collaborators taught a computer to recognize different parts of neurons, then mapped the cells onto a topological world to see how those neurons are likely to connect.

CSHL Associate Professor Jesse Gillis' lab developed a statistics-based computer tool to categorize cells based on similarities in their component parts. This program, called MetaNeighbor, uses RNA transcripts (the instructions to build the components) to compare and categorize mammalian brain cells.

CSHL Professor Anthony Zador's lab developed MAPseq to map how different brain cells connect and interact. Several years later, Zador and his team developed BARseq and BARseq2, which can map connections and gene-use in thousands of neurons in a single mouse at single-neuron resolution.

CSHL Associate Professor Pavel Osten leads another branch of the BICCN dedicated to finding anatomical differences between female and male mouse brains. He and his lab developed qBrain, a method that combines brain imaging techniques to map cells and connections of the mouse primary motor cortex in three dimensions.

Read more at Science Daily

Pollution from freight traffic disproportionately impacts communities of color across 52 US cities

In urban areas across the U.S., low-income neighborhoods and communities of color experience an average of 28% more nitrogen dioxide (NO2) pollution than higher-income and majority-white neighborhoods. The disparity is driven primarily by proximity to trucking routes on major roadways, where diesel trucks are emitters of NO2 and other air pollutants.

Nitrogen dioxide is a common air pollutant that can cause a range of health problems, such as chronic respiratory illness and asthma. But it can be difficult to trace.

A new study used high-resolution air pollution data measured with satellites to track NO2 for nearly two years in major cities across the U.S. The researchers then paired the pollution data with both demographic data and metrics that analyze the degree of racial segregation in a community.

Cities with bigger populations tended to have larger disparities in NO2 pollution between low-income neighborhoods of color and high-income white neighborhoods, according to the study. Phoenix, Los Angeles and Newark, N.J., have the highest NO2 inequalities, all with a discrepancy in NO2 exposure of over 40%.

Both commuter traffic and heavy-duty trucks contribute NO2 and other pollutants, but diesel trucks are the dominant source, contributing on average up to half of a city's NO2 despite being at most 5% of traffic. Because diesel trucks also emit other harmful gases and particulates, changes in NO2 are also thought to reflect exposure to other pollutants as well.
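The study's methodology is not reproduced here, but the kind of headline metric it reports -- a percent difference in NO2 exposure between demographic groups -- can be computed as a population-weighted average over neighbourhoods. The sketch below uses made-up tract values and hypothetical column names purely for illustration:

```python
# Sketch: population-weighted NO2 exposure disparity between two groups
# across hypothetical census tracts (all values are made up).
import pandas as pd

tracts = pd.DataFrame({
    "no2_ppb":     [18.0, 22.5, 14.0, 25.0, 12.5],   # tract-mean NO2
    "pop_group_a": [4000, 6000, 1500, 7000, 1000],   # e.g. low-income residents of color
    "pop_group_b": [2000, 1000, 5000,  800, 6000],   # e.g. higher-income white residents
})

def weighted_exposure(df, pop_col):
    """Average NO2 experienced by a group, weighted by where its members live."""
    return (df["no2_ppb"] * df[pop_col]).sum() / df[pop_col].sum()

exp_a = weighted_exposure(tracts, "pop_group_a")
exp_b = weighted_exposure(tracts, "pop_group_b")
print(f"relative disparity: {100 * (exp_a - exp_b) / exp_b:.1f}%")
```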

The findings are detailed in the AGU journal Geophysical Research Letters, which publishes high-impact, short-format reports with immediate implications spanning all Earth and space sciences.

"One of the novel things we looked at was the integration of segregation metrics and air quality. Previously, we had been limited in our ability to address air pollution inequality, but with improvements in satellite resolution we are now able to get spatially and temporally continuous data at finer resolutions within cities," said Angelique Demetillo, an atmospheric chemist at the University of Virginia and lead author of the study.

Measuring pollutants like NO2 is difficult to do at a scale that's useful to local policymakers. While previous studies have shown disparities in air quality, the new findings offer near-daily pollution data at small scales, providing important quantitative information policymakers can use to guide zoning and public health and that can reflect the lived experience of community members.

The new study found that a 60% drop in heavy trucking on weekends results in a 40% decrease in air pollution inequality. That can point policymakers to a clear emissions-reducing target.

"In terms of environmental justice, one of the things we have lacked is these observations across an entire city that also have temporal variability that we can use to understand the sources [of pollution]," said Sally Pusede, an atmospheric chemist at the University of Virginia who co-authored the study.

"We have these new data and methodologies that continue to show us what we already know through experience, but in the U.S., it's [quantitative] data that informs policy," said Regan Patterson, a transportation and public health expert at the Congressional Black Caucus Foundation.

Bigger city, bigger disparity

Transitioning to electric heavy-duty trucks could be one way of reducing pollution exposure in neighborhoods close to highways. California already has a mandate to do this by 2045. But, Pusede pointed out, while emissions from diesel trucks are the biggest driver of exposure inequality, other pollution sources contribute to the problem. "Even if we eliminated emissions from trucking, we would still see inequalities present because there are other sources of inequality."

Discrepancies in exposure to pollution between communities of color and white communities are well-documented. They often stem from zoning practices that result in communities of color forming in less desirable areas or infrastructure like highways being built in close proximity to -- or through -- a neighborhood.

Patterson said over the long term, transformative changes are needed to truly begin to remove NO2 pollution disparity. "How do you rectify the inequities that have literally been built into the environment, where certain groups are more likely to be adjacent to major roadways?" she asked.

Both Patterson and Pusede referenced a bill in the new infrastructure package aimed at physically reconnecting communities by removing freeways as a necessary step toward equalizing air quality. More immediately, Demetillo hoped her study and studies like it will help put air-quality information into the hands of community members.

Read more at Science Daily

Oct 6, 2021

Process leading to supernova explosions and cosmic radio bursts unearthed

A promising method for producing and observing on Earth a process important to black holes, supernova explosions and other extreme cosmic events has been proposed by scientists at Princeton University's Department of Astrophysical Sciences, SLAC National Accelerator Laboratory, and the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL). The process, called quantum electrodynamic (QED) cascades, can lead to supernovas -- exploding stars -- and fast radio bursts that release in milliseconds as much energy as the sun puts out in three days.

First demonstration

The researchers produced the first theoretical demonstration that colliding a laboratory laser with a dense electron beam can produce high-density QED cascades. "We show that what was thought to be impossible is in fact possible," said Kenan Qu, lead author of a paper in Physical Review Letters (PRL) that describes the breakthrough demonstration. "That in turn suggests how previously unobserved collective effects can be probed with existing state-of-the-art laser and electron beam technologies."

The process unfolds in a straightforward manner. Colliding a strong laser pulse with a high-energy electron beam splits the vacuum into high-density electron-positron pairs that begin to interact with one another. This interaction creates what are called collective plasma effects, which influence how the pairs respond as a group to electrical or magnetic fields.

Plasma, the hot, charged state of matter composed of free electrons and atomic nuclei, makes up 99 percent of the visible universe. Plasma fuels fusion reactions that power the sun and stars, a process that PPPL and scientists around the world are seeking to develop on Earth. Plasma processes throughout the universe are strongly influenced by electromagnetic fields.

The PRL paper focuses on the electromagnetic strength of the laser and the energy of the electron beam that the theory brings together to create QED cascades. "We seek to simulate the conditions that create electron-positron pairs with sufficient density that they produce measurable collective effects and see how to unambiguously verify these effects," Qu said.

The tasks called for uncovering the signature of successful plasma creation through a QED process. The researchers found that signature in the shift of a moderately intense laser to a higher frequency when it is sent against the dense electron beam. "That finding solves the joint problem of producing the QED plasma regime most easily and observing it most easily," Qu said. "The amount of the shift varies depending on the density of the plasma and the energy of the pairs."

Beyond current capabilities

Theory previously showed that sufficiently strong lasers or electric or magnetic fields could create QED pairs. But the required magnitudes are so high as to be beyond current laboratory capabilities.

However, "It turns out that current technology in lasers and relativistic beams [that travel near the speed of light], if co-located, is sufficient to access and observe this regime," said physicist Nat Fisch, professor of astrophysical sciences and associate director for academic affairs at PPPL, and a co-author of the PRL paper and principal investigator of the project. "A key point is to use the laser to slow down the pairs so that their mass decreases, thereby boosting their contribution to the plasma frequency and making the collective plasma effects greater," Fisch said. "Co-locating current technologies is vastly cheaper than building super-intense lasers," he said.

This work was funded by grants from the National Nuclear Security Administration and the Air Force Office of Scientific Research. Researchers are now gearing up to test the theoretical findings at SLAC at Stanford University, where a moderately strong laser is being developed and a source of electron beams is already in place. Physicist Sebastian Meuren, a co-author of the paper and a former postdoctoral visitor at PPPL who is now at SLAC, is centrally involved in this effort.

Read more at Science Daily

Early human activities impacted Earth’s atmosphere more than previously known

Several years ago, while analyzing ice core samples from Antarctica's James Ross Island, scientists Joe McConnell, Ph.D., and Nathan Chellman, Ph.D., from DRI, and Robert Mulvaney, Ph.D., from the British Antarctic Survey noticed something unusual: a substantial increase in levels of black carbon that began around the year 1300 and continued to the modern day.

Black carbon, commonly referred to as soot, is a light-absorbing particle that comes from combustion sources such as biomass burning (e.g. forest fires) and, more recently, fossil fuel combustion. Working in collaboration with an international team of scientists from the United Kingdom, Austria, Norway, Germany, Australia, Argentina, and the U.S., McConnell, Chellman, and Mulvaney set out to uncover the origins of the unexpected increase in black carbon captured in the Antarctic ice.

The team's findings, which published this week in Nature, point to an unlikely source: ancient Maori land-burning practices in New Zealand, conducted at a scale that impacted the atmosphere across much of the Southern Hemisphere and dwarfed other preindustrial emissions in the region during the past 2,000 years.

"The idea that humans at this time in history caused such a significant change in atmospheric black carbon through their land clearing activities is quite surprising," said McConnell, research professor of hydrology at DRI who designed and led the study. "We used to think that if you went back a few hundred years you'd be looking at a pristine, pre-industrial world, but it's clear from this study that humans have been impacting the environment over the Southern Ocean and the Antarctica Peninsula for at least the last 700 years."

Tracing the black carbon to its source

To identify the source of the black carbon, the study team analyzed an array of six ice cores collected from James Ross Island and continental Antarctica using DRI's unique continuous ice-core analytical system. The method used to analyze black carbon in ice was first developed in McConnell's lab in 2007.

While the ice core from James Ross Island showed a notable increase in black carbon beginning around the year 1300, with levels tripling over the 700 years that followed and peaking during the 16th and 17th centuries, black carbon levels at sites in continental Antarctica during the same period of time stayed relatively stable.

Andreas Stohl, Ph.D., of the University of Vienna led atmospheric model simulations of the transport and deposition of black carbon around the Southern Hemisphere that supported the findings.

"From our models and the deposition pattern over Antarctica seen in the ice, it is clear that Patagonia, Tasmania, and New Zealand were the most likely points of origin of the increased black carbon emissions starting about 1300," said Stohl.

After the team consulted paleofire records from each of the three regions, only one viable possibility remained: New Zealand, where charcoal records showed a major increase in fire activity beginning about the year 1300. This date also coincides with the estimated arrival of the Maori people and their colonization and subsequent burning of much of New Zealand's forested areas.

This was a surprising conclusion, given New Zealand's relatively small land area and the distance (nearly 4,500 miles) that smoke would have travelled to reach the ice core site on James Ross Island.

Read more at Science Daily

Differences in brain structure between siblings make some more susceptible to developing severe antisocial behavior

Structural differences in the area of the brain responsible for decision making could explain why two siblings living in the same family might differ in their risk of developing conduct disorder.

Psychologists and neuroscientists have long puzzled over why siblings with seemingly the same upbringing and genetic makeup might differ so significantly in terms of their behaviour: how do some young people growing up in families with antisocial or criminal behaviour manage to stay out of trouble?

Researchers at the universities of Bath and Southampton investigated this question by studying different members of the same families -- some with the mental health condition conduct disorder, and some with no behavioural problems.

Conduct disorder is characterised by repetitive patterns of aggressive and antisocial behaviour. It results in substantial personal and financial costs for affected individuals, their families and society in general and is one of the most common reasons for referral to Child and Adolescent Mental Health Services in the UK.

Conduct disorder has a prevalence rate of around 5% among young people aged between 5 and 16, although there is a steep social class gradient: a 2004 survey revealed that almost 40% of looked-after children -- those who had been abused or placed on safeguarding registers -- had conduct disorder. Despite all this, general awareness of the condition remains low and it is not recognised by many psychologists or psychiatrists.

The new study, published today in the journal Psychological Medicine, sought to understand underlying mechanisms which might determine someone's risk of developing the condition. The international team, including Dr Graeme Fairchild at the University of Bath, conducted MRI brain scans on 41 adolescents with conduct disorder, 24 unaffected siblings (who had a brother or sister with conduct disorder but did not show the condition themselves) and 38 typically developing controls with no family history of conduct disorder.

Their analysis found that both young people with conduct disorder and their relatives displayed structural differences in a part of the brain called the inferior parietal cortex. However, there were also structural changes specific to the conduct disorder group, in brain regions responsible for empathy and for cognitive control (inhibiting behaviour), that were not found in the unaffected siblings.

The researchers also found changes in the prefrontal cortex, a brain area involved in planning and decision-making, that were specific to the unaffected sibling group -- which may explain why they are protected from showing antisocial behaviour despite growing up with either environmental or genetic risk factors for conduct disorder. Previous work from the same team found that despite differences in antisocial behaviour between siblings, both those with conduct disorder and their unaffected siblings had difficulties in recognising emotional facial expressions.

Dr Graeme Fairchild from the University of Bath's Department of Psychology explains: "Our study aimed to understand the root causes of conduct disorder, specifically what makes members of the same family differ in their antisocial behaviour and are there genetic risk markers for conduct disorder in the brain.

"This is one of the first family-based studies of conduct disorder and it confirms that the brain is important for distinguishing between members of the same family who are at higher risk of developing antisocial or criminal behaviour.

"Interestingly, whilst our previous work showed common impairments between affected and unaffected siblings in recognising facial expressions, this study suggests that key behavioural differences may be determined by small changes in the part of the brain responsible for executive functioning or decision-making. These differences could make some siblings more prone to risky behaviour and should now be a focus of future study."

Read more at Science Daily

Protecting the ozone layer is delivering vast health benefits

An international agreement to protect the ozone layer is expected to prevent 443 million cases of skin cancer and 63 million cataract cases for people born in the United States through the end of this century, according to new research.

The research team, made up of scientists at the National Center for Atmospheric Research (NCAR), ICF Consulting, and the U.S. Environmental Protection Agency (EPA), focused on the far-reaching impacts of a landmark 1987 treaty known as the Montreal Protocol and later amendments that substantially strengthened it. The agreement phased out the use of chemicals such as chlorofluorocarbons (CFCs) that destroy ozone in the stratosphere.

Stratospheric ozone shields the planet from harmful levels of the Sun's ultraviolet (UV) radiation, protecting life on Earth.

To measure the long-term effects of the Montreal Protocol, the scientists developed a computer modeling approach that enabled them to look to both the past and the future by simulating the treaty's impact on Americans born between 1890 and 2100. The modeling revealed the treaty's effect on stratospheric ozone, the associated reductions in ultraviolet radiation, and the resulting health benefits.

In addition to the number of skin cancer and cataract cases that were avoided, the study also showed that the treaty, as most recently amended, will prevent approximately 2.3 million skin cancer deaths in the U.S.

"It's very encouraging," said NCAR scientist Julia Lee-Taylor, a co-author of the study. "It shows that, given the will, the nations of the world can come together to solve global environmental problems."

The study, funded by the EPA, was published in ACS Earth and Space Chemistry. NCAR is sponsored by the National Science Foundation.

Mounting concerns over the ozone layer

Scientists in the 1970s began highlighting the threat to the ozone layer when they found that CFCs, used as refrigerants and in other applications, release chlorine atoms in the stratosphere that set off chemical reactions that destroy ozone. Concerns mounted the following decade with the discovery of an Antarctic ozone hole.

The loss of stratospheric ozone would be catastrophic, as high levels of UV radiation have been linked to certain types of skin cancer, cataracts, and immunological disorders. The ozone layer also protects terrestrial and aquatic ecosystems, as well as agriculture.

Policy makers responded to the threat with the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer, in which nations agreed to curtail the use of certain ozone-destroying substances. Subsequent amendments strengthened the treaty by expanding the list of ozone-destroying substances (such as halons and hydrochlorofluorocarbons, or HCFCs) and accelerating the timeline for phasing out their use. The amendments were based on input from the scientific community, including a number of NCAR scientists, that was summarized in quadrennial Ozone Assessment reports.

To quantify the impacts of the treaty, the research team built a model known as the Atmospheric and Health Effects Framework. This model, which draws on various data sources about ozone, public health, and population demographics, consists of five computational steps. These simulate past and future emissions of ozone-destroying substances, the impacts of those substances on stratospheric ozone, the resulting changes in ground-level UV radiation, the U.S. population's exposure to UV radiation, and the incidence and mortality of health effects resulting from the exposure.
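To make the chained structure of such a framework concrete, here is a minimal Python sketch that strings together five placeholder steps in the same order described above. Every function name, relationship, and number in it is an illustrative assumption for exposition only, not the actual Atmospheric and Health Effects Framework.

def emissions(year, treaty="amended"):
    """Step 1 (toy): emissions of ozone-destroying substances, arbitrary units."""
    scale = {"none": 1.0, "original": 0.5, "amended": 0.1}[treaty]  # hypothetical treaty effects
    return scale * max(0.0, year - 1960)

def ozone_depletion(emitted):
    """Step 2 (toy): fractional loss of stratospheric ozone."""
    return 0.002 * emitted

def uv_increase(depletion):
    """Step 3 (toy): relative rise in ground-level UV radiation."""
    return 1.1 * depletion

def population_exposure(uv_change, population=330e6):
    """Step 4 (toy): person-weighted excess UV exposure."""
    return uv_change * population

def excess_cases(exposure, cases_per_unit=1e-3):
    """Step 5 (toy): excess skin cancer and cataract cases from that exposure."""
    return cases_per_unit * exposure

for scenario in ("none", "original", "amended"):
    e = emissions(2040, treaty=scenario)
    cases = excess_cases(population_exposure(uv_increase(ozone_depletion(e))))
    print(f"{scenario:>8}: roughly {cases:,.0f} excess cases in 2040 (toy numbers)")

The point of the sketch is only the architecture: each step consumes the previous step's output, so a change in the treaty scenario at step one propagates all the way to health outcomes at step five.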

The results showed UV radiation levels returning to 1980 levels by the mid-2040s under the amended treaty. In contrast, UV levels would have continued to increase throughout this century if the treaty had not been amended, and they would have soared far higher without any treaty at all.

Even with the amendments, the simulations show excess cases of cataracts and various types of skin cancer beginning to occur with the onset of ozone depletion and peaking decades later as the population exposed to the highest UV levels ages. Those born between 1900 and 2040 experience heightened cases of skin cancer and cataracts, with the worst health outcomes affecting those born between about 1950 and 2000.

However, the health impacts would have been far more severe without the treaty, with cases of skin cancer and cataracts rising at an increasingly rapid rate through the century.

"We peeled away from disaster," Lee-Taylor said. "What is eye popping is what would have happened by the end of this century if not for the Montreal Protocol. By 2080, the amount of UV has tripled. After that, our calculations for the health impacts start to break down because we're getting so far into conditions that have never been seen before."

The research team also found that more than half the treaty's health benefits could be traced to the later amendments rather than the original 1987 Montreal Protocol. Overall, the treaty prevented more than 99% of potential health impacts that would have otherwise occurred from ozone destruction. This showed the importance of the treaty's flexibility in adjusting to evolving scientific knowledge, the authors said.

Read more at Science Daily

Oct 5, 2021

Scientists confirm decrease in Pluto’s atmospheric density

When Pluto passed in front of a star on the night of August 15, 2018, a Southwest Research Institute-led team of astronomers had deployed telescopes at numerous sites in the U.S. and Mexico to observe Pluto's atmosphere as it was briefly backlit by the well-placed star. Scientists used this occultation event to measure the overall abundance of Pluto's tenuous atmosphere and found compelling evidence that it is beginning to disappear, refreezing back onto its surface as it moves farther away from the Sun.

The occultation took about two minutes, during which time the star faded from view as Pluto's atmosphere and solid body passed in front of it. The rate at which the star disappeared and reappeared determined the density profile of Pluto's atmosphere.
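For readers curious how the dimming rate encodes the atmosphere, the standard first-order relation for a stellar occultation (written here schematically; it is not necessarily the team's exact retrieval, and it ignores far-limb and focusing terms) links the observed stellar flux \(\phi\) to the refractive bending angle \(\omega\) that the atmosphere imparts to a ray passing at radius \(r\), for an observer at distance \(D\):

\[
\phi(r) \;\approx\; \left|\, 1 + D \,\frac{d\omega}{dr} \,\right|^{-1}
\]

Because \(\omega\) depends on the refractivity, and hence the density, of the gas along the ray, inverting the measured light curve recovers the density profile with altitude.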

"Scientists have used occultations to monitor changes in Pluto's atmosphere since 1988," said Dr. Eliot Young, a senior program manager in SwRI's Space Science and Engineering Division. "The New Horizons mission obtained an excellent density profile from its 2015 flyby, consistent with Pluto's bulk atmosphere doubling every decade, but our 2018 observations do not show that trend continuing from 2015."

Several telescopes deployed near the middle of the shadow's path observed a phenomenon called a "central flash," caused by Pluto's atmosphere refracting light into a region at the very center of the shadow. During an occultation by an object with an atmosphere, the starlight dims gradually as it passes through the atmosphere and then gradually returns, producing a moderate slope on either end of the U-shaped light curve. In 2018, refraction by Pluto's atmosphere created a central flash near the center of its shadow, turning the light curve into a W shape.

"The central flash seen in 2018 was by far the strongest that anyone has ever seen in a Pluto occultation," Young said. "The central flash gives us very accurate knowledge of Pluto's shadow path on the Earth."

Like Earth's, Pluto's atmosphere is predominantly nitrogen. Unlike Earth's, however, it is supported by the vapor pressure of Pluto's surface ices, which means that small changes in surface ice temperature can produce large changes in the bulk density of the atmosphere. Pluto takes 248 Earth years to complete one orbit around the Sun, and its distance from the Sun varies from about 30 astronomical units at its closest point (1 AU is the distance from the Earth to the Sun) to 50 AU.
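The sensitivity described here follows from vapor-pressure equilibrium. Schematically (this is the generic Clausius-Clapeyron relation, not the study's own model), the surface pressure of an atmosphere in balance with its surface ice varies with ice temperature \(T\) as

\[
P(T) \;=\; P_{\mathrm{ref}} \, \exp\!\left[ -\frac{L}{R} \left( \frac{1}{T} - \frac{1}{T_{\mathrm{ref}}} \right) \right],
\]

where \(L\) is the latent heat of sublimation of nitrogen ice and \(R\) the corresponding gas constant. Because temperature sits inside an exponential, cooling the ice by even a degree or two lowers the equilibrium pressure, and with it the bulk density of the atmosphere, by a substantial factor.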

For the past quarter century, Pluto has been receiving less and less sunlight as it moves farther away from the Sun, but, until 2018, its surface pressure and atmospheric density continued to increase. Scientists attributed this to a phenomenon known as thermal inertia.

"An analogy to this is the way the Sun heats up sand on a beach," said SwRI Staff Scientist Dr. Leslie Young, who specializes in modeling the interaction between the surfaces and atmospheres of icy bodies in the outer solar system. "Sunlight is most intense at high noon, but the sand then continues soaking up the heat over course of the afternoon, so it is hottest in late afternoon. The continued persistence of Pluto's atmosphere suggests that nitrogen ice reservoirs on Pluto's surface were kept warm by stored heat under the surface. The new data suggests they are starting to cool."

Read more at Science Daily

Extreme exoplanet even more exotic than originally thought

Considered an ultra-hot Jupiter -- a place where iron gets vaporized, condenses on the night side and then falls from the sky like rain -- the fiery, inferno-like WASP-76b exoplanet may be even more sizzling than scientists had realized.

An international team, led by scientists at Cornell University, University of Toronto and Queen's University Belfast, reports the discovery of ionized calcium on the planet -- suggesting an atmospheric temperature higher than previously thought, or strong upper atmosphere winds.

The discovery was made in high-resolution spectra obtained with Gemini North near the summit of Mauna Kea in Hawaii.

Hot Jupiters are named for their high temperatures, due to proximity to their stars. WASP-76b, discovered in 2016, is about 640 light-years from Earth, but so close to its F-type star, which is slightly hotter than the sun, that the giant planet completes one orbit every 1.8 Earth days.

The research results are the first from a multiyear, Cornell-led project, the Exoplanets with Gemini Spectroscopy (ExoGemS) survey, which explores the diversity of planetary atmospheres.

"As we do remote sensing of dozens of exoplanets, spanning a range of masses and temperatures, we will develop a more complete picture of the true diversity of alien worlds -- from those hot enough to harbor iron rain to others with more moderate climates, from those heftier than Jupiter to others not much bigger than the Earth," said co-author Ray Jayawardhana, Harold Tanner Dean of the College of Arts and Sciences at Cornell University and a professor of astronomy.

"It's remarkable that with today's telescopes and instruments, we can already learn so much about the atmospheres -- their constituents, physical properties, presence of clouds and even large-scale wind patterns -- of planets that are orbiting stars hundreds of light-years away," Jayawardhana said.

The group spotted a rare trio of spectral lines in highly sensitive observations of the exoplanet WASP-76b's atmosphere. The results were published in the Astrophysical Journal Letters on Sept. 28 and presented on Oct. 5 at the annual meeting of the Division for Planetary Sciences of the American Astronomical Society.

"We're seeing so much calcium; it's a really strong feature," said first author Emily Deibert, a University of Toronto doctoral student, whose adviser is Jayawardhana.

"This spectral signature of ionized calcium could indicate that the exoplanet has very strong upper atmosphere winds," Deibert said. "Or the atmospheric temperature on the exoplanet is much higher than we thought."

Read more at Science Daily

Hidden mangrove forest in the Yucatan peninsula reveals ancient sea levels

Deep in the heart of the Yucatan Peninsula, an ancient mangrove ecosystem flourishes more than 200 kilometers (124 miles) from the nearest ocean. This is unusual because mangroves -- salt-tolerant trees, shrubs, and palms -- are typically found along tropical and subtropical coastlines.

A new study led by researchers across the University of California system in the United States and researchers in Mexico focuses on this luxuriant red mangrove forest. This "lost world" is located far from the coast along the banks of the San Pedro Martir River, which runs from the El Petén rainforests in Guatemala to the Balancán region in Tabasco, Mexico.

Because the red mangrove (Rhizophora mangle) and other species present in this unique ecosystem are only known to grow in salt or brackish water, the binational team set out to discover how the coastal mangroves were established so deep inland, in fresh water completely isolated from the ocean. Their findings were published Oct. 4 in the Proceedings of the National Academy of Sciences.

Integrating genetic, geologic, and vegetation data with sea-level modeling, the study provides a first glimpse of an ancient coastal ecosystem. The researchers found that the San Pedro mangrove forests reached their current location during the last interglacial period, some 125,000 years ago, and have persisted there in isolation as the oceans receded during the last glaciation.

The study provides a snapshot of the global environment during the last interglacial period, when the Earth was very warm and melting polar ice drove global sea levels much higher than they are today.

"The most amazing part of this study is that we were able to examine a mangrove ecosystem that has been trapped in time for more than 100,000 years," said study co-author Octavio Aburto-Oropeza, a marine ecologist at Scripps Institution of Oceanography at UC San Diego and a PEW Marine Fellow. "There is certainly more to discover about how the many species in this ecosystem adapted throughout different environmental conditions over the past 100,000 years. Studying these past adaptations will be very important for us to better understand future conditions in a changing climate."

Combining multiple lines of evidence, the study demonstrates that the rare and unique mangrove ecosystem of the San Pedro River is a relict -- an ecosystem that has survived from an earlier period -- of a past warmer world, when relative sea levels were six to nine meters (20 to 30 feet) higher than at present: high enough to flood the Tabasco lowlands of Mexico and reach what today are tropical rainforests on the banks of the San Pedro River.

The study highlights the extensive landscape impacts of past climate change on the world's coastlines and shows that during the last interglacial, much of the Gulf of Mexico coastal lowlands were under water. Aside from providing an important glimpse of the past and revealing the changes suffered by the Mexican tropics during the ice ages, these findings also open opportunities to better understand future scenarios of relative sea-level rise as climate change progresses in a human-dominated world.

Carlos Burelo, a botanist at the Universidad Juárez Autónoma de Tabasco and a native of the region, drew the attention of the rest of the team towards the existence of this relict ecosystem in 2016. "I used to fish here and play on these mangroves as a kid, but we never knew precisely how they got there," said Burelo. "That was the driving question that brought the team together."

Burelo's field work and biodiversity surveys in the region established the solid foundation of the study. His remarkable discovery of the ancient ecosystem is documented in "Memories of the Future: the modern discovery of a relict ecosystem," an award-winning short film produced by Scripps alumnus Ben Fiscella Meissner (MAS MBC '17).

Felipe Zapata and Claudia Henriquez of UCLA led the genetic work to estimate the origin and age of the relict forest. Sequencing segments of the genomes of the red mangrove trees, they were able to establish that this ecosystem migrated from the coasts of the Gulf of Mexico into the San Pedro River over 100,000 years ago and stayed there in isolation after the ocean receded when temperatures dropped. While mangroves are the most notable species in the forest, they found nearly 100 other smaller species that also have a lineage from the ocean.
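As a rough illustration of how sequence data can date an isolation event like this one, the generic molecular-clock logic divides the observed per-site genetic difference between two populations by twice the per-site, per-year substitution rate. The numbers below are invented for exposition, and this is not the authors' actual analysis pipeline.

# Generic molecular-clock sketch with hypothetical numbers (not the study's pipeline).
observed_divergence = 7e-8      # hypothetical per-site difference, inland vs. coastal mangroves
substitution_rate = 3.5e-13     # hypothetical per-site, per-year rate for the sequenced loci
isolation_time = observed_divergence / (2 * substitution_rate)   # t = d / (2 * mu)
print(f"Estimated time since isolation: about {isolation_time:,.0f} years")   # ~100,000 years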

"This discovery is extraordinary," said Zapata. "Not only are the red mangroves here with their origins printed in their DNA, but the whole coastal lagoon ecosystem of the last interglacial has found refuge here."

Paula Ezcurra, science program manager at the Climate Science Alliance, carried out the sea-level modeling, noting that the coastal plains of the southern Gulf of Mexico lie so low that a relatively small change in sea level can produce dramatic effects inland. She said a fascinating piece of this study is how it highlights the benefits of working collaboratively among scientists from different disciplines.

"Each piece of the story alone is not sufficient, but when taken together, the genetics, geology, botany, and field observations tell an incredible story. Each researcher involved lent their expertise that allowed us to uncover the mystery of a 100,000+ year-old forest," said Ezcurra, an alumna of Scripps Oceanography (MAS CSP '17).

The field work was led by the ecologists on the team -- Octavio Aburto-Oropeza, Paula Ezcurra, Exequiel Ezcurra of UC Riverside, and Sula Vanderplank of Pronatura Noroeste. Visiting the study sites several times starting in 2016, they collected rocks, sediments and fossils to analyze in the lab, helping them pinpoint evidence from the past that is consistent with a marine environment.

The authors note that the region surrounding the study sites was systematically deforested in the 1970s by a misguided development plan; the banks of the San Pedro River were spared only because the bulldozers could not reach them. The area is still threatened by human activities, so the researchers stressed the need to protect this biologically important area in the future.

Read more at Science Daily

Brain-circuit discovery may help explain sex differences in binge drinking

A brain circuit that works as a "brake" on binge alcohol drinking may help explain male-female differences in vulnerability to alcohol use disorders, according to a preclinical study led by scientists at Weill Cornell Medicine.

In the study, which appeared August 23 in Nature Communications, the researchers examined a brain region in mice called the bed nucleus of the stria terminalis (BNST) -- a major node in a stress-response network whose activity in humans has been linked to binge drinking behaviors. The researchers found that one important population of BNST neurons is more excitable in female mice than in males, helping to account for female mice's greater susceptibility to binge drinking.

The researchers also found that a distant cluster of neurons called the paraventricular nucleus of the thalamus (PVT), which is wired into the BNST, acts as a brake on its activity and has a stronger influence on the female BNST compared with the male BNST. Thus, the PVT is able to curb excessive alcohol consumption through this circuit brake in female mice but not males. While females may be offered more protection through this mechanism, they may also be more vulnerable to disease when this brake is disrupted.

"This study highlights that there are sex differences in the brain biology that controls alcohol drinking behaviors, and we really need to understand those differences if we're going to develop optimal treatments for alcohol use disorder," said senior author Dr. Kristen Pleil, assistant professor of pharmacology at Weill Cornell Medicine.

Women tend to consume less alcohol than men do, but researchers believe that is due mostly to cultural factors, and in recent decades that gender gap has narrowed significantly, especially among younger women. Women may in fact have an inherently greater vulnerability to alcohol use disorders, for reasons that lie deep within mammalian biology.

"Females across mammalian species, compared to males, display greater binge drinking and progress from first alcohol use to disease states more quickly," Dr. Pleil said. "But there has been hardly any research on the neural details that underlie this sex difference."

For the study, she and her team showed that BNST neurons, whose activity enhances binge-drinking behavior in mice, are more excitable and likely to fire spontaneously in female mice compared to males, apparently due to greater stimulation from other brain regions wired into the BNST. This higher excitability in females means that more inhibition of the female BNST is needed to prevent or reduce binge-drinking behavior.

The researchers found that the brain region with the densest projection to the BNST is the PVT -- which works as a natural inhibitor of BNST activity, more so in female mice. They found that reducing the strength of this PVT projection promotes binge alcohol drinking behavior in female mice, but not in male mice, whose BNST activity is lower to begin with.

The results, Dr. Pleil said, indicate that although this BNST-driven stress response circuit is tuned to be more excitable in females, it is also more heavily regulated in females, perhaps as an adaptation for more female-specific behaviors.

What behaviors? That is still unclear, although the researchers found that altering BNST activity via the PVT had no effect on the mice's intake of sweet-tasting sucrose -- suggesting that the PVT-BNST circuit, with its greater sensitivity and tighter regulation in females, evolved for something more specific than guiding general reward-seeking behaviors.

"Female mammals have a different set of goals compared to males, and may need to be more sensitive to different types of reward," Dr. Pleil said.

She added that sex differences in the PVT-BNST circuit may be relevant to sex differences not only in alcohol-use disorders but also in anxiety disorders -- which are much more common in women and frequently co-occur with binge drinking. The researchers found that enhancing PVT inhibition of the BNST led to reduced avoidance behaviors -- a proxy for reduced anxiety in humans -- in both male and female mice.

Read more at Science Daily

Mitigating lung damage, mortality due to SARS-CoV-2

In a new paper, researchers at the University of Illinois Chicago report that a drug approved for treating patients with autoimmune disease helped to prevent lung damage and death in mice infected with the SARS-CoV-2 virus, which causes COVID-19 in humans.

The results of their study provide strong evidence that inflammatory lung vascular leakage -- or leaky lungs -- is a key feature of COVID-19 illness. Vascular leakage can be caused by severe inflammation and results in a buildup of fluid in the lungs, which interferes with oxygen uptake. Mice infected with SARS-CoV-2 showed very clear and early signs of leakage from the blood vessels of the lung.

The research also suggests that targeted drug treatments which suppress only select immune system pathways -- like the rheumatoid arthritis drug used in the study, which blocks the interleukin-1 (IL-1) receptor -- might be more suitable therapies for COVID-19 patients than treatments that suppress the entire immune system.

The study was led by senior author Asrar Malik, head of the department of pharmacology and regenerative medicine at the College of Medicine, and by co-senior author Jalees Rehman, professor of medicine in the department of pharmacology and regenerative medicine.

"With COVID-19, we need to strike a balance. On the one hand, we need a strong immune system to eliminate the virus. On the other hand, several studies suggest that in patients with severe COVID-19, the immune system can go overboard and even cause damage to our own body," Rehman said. "So, while we need the immune system to work efficiently, we also need to prevent it from becoming hyperactive and causing collateral damage."

The need for balance is why the UIC researchers decided to study the effects of a drug that works on only one targeted immune system pathway and see if that would help prevent SARS-CoV-2-induced leaky lungs.

For the study, the researchers observed mice infected with the virus and tracked the progression of illness. They saw that the mice quickly showed symptoms like weight loss, fluid buildup in the lungs from leaky lung blood vessels, and even indicators of lung scarring, such as increased collagen levels in lung tissue.

"This is important evidence that blood vessel leakage in the lungs is a key feature of severe COVID-19 and that treatments which prevent or reduce vascular leakage warrant further study," Rehman said.

The researchers also treated some of the mice with the approved autoimmune disease drug, called anakinra, to block the IL-1 receptor, a key molecule regulating inflammation.

"We saw that the mice who received the drug had reduced signs of disease -- including less lung fluid buildup and less scarring of the lungs -- and better survival," Rehman said.

Rehman said these findings pave the way for helping COVID-19 patients and illuminate the need for more research on targeted, personalized treatments.

"Obviously, the best approach to reducing short-term and long-term damage as a result of COVID-19 is to get vaccinated and reduce the risk of SARS-CoV-2 infection as well as the risk of severe disease. However, the hesitancy of many individuals to get vaccinated as well as the lack of access to vaccines in many parts of the world means that we will continue to see patients with severe COVID-19 in the near future. Our results suggest that it is possible to identify a select a vulnerable COVID-19 patient population that is most likely to benefit from this therapy," Rehman said.

In their paper, "Interleukin-1RA Mitigates SARS-CoV-2-Induced Inflammatory Lung Vascular Leakage and Mortality in Humanized K18-hACE-2 Mice," the researchers hypothesize that by assessing the level of certain inflammatory signals in patients, such as the activation of the IL-1 receptor pathway, scientists could identify when a patient's immune system might be heading into overdrive and use a targeted immunosuppressant, like anakinra, to keep inflammation at the right balance.

"It is important to get the right drug to the right patient at the right time, and this study shines a light on a path forward for clinical trials that are investigating this drug and others that target specific components of the immune system," Rehman said.

Read more at Science Daily

Oct 4, 2021

'Mini psyches' give insights into mysterious metal-rich near-earth asteroids

Metal-rich near-Earth asteroids, or NEAs, are rare, but their presence raises the intriguing possibility that iron, nickel and cobalt could someday be mined for use on Earth or in space.

New research, published in the Planetary Science Journal, investigated two metal-rich asteroids in our own cosmic backyard to learn more about their origins, compositions and relationships with meteorites found on Earth.

These metal-rich NEAs were thought to be created when the cores of developing planets were catastrophically destroyed early in the solar system's history, but little more is known about them. A team of students co-led by University of Arizona planetary science associate professor Vishnu Reddy studied asteroids 1986 DA and 2016 ED85 and discovered that their spectral signatures are quite similar to asteroid 16 Psyche, the largest metal-rich body in the solar system. Psyche, located in the main asteroid belt between the orbits of Mars and Jupiter rather than near Earth, is the target of NASA's Psyche mission.

"Our analysis shows that both NEAs have surfaces with 85% metal such as iron and nickel and 15% silicate material, which is basically rock," said lead author Juan Sanchez, who is based at the Planetary Science Institute. "These asteroids are similar to some stony-iron meteorites such as mesosiderites found on Earth."

Astronomers have speculated for decades about what the surface of Psyche is made of. By studying metal-rich NEAs that come close to the Earth, they hope to identify specific meteorites that resemble Psyche's surface.

"We started a compositional survey of the NEA population in 2005, when I was a graduate student, with the goal of identifying and characterizing rare NEAs such as these metal-rich asteroids," said Reddy, principal investigator of the NASA grant that funded the work. "It is rewarding that we have discovered these 'mini Psyches' so close to the Earth."

"For perspective, a 50-meter (164-foot) metallic object similar to the two asteroids we studied created the Meteor Crater in Arizona," said Adam Battle, who is a co-author of the paper along with fellow Lunar and Planetary Laboratory graduate students Benjamin Sharkey and Theodore Kareta, and David Cantillo, an undergraduate student in the Department of Geosciences.

The paper also explored the mining potential of 1986 DA and found that the amount of iron, nickel and cobalt that could be present on the asteroid would exceed the global reserves of these metals.

Additionally, when an asteroid is catastrophically destroyed, it produces what is called an asteroid family -- a group of smaller asteroids that share similar compositions and orbital paths.

The team used the compositions and orbits of asteroids 1986 DA and 2016 ED85 to identify four possible asteroid families in the outer region of the main asteroid belt, which is home to the largest reservoir of small bodies in the inner solar system. This also happens to be the region where most of the largest known metallic asteroids, including 16 Psyche, reside.

"We believe that these two 'mini Psyches' are probably fragments from a large metallic asteroid in the main belt, but not 16 Psyche itself," Cantillo said. "It's possible that some of the iron and stony-iron meteorites found on Earth could have also come from that region in the solar system too."

Read more at Science Daily

Earliest evidence yet of huge hippos in Britain

Palaeobiologists have unearthed the earliest evidence yet of hippos in the UK.

Excavations at Westbury Cave in Somerset, led by University of Leicester PhD student Neil Adams, uncovered a million-year-old hippo tooth which shows the animal roamed Britain much earlier than previously thought.

In a new study published in the Journal of Quaternary Science and co-authored with researchers from Royal Holloway, University of London, the tooth is identified as belonging to an extinct species of hippo called Hippopotamus antiquus, which ranged across Europe in warm periods during the Ice Age.

It was much larger than the modern African hippo, weighing around 3 tonnes, and was even more reliant on aquatic habitats than its living relative.

Research demonstrates that the fossil is over one million years old, eclipsing the previous oldest record of hippos in the UK by at least 300,000 years and filling an important gap in the British fossil record.

Neil Adams, PhD researcher in the Centre for Palaeobiology Research at the University of Leicester and Earth Collections Project Officer at the Oxford University Museum of Natural History, said:

"It was very exciting to come across a hippo tooth during our recent excavations at Westbury Cave. It is not only the first record of hippo from the site, but also the first known hippo fossil from any site in Britain older than 750,000 years.

"Erosion caused by the coming and going of ice sheets, as well as the gradual uplift of the land, has removed large parts of the deposits of this age in Britain. Our comparisons with sites across Europe show that Westbury Cave is an important exception and the new hippo dates to a previously unrecognised warm period in the British fossil record."

Scientists know remarkably little about the fauna, flora and environments in Britain between about 1.8 and 0.8 million years ago, a key period when early humans were beginning to occupy Europe.

But new research at Westbury Cave is helping to fill in this gap. It shows that during this interval there were periods warm and wet enough to allow hippos to migrate all the way from the Mediterranean to southern England.

Professor Danielle Schreve, Professor of Quaternary Science at Royal Holloway and co-author of the study, said:

"Hippos are not only fabulous animals to find but they also reveal evidence about past climates. Many megafaunal species (those over a tonne in weight) are quite broadly tolerant of temperature fluctuations but in contrast, we know modern hippos cannot cope with seasonally frozen water bodies.

"Our research has demonstrated that in the fossil record, hippos are only found in Britain during periods of climatic warmth, when summer temperatures were a little warmer than today but most importantly, winter temperatures were above freezing."

By examining the European fossil record, the research team show that the Westbury Cave hippo was likely to have lived during a particularly warm period around 1.1 to 1.0 million years ago.

Hippo remains of this age are known from Germany, France and the Netherlands and the new fossil from Somerset represents a previously unknown part of this colonisation of northwest Europe.

Read more at Science Daily

How apples get their shapes

Apples are among the oldest and most recognizable fruits in the world. But have you ever really considered an apple's shape? Apples are relatively spherical except for that characteristic dimple at the top where the stem grows.

How do apples grow that distinctive shape?

Now, a team of mathematicians and physicists have used observations, lab experiments, theory and computation to understand the growth and form of the cusp of an apple.

The paper is published in Nature Physics.

"Biological shapes are often organized by the presence of structures that serve as focal points," said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and senior author of the study. "These focal points can sometimes take the form of singularities where deformations are localized. A ubiquitous example is seen in the cusp of an apple, the inward dimple where the stalk meets the fruit."

Mahadevan had already developed a simple theory to explain the form and growth of apples, but the project began to bear fruit only when the researchers were able to connect observations of real apples at different growth stages, gel experiments that mimicked the growth, and theory and computation.

The research team began by collecting apples at various growth stages from an orchard at Peterhouse College, University of Cambridge, in the U.K. (the alma mater of another famous apple lover, Sir Isaac Newton).

Using those apples, the team mapped the growth of the dimple, or cusp as they called it, over time.

To understand the evolution of the shape of the apple and the cusp in particular, the researchers turned to a long-standing mathematical theory known as singularity theory. Singularity theory is used to describe a host of different phenomena, from black holes, to more mundane examples such as the light patterns at the bottom of a swimming pool, droplet breakup and crack propagation.

"What is exciting about singularities is that they are universal. The apple cusp has nothing in common with light patterns in a swimming pool, or a droplet breaking off from a column of water, yet it makes the same shape as they do," said Thomas Michaels, a former postdoctoral fellow at SEAS and co-lead author of the paper, now at University College London. "The concept of universality goes very deep and can be very useful because it connects singular phenomena observed in very different physical systems."

Building from this theoretical framework, the researchers used numerical simulation to understand how differential growth between the fruit cortex and the core drives formation of the cusp. They then corroborated the simulations with experiments which mimicked the growth of apples using gel that swelled over time. The experiments showed that different rates of growth between the bulk of the apple and the stalk region resulted in the dimple-like cusp.
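The qualitative mechanism, a slower-growing core and stalk region surrounded by a faster-growing cortex, can be caricatured in a few lines of Python. This toy is not the paper's mechanical simulation: the growth rates, the width of the stalk region, and the functional form are all invented. It only shows that a fixed ratio of growth rates makes the dimple near the stalk deepen as the fruit grows.

import numpy as np

r = np.linspace(0.0, 1.0, 200)            # radial distance from the stalk axis (arbitrary units)
stalk_weight = np.exp(-(r / 0.15) ** 2)   # hypothetical: stalk/core influence is strongest at r = 0
g_cortex, g_core = 1.0, 0.3               # hypothetical growth rates (cortex grows faster)
for t in (0.5, 1.0, 2.0):
    height = (g_cortex * (1 - stalk_weight) + g_core * stalk_weight) * t
    print(f"t = {t}: dimple depth ~ {height.max() - height[0]:.2f} (arbitrary units)")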

"Being able to control and replay morphogenesis of singular cusps in the laboratory with simple material toolkits was particularly exciting," said Aditi Chakrabarti, a postdoctoral fellow at SEAS and co-author of the paper. "Varying the geometry and composition of the gel mimics showed how multiple cusps form, as seen in some apples and other drupes, such as peaches, apricots, cherries and plums."

The team found that the underlying fruit anatomy along with mechanical instability may play joint roles in giving rise to multiple cusps in fruits.

"Morphogenesis, literally the origin of shape, is one of the grand questions in biology," said Mahadevan. "The shape of the humble apple has allowed us to probe some physical aspects of a biological singularity. Of course, we now need to understand the molecular and cellular mechanisms behind the formation of the cusp, as we move slowly towards a broader theory of biological shape."

Read more at Science Daily