Jul 8, 2022

New insights about surface, structure of asteroid Bennu

When NASA's OSIRIS-REx spacecraft collected samples from asteroid Bennu's surface in 2020, forces measured during the interaction provided scientists with a direct test of the poorly understood near-subsurface physical properties of rubble-pile asteroids. Now, a Southwest Research Institute-led study has characterized the layer just below the asteroid's surface as composed of weakly bound rock fragments containing twice the void space of the asteroid as a whole.

"The low gravity of rubble-pile asteroids such as Bennu weakens its near-subsurface by not compressing the upper layers, minimizing the influence of particle cohesion," said SwRI's Dr. Kevin Walsh, lead author of a paper about this research published in the journal Science Advances. "We conclude that a low density, weakly bound subsurface layer should be a global property of Bennu, not just localized to the contact point."

Fitting its designation as a "rubble-pile asteroid," Bennu is a spheroidal collection of rock fragments and debris 1,700 feet in diameter held together by gravity. It is thought to have formed after a collision involving a larger main-asteroid-belt object. Rocks are scattered across its heavily cratered surface, indicating that it has had a rough-and-tumble existence since being liberated from its much larger parent asteroid millions or even billions of years ago.

The goal of the OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer) mission is to collect at least 60 grams of surface material from Bennu and deliver it to Earth in 2023. The sample collection activities provided additional insights.

According to Walsh, researchers involved in the OSIRIS-REx mission have so far measured Bennu's thermal properties and craters to estimate the strength and porosity of discrete particles of rubble-pile asteroids. However, the ensemble of particles, or regolith, at an asteroid's surface, which controls and influences its long-term evolution, had not been probed directly until now.

Before, during, and after the sampling event, the Sample Acquisition Verification Camera (SamCam) of the OSIRIS-REx Camera Suite captured images looking at the Touch-and-Go Sample Acquisition Mechanism (TAGSAM) robotic arm.

"The SamCam images bracketing the moment of contact show the contact caused considerable disturbance at the sample site," said Dr. Ron Ballouz, a co-author from Johns Hopkins University's Applied Physics Laboratory. "Nearly every visible particle is moved or re-oriented at all points along the circumference of TAGSAM up to a 15-inch radius."

These SamCam images showed the downward force of TAGSAM lifted a nearly 16-inch rock. Though the rock was strong enough not to break, it was re-oriented, and small debris was lofted off its surface. The mobility of these millimeter-scale particles under relatively weak forces suggests minimal cohesive bonding with the surface of the larger rock.

Scientists have theorized that the average regolith particle size increases as asteroid size decreases, because larger bodies retain smaller materials due to a higher surface gravity. The team then compared Bennu to similar rubble-pile asteroids.

"We discovered a dichotomy between the rough, boulder-covered surfaces of Bennu and Ryugu versus Itokawa, which includes ponds of smaller particles across 20% of its surface," Walsh said. "This could have several explanations, including that the latter's near-surface has compressed enough to frustrate these microparticles percolating into the interior or perhaps the granular deposits are subsurface layers revealed by a recent disruptive reorganization of the body."

Read more at Science Daily

Ozone depletion over North Pole produces weather anomalies

Many people are familiar with the hole in the ozone layer over Antarctica, but what is less well known is that occasionally, the protective ozone in the stratosphere over the Arctic is destroyed as well, thinning the ozone layer there. This last happened in the spring months of 2020, and before that, in the spring of 2011.

Each time the ozone layer has been thinned out, climate researchers subsequently observed weather anomalies across the entire northern hemisphere. In central and northern Europe, Russia and especially in Siberia, those spring seasons were exceptionally warm and dry. In other areas, such as polar regions, however, wet conditions prevailed. These weather anomalies were particularly pronounced in 2020. Switzerland was also unusually warm and dry that spring.

Whether there is a causal relationship between stratospheric ozone destruction and the observed weather anomalies is a matter of debate in climate research. The polar vortex in the stratosphere, which forms in winter and decays in spring, also plays a role. Scientists who have studied the phenomenon so far have arrived at contradictory results and different conclusions.

New findings are now shedding light on the situation, thanks to doctoral student Marina Friedel and Swiss National Science Foundation Ambizione Fellow Gabriel Chiodo. Both are members of the group headed by Thomas Peter, Professor of Atmospheric Chemistry at ETH Zurich, and are collaborating with Princeton University and other institutions.

Simulations reveal correlation

To uncover a possible causal relationship, the researchers ran simulations that integrated ozone depletion into two different climate models. Most climate models consider only physical factors, not variations in stratospheric ozone levels, in part because this would require much more computing power.

But the new calculations make it clear: the cause of the weather anomalies observed in the northern hemisphere in 2011 and 2020 is mostly ozone depletion over the Arctic. The simulations the researchers ran with the two models largely coincided with observational data from those two years, as well as eight other such events that were used for comparison purposes. However, when the scientists "turned off" ozone destruction in the models, they could not reproduce those results.

"What surprised us most from a scientific point of view is that, even though the models we were using for the simulation are utterly different, they produced similar results," says co-author Gabriel Chiodo, SNSF Ambizione Fellow at the Institute for Atmospheric and Climate Science.

The mechanism explained


The phenomenon as the researchers have now studied it begins with ozone depletion in the stratosphere. For ozone to be broken down there, temperatures in the Arctic must be very low. "Ozone destruction occurs only when it is cold enough and the polar vortex is strong in the stratosphere, about 30 to 50 kilometres above the ground," Friedel points out.

Normally, ozone absorbs UV radiation emitted by the sun, thereby warming the stratosphere and helping to break down the polar vortex in spring. But if there is less ozone, the stratosphere cools and the vortex becomes stronger. "A strong polar vortex then produces the effects observed at the Earth's surface," Chiodo says. Ozone thus plays a major role in temperature and circulation changes around the North Pole.

Greater accuracy possible for long-term forecasts

The new findings could help climate researchers make more accurate seasonal weather and climate forecasts in future. This allows for better prediction of heat and temperature changes, "which is important for agriculture," Chiodo says.

Read more at Science Daily

Gestures can improve understanding in language disorders

When words fail, gestures can help to get the message across -- especially for people who have a language disorder. An international research team has now shown that listeners attend to the gestures of people with aphasia more often and for much longer than previously thought. This has implications for the use of gestures in speech therapy.

People who suffer from an acquired language disorder due to a brain injury -- for example after a stroke, traumatic brain injury or brain tumor -- often have difficulties communicating with others. Previous research on aphasia indicates that these patients often try to express their needs using hand gestures. It was previously assumed that conversation partners pay relatively little attention to such non-verbal forms of communication -- but this assumption was based on research involving participants without language disorders.

Communicating with gestures

A new study from the University of Zurich, carried out together with researchers from the Netherlands and Japan, looked at whether gestures receive more attention if the verbal communication is impeded by aphasia. The researchers showed healthy volunteers video clips in which people with and without speech disorders described an accident and a shopping experience. As the participants watched the video clips, their eye movements were recorded.

Focus of attention shifts

"Our results show that when people have very severe speaking difficulties and produce less informative speech, their conversation partner is more likely to pay attention to their hand movements and to look longer at their gestures," says Basil Preisig of the Department of Comparative Language Science at UZH. In people who have no limitations in verbal production, hand gestures are granted less attention. Thus, it seems that listeners shift their attention when the speaker has a speech impediment and focus more on the speaker's nonverbal information provided through gestures. "For people with aphasia, it may be worth using gestures more in order to be better understood by the other person," says Preisig.

Read more at Science Daily

The importance of elders

According to long-standing canon in evolutionary biology, natural selection is cruelly selfish, favoring traits that help promote reproductive success. This usually means that the so-called "force" of selection is well equipped to remove harmful mutations that appear during early life and throughout the reproductive years. However, by the age at which fertility ceases, the story goes, selection becomes blind to what happens to our bodies. After the age of menopause, our cells are more vulnerable to harmful mutations. In the vast majority of animals, death follows shortly after fertility ends.

Which puts humans (and some species of whale) in a unique club: animals that continue to live long after their reproductive lives end. How is it that we can live decades in selection's shadow?

"From the perspective of natural selection, long post-menopausal life is a puzzle," said UC Santa Barbara anthropology professor Michael Gurven. In most animals, including chimpanzees -- our closest primate brethren -- this link between fertility and longevity is very pronounced, where survival drops in sync with the ability to reproduce. Meanwhile in humans, women can live for decades after their ability to have children ends. "We don't just gain a few extra years -- we have a true post-reproductive life stage," Gurven said.

In a paper published in the Proceedings of the National Academy of Sciences, senior author Gurven and former UCSB postdoctoral fellow and population ecologist Raziel Davison challenge the longstanding view that the force of natural selection in humans must decline to zero once reproduction is complete.

They assert that a long post-reproductive lifespan is not just due to recent advancements in health and medicine. "The potential for long life is part of who we are as humans, an evolved feature of the life course," Gurven said.

The secret to our success? Our grandparents.

"Ideas about the potential value of older adults have been floating around for awhile," Gurven said. "Our paper formalizes those ideas, and asks what the force of selection might be once you take into account the contributions of older adults."

For example, one of the leading ideas for human longevity is called the Grandmother Hypothesis -- the idea that, through their efforts, maternal grandmothers can increase their fitness by helping improve the survival of their grandchildren, thereby enabling their daughters to have more children. Such fitness effects help ensure that the grandmother's DNA is passed down.

"And so that's not reproduction, but it's sort of an indirect reproduction. The ability to pool resources, and not just rely on your own efforts, is a game changer for highly social animals like humans," Davison said.

In their paper, the researchers take the kernel of that idea -- intergenerational transfers, or resource sharing between old and young -- and show that it, too, has played a fundamental role in the force of selection at different ages. Food sharing in non-industrial societies is perhaps the most obvious example.

"It takes up to two decades from birth before people produce more food than they're consuming," said Gurven, who has studied the economy and demography of the Tsimané and other indigenous groups of South America. A lot of food has to be procured and shared to get kids to the point where they can fend for themselves and be productive group members. Adults fill most of this need with their ability to obtain more food than they need for themselves, a provisioning strategy that has sustained pre-industrial societies for ages and also carries over into industrialized societies.

"In our model, the large surplus that adults produce helps improve the survival and fertility of close kin, and of other group members who reliably share their food, too," Davison said. "Viewed through the lens of food production and its effects, it turns out that the indirect fitness value of adults is also highest among reproductive-aged adults. But using demographic and economic data from multiple hunter-gatherers and horticulturalists, we find that the surplus provided by older adults also generates positive selection for their survival. We calculate all this extra fitness in late adulthood to be worth up to a few extra kids!"

"We show that elders are valuable, but only up to a point," contends Gurven. "Not all grandmothers are worth their weight. By about their mid-seventies, hunter-gatherers and farmers end up soaking up more resources than they provide. Plus, by their mid-seventies, most of their grandkids won't be dependents anymore, and so the circle of close kin who stand to benefit from their help is small."

But food isn't everything. Beyond getting fed, children are also taught and socialized, trained in relevant skills and worldviews. This is where older adults can make their biggest contributions: While they don't contribute as much to the food surplus, they have the accumulation of a lifetime of skills they can deploy to ease the burden of childcare on parents, as well as knowledge and training that they can pass on to their grandchildren.

"Once you take into account that elders are also actively involved in helping others forage, then it adds even more fitness value to their activity and to them being alive," Gurven said. "Not only do elders contribute to the group, but their usefulness helps ensure that they also receive from the surpluses, protections and care from their group. In other words, interdependence runs both ways, from old to young, and young to old."

"If you're part of my social world, there might be some kickback," Davison explained. "So to the extent that we're interdependent, I'm vested in your interest, beyond just simple kinship. I'm interested in getting you to be as skilled as possible because some of your productivity could help me down the road."

Gurven and Davison found that rather than our long lifespans opening up opportunities that led to a human-like foraging economy and social behavior, the reverse is more likely -- our skills-intensive strategies and long-term investments in the health of the group preceded and evolved with our shift to our particular human life history, with its extended childhood and unusually long post-reproductive stage.

In contrast, chimpanzees -- who represent our best guess as to what humans' last common ancestor may have been like -- are able to forage for themselves by age 5. However, their foraging activities require less skill, and they produce minimal surplus. Even so, the authors show that if a chimpanzee-like ancestor shared its food more widely, it could still generate enough indirect fitness contributions to increase the force of selection in later adulthood.

"What this suggests is that human longevity is really a story about cooperation," said Gurven. "Chimpanzee grandmothers are rarely observed doing anything for their grandkids."

Though the authors say their work is more about how the capacity for long life first came to exist in the Homo lineage, the implication that we owe it to elders everywhere is an important reminder looking forward.

"Despite elders being far more numerous today than ever before in the past, there's still much ageism and underappreciation of older adults," Gurven said. "When COVID seemed to be most deadly just for older adults, many shrugged their shoulders about the urgency of lockdown or other major precautions.

Read more at Science Daily

Jul 7, 2022

Porosity of the moon's crust reveals bombardment history

Around 4.4 billion years ago, the early solar system resembled a game of space rock dodgeball, as massive asteroids and comets, and, later, smaller rocks and galactic debris pummeled the moon and other infant terrestrial bodies. This period ended around 3.8 billion years ago. On the moon, this tumultuous time left behind a heavily cratered face, and a cracked and porous crust.

Now MIT scientists have found that the porosity of the moon's crust, reaching well beneath the surface, can reveal a great deal about the moon's history of bombardment.

In a study appearing in Nature Geoscience, the team has shown through simulations that, early on in the bombardment period, the moon was highly porous -- almost one-third as porous as pumice. This high porosity was likely a result of early, massive impacts that shattered much of the crust.

Scientists have assumed that a continuous onslaught of impacts would slowly build up porosity. But surprisingly, the team found that nearly all of the moon's porosity formed rapidly in these massive impacts, and that the continued onslaught of smaller impactors actually compacted the surface, squeezing shut some of the moon's existing cracks and faults.

From their simulations, the researchers also estimated that the moon experienced roughly twice the number of impacts as are visible on its surface. This estimate is lower than what others have assumed.

"Previous estimates put that number much higher, as many as 10 times the impacts as we see on the surface, and we're predicting there were fewer impacts," says study co-author Jason Soderblom, a research scientist in MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS). "That matters because that limits the total material that impactors like asteroids and comets brought to the moon and terrestrial bodies, and gives constraints on the formation and evolution of planets throughout the solar system."

The study's lead author is EAPS postdoc Ya Huei Huang, along with collaborators at Purdue University and Auburn University.

A porous record

In the team's new study, the researchers looked to trace the moon's changing porosity and use those changes below the surface to estimate the number of impacts that occurred on its surface.

"We know the moon was so bombarded that what we see on the surface is no longer a record of every impact the moon has ever had, because at some point, impacts were erasing previous impacts," Soderblom says. "What we're finding is that the way impacts created porosity in the crust is not destroyed, and that can give us a better constraint on the total number of impacts that the moon was subject to."

To trace the evolution of the moon's porosity, the team looked to measurements taken by NASA's Gravity Recovery and Interior Laboratory, or GRAIL, an MIT-designed mission that launched twin spacecraft around the moon to precisely map the surface gravity.

Researchers have converted the mission's gravity maps into detailed maps of the density of the moon's underlying crust. From these density maps, scientists have also been able to map the current-day porosity throughout the lunar crust. These maps show that regions surrounding the youngest craters are highly porous, while less porous regions surround older craters.
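
The step from density to porosity rests on a simple relation: porosity is the fraction of crustal volume not filled by rock grains, i.e. one minus the ratio of bulk density to grain density. A minimal sketch of that conversion follows; the grain-density value is an illustrative assumption for lunar crustal rock, not a number taken from the GRAIL analysis.

    # Sketch (Python): converting a GRAIL-style bulk-density estimate to porosity.
    # The default grain density is an assumed, illustrative value for lunar
    # crustal rock; it is not a figure quoted in the study.
    def porosity(bulk_density_kg_m3, grain_density_kg_m3=2900.0):
        """Porosity = fraction of volume that is void = 1 - bulk/grain density."""
        return 1.0 - bulk_density_kg_m3 / grain_density_kg_m3

    # Example: a bulk density of ~2550 kg/m^3 would imply roughly 12% porosity.
    print(f"{porosity(2550.0):.0%}")

Under this relation, the denser crust mapped around older craters reads directly as lower porosity, which is the pattern described above.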

Crater chronology


In their new study, Huang, Soderblom and their colleagues looked to simulate how the moon's porosity changed as it was bombarded with first large and then smaller impacts. They included in their simulation the age, size, and location of the 77 largest craters on the moon's surface, along with GRAIL-derived estimates of each crater's current-day porosity. The simulations include all known impact basins on the moon, from the oldest to the youngest, spanning ages between 4.3 billion and 3.8 billion years.

For their simulations, the team used the youngest craters with the highest current-day porosity as a starting point to represent the moon's initial porosity in the early stages of the lunar heavy bombardment. They reasoned that older craters that formed in the early stages would have started out highly porous but would have been exposed to further impacts over time that compacted and reduced their initial porosity. In contrast, younger craters, though they formed later on, would have experienced fewer if any subsequent impacts. Their underlying porosity would then be more representative of the moon's initial conditions.

"We use the youngest basin that we have on the moon, that hasn't been subject to too many impacts, and use that as a way to start as initial conditions," Huang explains. "We then use an equation to tune the number of impacts needed to get from that initial porosity to the more compacted, present-day porosity of the oldest basins."

The team studied the 77 craters in chronological order, based on their previously determined ages. For each crater, the team modeled the amount by which the underlying porosity changed compared to the initial porosity represented by the youngest crater. They assumed a bigger change in porosity was associated with a larger number of impacts, and used this correlation to estimate the number of impacts that would have generated each crater's current-day porosity.
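
That tuning step can be pictured with a toy compaction model. Purely as an illustration (this is not the equation used in the paper), assume each subsequent small impact removes a fixed fraction of the porosity that remains above a fully compacted floor; the number of impacts implied by an observed porosity drop can then be solved for directly, as in the sketch below.

    # Toy model (illustrative assumption only, not the study's actual equation):
    # each small impact compacts away a fixed fraction k of the porosity that
    # remains above a fully compacted floor.
    import math

    def impacts_needed(initial_porosity, observed_porosity, k=0.001, floor=0.05):
        """Solve phi_n = floor + (phi_0 - floor) * (1 - k)**n for n."""
        ratio = (observed_porosity - floor) / (initial_porosity - floor)
        return math.log(ratio) / math.log(1.0 - k)

    # Example: dropping from ~20% porosity (youngest basins) to ~12% for an old basin.
    print(round(impacts_needed(0.20, 0.12)))

A larger drop in porosity maps onto a larger implied number of impacts, which is the correlation the team used to estimate the bombardment recorded by each crater.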

These simulations showed a clear trend: At the start of the lunar heavy bombardment, 4.3 billion years ago, the crust was highly porous -- about 20 percent (by comparison, the porosity of pumice is about 60 to 80 percent). Closer to 3.8 billion years ago, the crust became less porous, and remains at its current-day porosity of about 10 percent.

Read more at Science Daily

The beginning of life: The early embryo is in the driver's seat

One often thinks that the early embryo is fragile and needs support. However, at the earliest stages of development, it has the power to feed the future placenta and to instruct the uterus so that the embryo can nest. Using 'blastoids', in vitro embryo models formed with stem cells, the lab of Nicolas Rivron at IMBA showed that the earliest molecular signals that induce placental development and prepare the uterus come from the embryo itself. The findings, now published in Cell Stem Cell, could contribute to a better understanding of human fertility.

Who takes care of whom at the onset of life? The placenta and the uterus nurture and shelter the fetus. But the situation at the very early stage of development, when the blastocyst still floats in the uterus, was unclear so far. Now, the research group of Nicolas Rivron at IMBA (Institute of Molecular Biotechnology of the Austrian Academy of Sciences) uncovered basic principles of early development using blastoids.

Blastoids are in vitro models of the blastocyst, the mammalian embryo in the first few days following fertilization. These embryo models were first developed by the Rivron lab from mouse stem cells (Nature, 2018) and then from human stem cells (Nature, 2021). Blastoids provide an ethical alternative to the use of embryos for research and, importantly, enable multiple discoveries.

Now, blastoids have settled a "chicken or egg" dilemma. Using mouse blastoids, the researchers found that the early embryonic part (~10 cells) instructs the future placental part (~100 cells) to form, and the uterine tissues to change. "By doing this, the embryo invests in its own future: it promotes the formation of the tissues that will soon take care of its development. The embryo is in control, instructing the creation of a supporting surrounding," states Nicolas Rivron.

Indeed, the team discovered several molecules secreted by the few cells from which the fetus develops, the epiblasts. They observed that these molecules tell other cells, the trophoblasts that later form the placenta, to self-renew and proliferate, two stem cell properties that are essential for the placenta to grow.

The team also found that these molecules induce the trophoblasts to secrete two other molecules, WNT6 and WNT7B. WNT6 and WNT7B tell the uterus to wrap around the blastocyst. "Other researchers had previously seen that WNT molecules are involved in the uterine reaction. Now we show that these signals are WNT6/7B and that they are produced by the blastocyst trophoblasts to notify the uterus to react. The relevance could be high because we have verified that these two molecules are also expressed by the trophoblasts of the human blastocyst," states Nicolas Rivron.

The team made their findings partly by examining the extent of implantation of the mouse blastoids in an in vivo implantation mouse model. "I was very surprised by the efficiency at which our blastoids implanted into the uterus. And by changing the properties of the trophoblasts within blastoids, including the secretion levels of WNT6/7B, we could clearly change the size of the uterine cocoon," says co-first author Jinwoo Seong, a postdoctoral fellow in the Rivron lab, who performed these experiments.

Because implantation is the bottleneck in human pregnancies -- around 50 percent of pregnancies fail at that time -- and WNT6 and WNT7B are also present in human blastocysts, these findings might explain why, sometimes, things go wrong. "We are currently repeating these experiments with human blastoids and uterine cells, all in a dish, to estimate the conservation of such basic principles of development. These discoveries might ultimately contribute to improving IVF procedures, developing fertility drugs, and contraceptives," says Nicolas Rivron.

Read more at Science Daily

A new giant dinosaur gives insight into why many prehistoric meat-eaters had such tiny arms

A team co-led by University of Minnesota Twin Cities researcher Peter Makovicky and Argentinean colleagues Juan Canale and Sebastian Apesteguía has discovered a new huge, meat-eating dinosaur, dubbed Meraxes gigas. The new dinosaur provides clues about the evolution and biology of dinosaurs such as the Carcharodontosaurus and Tyrannosaurus rex—specifically, why these animals had such big skulls and tiny arms.

The study is published in Current Biology, a peer-reviewed scientific biology journal.

The researchers initially discovered Meraxes in Patagonia in 2012 and have spent the last several years extracting, preparing, and analyzing the specimen. The dinosaur is part of the Carcharodontosauridae family, a group of giant carnivorous theropods that also includes Giganotosaurus, one of the largest known meat-eating dinosaurs and one of the reptilian stars of the recently released “Jurassic World: Dominion” movie.

Though not the largest among carcharodontosaurids, Meraxes was still an imposing animal measuring around 36 feet from snout to tail tip and weighing approximately 9,000 pounds. The researchers recovered the Meraxes from rocks that are around 90-95 million years old, alongside other dinosaurs including several long-necked sauropod specimens.

Meraxes is among the most complete carcharodontosaurid skeletons paleontologists have yet found in the southern hemisphere and includes nearly the entirety of the animal’s skull, hips, and both left and right arms and legs.

“The neat thing is that we found the body plan is surprisingly similar to tyrannosaurs like T. rex,” said Peter Makovicky, one of the principal authors of the study and a professor in the University of Minnesota N.H. Winchell School of Earth and Environmental Sciences. “But, they’re not particularly closely related to T. rex. They're from very different branches of the meat-eating dinosaur family tree. So, having this new discovery allowed us to probe the question of, ‘Why do these meat-eating dinosaurs get so big and have these dinky little arms?’”

“The discovery of this new carcharodontosaurid, the most complete up to now, gives us an outstanding opportunity to learn about their systematics, paleobiology, and true size like never before,” said Sebastian Apesteguía, a co-author of the study and a researcher at Maimónides University in Argentina.

With the statistical data that Meraxes provided, the researchers found that large, mega-predatory dinosaurs in all three families of theropods grew in similar ways. As they evolved, their skulls grew larger and their arms progressively shortened.

The possible uses of the tiny forelimbs in T. rex and other large carnivorous dinosaurs have been the topic of much speculation and debate.

“What we’re suggesting is that there’s a different take on this,” Makovicky said. “We shouldn’t worry so much about what the arms are being used for, because the arms are actually being reduced as a consequence of the skulls becoming massive. Whatever the arms may or may not have been used for, they’re taking on a secondary function since the skull is being optimized to handle larger prey.”

The researchers also found that carcharodontosaurids including species from Patagonia evolved very quickly, but then disappeared suddenly from the fossil record very soon after.

“Usually when animals are on the verge of extinction, it’s because their evolutionary rates are quite slow, meaning they aren’t adapting very quickly to their environment,” explained Juan Canale, the study’s lead author and a researcher at the National University of Río Negro. “Here, we have evidence that Meraxes and its relatives were evolving quite fast and yet within a few million years of being around, they disappeared, and we don’t know why. It’s one of these finds where you answer some questions, but it generates more questions for the future.”

Read more at Science Daily

How sound reduces pain in mice

An international team of scientists has identified the neural mechanisms through which sound blunts pain in mice. The findings, which could inform development of safer methods to treat pain, were published in Science. The study was led by researchers at the National Institute of Dental and Craniofacial Research (NIDCR); the University of Science and Technology of China, Hefei; and Anhui Medical University, Hefei, China. NIDCR is part of the National Institutes of Health.

"We need more effective methods of managing acute and chronic pain, and that starts with gaining a better understanding of the basic neural processes that regulate pain," said NIDCR Director Rena D'Souza, D.D.S., Ph.D. "By uncovering the circuitry that mediates the pain-reducing effects of sound in mice, this study adds critical knowledge that could ultimately inform new approaches for pain therapy."

Dating back to 1960, studies in humans have shown that music and other kinds of sound can help alleviate acute and chronic pain, including pain from dental and medical surgery, labor and delivery, and cancer. However, how the brain produces this pain reduction, or analgesia, was less clear.

"Human brain imaging studies have implicated certain areas of the brain in music-induced analgesia, but these are only associations," said co-senior author Yuanyuan (Kevin) Liu, Ph.D., a Stadtman tenure-track investigator at NIDCR. "In animals, we can more fully explore and manipulate the circuitry to identify the neural substrates involved."

The researchers first exposed mice with inflamed paws to three types of sound: a pleasant piece of classical music, an unpleasant rearrangement of the same piece, and white noise. Surprisingly, all three types of sound, when played at a low intensity relative to background noise (about the level of a whisper), reduced pain sensitivity in the mice. Higher intensities of the same sounds had no effect on the animals' pain responses.

"We were really surprised that the intensity of sound, and not the category or perceived pleasantness of sound would matter," Liu said.

To explore the brain circuitry underlying this effect, the researchers used non-infectious viruses coupled with fluorescent proteins to trace connections between brain regions. They identified a route from the auditory cortex, which receives and processes information about sound, to the thalamus, which acts as a relay station for sensory signals, including pain, from the body. In freely moving mice, low-intensity white noise reduced the activity of neurons at the receiving end of the pathway in the thalamus.

In the absence of sound, suppressing the pathway with light- and small molecule-based techniques mimicked the pain-blunting effects of low-intensity noise, while turning on the pathway restored animals' sensitivity to pain.

Liu said it is unclear if similar brain processes are involved in humans, or whether other aspects of sound, such as its perceived harmony or pleasantness, are important for human pain relief.

"We don't know if human music means anything to rodents, but it has many different meanings to humans -- you have a lot of emotional components," he said.

The results could give scientists a starting point for studies to determine whether the animal findings apply to humans, and ultimately could inform development of safer alternatives to opioids for treating pain.

Read more at Science Daily

Jul 6, 2022

8000 kilometers per second: Star with the shortest orbital period around black hole discovered

Researchers at the University of Cologne and Masaryk University in Brno (Czech Republic) have discovered the fastest known star, which travels around a black hole in record time. The star, S4716, orbits Sagittarius A*, the black hole in the centre of our Milky Way, in four years and reaches a speed of around 8000 kilometres per second. S4716 comes as close as 100 AU (astronomical unit) to the black hole -- a small distance by astronomical standards. One AU corresponds to 149,597,870 kilometres. The study has been published in The Astrophysical Journal.
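
The quoted speed is consistent with simple Keplerian dynamics. The short calculation below checks it with the vis-viva equation, assuming a mass of roughly four million solar masses for Sagittarius A* (a commonly cited value that is not stated in this article) and treating the 100 AU closest approach as the pericentre of a four-year orbit.

    # Back-of-the-envelope check of S4716's speed (Python), using Kepler's third
    # law and the vis-viva equation. The Sgr A* mass is an assumed, commonly
    # cited value, not a figure from the study.
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg
    AU = 1.496e11        # astronomical unit, m
    YEAR = 3.156e7       # seconds per year

    M_bh = 4.0e6 * M_SUN                                  # assumed mass of Sagittarius A*
    T = 4.0 * YEAR                                        # orbital period of S4716
    a = (G * M_bh * T**2 / (4 * math.pi**2)) ** (1 / 3)   # semi-major axis (Kepler III)
    r_peri = 100 * AU                                     # closest approach from the article

    v_peri = math.sqrt(G * M_bh * (2 / r_peri - 1 / a))   # vis-viva speed at pericentre
    print(f"a ~ {a / AU:.0f} AU, pericentre speed ~ {v_peri / 1e3:.0f} km/s")
    # Prints roughly 400 AU and ~7900 km/s, in line with the reported ~8000 km/s.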

In the vicinity of the black hole at the centre of our galaxy is a densely packed cluster of stars. This cluster, called S cluster, is home to well over a hundred stars that differ in their brightness and mass. S stars move particularly fast. 'One prominent member, S2, behaves like a large person sitting in front of you in a movie theatre: it blocks your view of what's important,' said Dr Florian Peissker, lead author of the new study. 'The view into the centre of our galaxy is therefore often obscured by S2. However, in brief moments we can observe the surroundings of the central black hole.'

By continuously refining their methods of analysis, together with observations covering almost twenty years, the scientists have now identified beyond doubt a star that travels around the central supermassive black hole in just four years. A total of five telescopes observed the star, with four of these five being combined into one large telescope to allow even more accurate and detailed observations. 'For a star to be in a stable orbit so close and fast in the vicinity of a supermassive black hole was completely unexpected and marks the limit that can be observed with traditional telescopes,' said Peissker.

Moreover, the discovery sheds new light on the origin and evolution of the orbit of fast-moving stars in the heart of the Milky Way. 'The short-period, compact orbit of S4716 is quite puzzling,' Michael Zajaček, an astrophysicist at Masaryk University in Brno who was involved in the study, said. 'Stars cannot form so easily near the black hole. S4716 had to move inwards, for example by approaching other stars and objects in the S cluster, which caused its orbit to shrink significantly,' he added.

From Science Daily

Volcano's eruption will help scientists plot weather, climate

As it captivated people around the world, the January eruption of the Hunga Tonga-Hunga Ha'apai volcano gave scientists a once-in-a-lifetime chance to study how the atmosphere works, unlocking keys to better predict the weather and changing climate.

The volcano, located in the South Pacific nation of Tonga, became active Dec. 20, 2021, and erupted Jan. 15, 2022. The blast obliterated one of the country's many islands and was described by NASA as more powerful than an atomic bomb.

UMass Lowell's Mathew Barlow, professor of environmental, earth and atmospheric sciences, was among an international team of scientists who studied the atmospheric response to the eruption, the likes of which has never before been recorded. The group's findings were published in Nature.

As part of his work, Barlow created an animated video from satellite data that shows the eruption's dramatic effects. The event saw atmospheric waves pulse around the globe several times and stretch from Earth to the edge of space, some at speeds of 720 mph. The eruption also shot a plume of water vapor, along with volcanic ash, soil and smoke, 31 miles into the air. A short video produced by the researchers summarizes the effects.

"Some of the wave types the Hunga Tonga generated are very important to understanding how the atmosphere works and our ability to make effective computer models for weather forecasting and climate projections," said Barlow, a faculty member in UMass Lowell's Climate Change Initiative. "Through the expulsion of particles into the high atmosphere, some strong eruptions can also have a cooling effect on the climate, though the amount produced by Hunga Tonga does not appear sufficient for a notable climate effect, unlike other volcanic eruptions over the last century, like the Pinatubo eruption in Alaska in 1991."

According to Barlow, the Hunga Tonga explosion appears to be the strongest single burst of volcanic energy released in about 140 years, since the eruption of the Krakatoa volcano in Indonesia in 1883. Coupled with advances in satellite imagery, the strength of the Hunga Tonga eruption gave scientists an unprecedented view of atmospheric waves. Barlow said he and fellow researchers were able to analyze its effects in near-real time, in communication with agencies across the globe.

Read more at Science Daily

New research challenges long-held beliefs about limb regeneration

Ken Muneoka is no stranger to disrupting the field of regeneration; for example, in a 2019 ground-breaking publication in Nature, the Texas A&M University College of Veterinary Medicine & Biomedical Sciences (CVMBS) professor proved for the first time that joint regeneration in mammals was possible.

Now, his team is again challenging other centuries-old beliefs about the fundamental science of the field, this time related to how mammals might regenerate damaged parts of the body.

In humans, the natural ability to regenerate is limited to tissues like the epidermis, the outermost layer of skin, and some organs, such as the liver.

Other species, most notably salamanders, have the ability to regenerate complex structures such as bones, joints, and even entire limbs. As a result, scientists have been studying these species for more than 200 years to try to understand the mechanisms behind limb regeneration in the hopes of someday translating those mechanisms to induce more extensive regeneration in humans.

That research has led to a common belief that the single biggest key for limb regeneration is the presence of nerves.

While that may be true for salamanders and other species, it isn't the case in mammals, according to two of Muneoka's recently published studies. The first study, published last year in the Journal of Bone and Mineral Research, established that mechanical loading (the ability to apply force to or with an affected area) is a requirement for mammals. The second, published earlier this year in Developmental Biology, established that the absence of nerves does not inhibit regeneration.

Together, these findings present a sizeable shift in the thinking of how regeneration could work in human medicine.

"What these two studies show counteracts the two-century-old dogma that you need nerves to regenerate," Muneoka said. "What replaces it in mammals is that you need mechanical loading, not nerves."

Importance Of Mechanical Load

Scientists have long believed that two things must be present in an affected area in order to induce regeneration in mammals. The first is growth factors, which are molecules that can stimulate cells to regrow and reconstruct parts of the body.

In natural regeneration, these growth factors, which vary from species to species and by area being regenerated, are produced by the body. For human-induced regeneration, these growth factors must be introduced to the area.

The second factor believed to be necessary was nerves. This belief was based on many previous studies of induced regeneration in mammals that used denervated areas, usually digit tips, in which the whole limb was also no longer usable.

Those studies produced the predicted outcome -- when growth factors were introduced, regeneration did not take place -- leading to the conclusion that, as in other species, nerves were a requirement for regeneration.

But the mechanical load aspect was ignored.

In their studies, Muneoka and colleagues decided to take a step back and ask the question, "is it really the nerves, or is lack of mechanical load part of the equation as well?"

Connor Dolan, a former graduate student in Muneoka's lab and first author on both new studies (who now works at the Walter Reed National Military Medical Center), came up with a way to test the denervation requirement in mammals that was inspired by astronauts.

The technique, called hindlimb suspension, has been used by NASA and other scientists for decades to test how mammals react to zero gravity environments. A similar process is used during medical procedures on legs of large animals to prevent the animals from putting weight on the affected limbs.

"Dolan found that when the limbs were suspended, even though they still had lots of nerves and could move around, they couldn't actually put pressure on their limbs so the digit tips wouldn't regenerate," Muneoka said. "It just completely inhibited regeneration."

As soon as the mechanical load returns, however, regeneration is rescued.

"Absolutely nothing happens during the suspension," Muneoka said. "But once the load returns, there will be a couple weeks of delay, but then they'll begin to regenerate."

That first step proved that even though nerves might be required, the mechanical loading was a critical component to regeneration.

Taking the research a step further, Dolan's second publication showed that nerves weren't required by demonstrating that if a mouse has no nerves in one of its digits but does in the others -- so that it's still exerting force on the denervated digit -- that digit will still regenerate.

"He found that they regenerate a little bit slower, but they regenerated perfectly normally," Muneoka said.

Ramifications Of The Research


Muneoka is quick to point out that their studies aren't saying that previous research is wrong, just that it doesn't directly apply to humans.

"There have been a number of studies in salamanders that prove that when you remove the nerves, they do not regenerate," Muneoka said. "Researchers have also been able to put growth factors they know are being produced by nerves into the cells and rescue regeneration.

"So, salamanders probably do need nerves to regenerate," he said. "But if we're going to regenerate limbs in humans, it's going to be a lot more like what happens in mice."

Since first beginning to look at regeneration more than 20 years ago, a number of Muneoka's ideas have pushed back against the generally accepted theories about regeneration. He said that getting these two papers published took almost three years because they originally tried to submit them together.

"Many scientists don't embrace this idea," he said. "A lot of people's careers are really dependent on their studies of nerves and how they affect regeneration. For a study to come out and say that for humans it's unlikely you'll need the nerves, the whole biomedical application of what people are doing in salamanders and fish kind of goes out the window."

Looking Down The Road


Nerves not being required for regeneration in mammals may seem like an academic point. After all, what would be the point of regenerating a limb if the person couldn't feel it or control it because it had no nerves? In that sense, nerves are still going to be an important part of the puzzle.

From Muneoka's perspective, the shift is that instead of thinking of nerves as a requirement for regeneration, nerves are a part of what needs to be regenerated.

Larry Suva, head of the CVMBS' Department of Veterinary Physiology & Pharmacology (VTPP), says the issue is that nobody was even thinking about the load aspect previously.

"Think of a blast injury where a soldier is left with a stump," Suva said. "No one, until this paper came out, was even thinking about a requirement from mechanical influences. You had people see that a denervated animal doesn't regenerate and they're thinking it's because the nerve was cut, but nobody was studying the mechanical load aspect."

As Suva puts it, science is full of people looking where the light is best.

"I work on bones, so when I see a problem, I look at the bone problem," he said. "People who work on nerves, all they look at are nerves. So it's very rare that someone like Dr. Muneoka will take a step back and take a more holistic view.

"That's what he brought to this idea, to this 200-year-old data," Suva said. "We now have to look at regeneration through a different lens because now we know the mechanical influences are extremely important."

One of the results of research focusing on nerves is that scientists have been able to recreate the growth factors that nerves produce, which has allowed researchers to start regeneration in salamanders, even if the nerves aren't present. Suva said that with these new findings, scientists will now know they have to do the same with the mechanical load aspect if they want to start regeneration in mammals.

"Scientists already have been able to trick the body into thinking nerves are still present," he said. "But now they know they'll also have to trick it into thinking there's a mechanical load, something that has not been done before."

Because cells react differently under mechanical load, that load must somehow be translated biochemically inside the cell.

"There's a small number of labs looking at the biochemical basis for what mechanical load does to a cell," Muneoka said. "If we could understand that biochemical signal, then perhaps the physical force of mechanical load can be replaced by some sort of cocktail of molecules that will create the same signals in the cells."

The end of the road toward full human regeneration may still be a long way in the future, but Suva says that this kind of fundamental shift in thinking is a major marker on that road.

"Regeneration of a human limb may still be science fiction, but we know some facts about it, and now we know you have to have that mechanical load along with the growth factors," he said. "That changes how future scientists and engineers are going to solve this problem.

Read more at Science Daily

Discovery reveals large, year-round ozone hole over tropics

An ozone hole, seven times larger than the Antarctic ozone hole, is currently sitting over tropical regions and has been since the 1980s, according to a Canadian researcher.

In AIP Advances, published by AIP Publishing, Qing-Bin Lu, a scientist from the University of Waterloo in Ontario, Canada, reveals a large, all-season ozone hole -- defined as an area of ozone loss larger than 25% compared with the undisturbed atmosphere -- in the lower stratosphere over the tropics. It is comparable in depth to the well-known springtime Antarctic hole, but its area is roughly seven times greater.

"The tropics constitute half the planet's surface area and are home to about half the world's population," said Lu. "The existence of the tropical ozone hole may cause a great global concern.

"The depletion of the ozone layer can lead to increased ground-level UV radiation, which can increase risk of skin cancer and cataracts in humans, as well as weaken human immune systems, decrease agricultural productivity, and negatively affect sensitive aquatic organisms and ecosystems."

Lu's observation of the ozone hole came as a surprise to his peers in the scientific community, since it was not predicted by conventional photochemical models. His observational data agree well with the cosmic-ray-driven electron reaction (CRE) model and strongly indicate that the same physical mechanism is at work in both the Antarctic and tropical ozone holes.

As with the polar ozone hole, approximately 80% of the normal ozone value is found to be depleted at the center of the tropical ozone hole. Preliminary reports show ozone depletion levels over equatorial regions are already endangering large populations and the associated UV radiation reaching these regions is far greater than expected.

In the mid-1970s, atmospheric research suggested the ozone layer, which absorbs most of the sun's ultraviolet radiation, might be depleted because of industrial chemicals, primarily chlorofluorocarbons (CFCs). The 1985 discovery of the Antarctic ozone hole confirmed CFC-caused ozone depletion. Although bans on such chemicals have helped slow ozone depletion, evidence suggests it has persisted.

Lu said the tropical and polar ozone holes play a major role in cooling and regulating stratospheric temperatures, mirroring the formation of three "temperature holes" in the global stratosphere. He said this finding may prove crucial to better understanding global climate change.

Lu's discovery builds on previous studies of the CRE-initiated ozone-depleting mechanism that he and his colleagues originally proposed about two decades ago.

Read more at Science Daily

Jul 5, 2022

Shedding light on comet Chury's unexpected chemical complexity

Comets are fossils from ancient times and from the depths of our Solar System; they are relics from the formation of the sun, planets, and moons. A team led by chemist Dr. Nora Hänni of the Physics Institute of the University of Bern, Department of Space Research and Planetary Sciences, has now succeeded for the first time in identifying a whole series of complex organic molecules at a comet, as the team reports in a study published at the end of June in the journal Nature Communications.

More precise analysis thanks to Bernese mass spectrometer

In the mid-1980s, a fleet of spacecraft was sent out by the large space agencies to fly past Halley's comet. Onboard were several mass spectrometers that measured the chemical composition of both the comet's coma -- the thin atmosphere formed by sublimation of cometary ices close to the Sun -- and that of impacting dust particles. However, data collected by these instruments did not have the resolution needed to allow for unambiguous interpretation.

Now, more than 30 years later, the high-resolution mass spectrometer ROSINA, a Bern-led instrument onboard ESA's Rosetta spacecraft, collected data at comet 67P/Churyumov-Gerasimenko, also known as Chury, between 2014 and 2016. These data now allow the researchers to shed light for the first time on the complex organic budget of Chury.

The secret was hidden in the dust

When Chury reached its perihelion, the closest point to the Sun, it became very active. Sublimating cometary ices created outflow that dragged along dust particles. Expelled particles were heated up by solar irradiation to temperatures beyond those typically experienced at the cometary surface. This allows larger and heavier molecules to desorb, making them available to the high-resolution mass spectrometer ROSINA-DFMS (Rosetta Orbiter Sensor for Ion and Neutral Analysis-Double Focusing Mass Spectrometer). The astrophysicist Prof. em. Dr. Kathrin Altwegg, Principal Investigator of the ROSINA instrument and co-author of the new study, says: "Due to the extremely dusty conditions, the spacecraft had to retreat to a safe distance of a bit more than 200 km above the cometary surface in order for the instruments to be able to operate under steady conditions." Hence, it was possible to detect species composed of more than a handful of atoms which had previously remained hidden in the cometary dust.

The interpretation of such complex data is challenging. However, the Bernese team of researchers successfully identified a number of complex organic molecules, which have never been found in a comet before. "We found for instance naphthalene, which is responsible for the characteristic smell of mothballs. And we also found benzoic acid, a natural component of incense. In addition, we identified benzaldehyde, widely used to confer almond flavour to foods, and many other molecules." These heavy organics would apparently make Chury's scent even more complex, but also more appealing,  according to Hänni.

Apart from fragrant molecules, also many species with so-called prebiotic functionality have been identified in Chury's organics budget (e.g., formamide). Such compounds are important intermediates in the synthesis of biomolecules (e.g., sugars or amino acids). "It therefore seems likely that impacting comets -- as essential suppliers of organic material -- also contributed to the emergence of carbon-based life on Earth," explains Hänni.

Similar organics in Saturn and meteorites

In addition to the identification of individual molecules, the researchers also carried out a detailed characterization of the full ensemble of complex organic molecules in comet Chury, allowing them to place it in the larger Solar System context. Parameters like the average sum formula of this organic material or the average bonding geometry of the carbon atoms in it are of importance for a broad scientific community, ranging from astronomers to Solar System scientists.

"It turned out that, on average, Chury's complex organics budget is identical to the soluble part of meteoritic organic matter," explains Hänni and adds: "Moreover, apart from the relative amount of hydrogen atoms, the molecular budget of Chury also strongly resembles the organic material raining down on Saturn from its innermost ring, as detected by the INMS mass spectrometer onboard NASA's Cassini spacecraft."

Read more at Science Daily

Birds warned of food shortages by neighbor birds change physiology and behavior to prepare

Songbirds learning from nearby birds that food supplies might be growing short respond by changing their physiology as well as their behavior, research by the Oregon State University College of Science shows.

After receiving social information from food-restricted neighbors for three days, the red crossbills in the study raised their pace of consumption, increased their gut mass and maintained the size of the muscle responsible for flight when their own eating opportunities were subsequently limited to two short feeding periods per day.

Findings of the study by OSU's Jamie Cornelius, published in the journal Proceedings of the Royal Society B, suggest that birds can use social information about food shortages to gain an adaptive survival advantage.

"This is an entirely new form of physiological plasticity in birds and builds on prior work showing that social cues during stress can actually change how the brain processes stressors," said Cornelius, assistant professor of integrative biology.

Cornelius, an ecophysiologist, looks at the mechanisms that wild animals, particularly songbirds, use as they cope with unpredictable and extreme events in their environment, including fluctuations in food availability. Her research combines natural history, endocrinology and biotelemetry to probe for a better understanding of what limits an animal's fitness under difficult conditions.

"Animals have all kinds of strategies for dealing with challenging environments, ranging from seasonal avoidance strategies like hibernation or migration to behaviors like caching or altered foraging activity," she said. "Physiological adjustments in metabolic rate, digestive capacity and energy reserves can sometimes accompany behavioral changes, but those things can take time to execute. That means unpredictable environmental conditions are particularly challenging for many animals."

Cornelius showed in earlier research that a red crossbill with a food-restricted neighbor will secrete higher than usual levels of the stress hormone corticosterone during its own food-stress periods and also undergo brain activity changes that prepare the bird to respond more strongly to the challenge.

The red crossbill, known scientifically as Loxia curvirostra, is a nomadic species that migrates based on food availability and incorporates other birds' calls or behavior into its decision-making on how to respond to food deprivation.

Found throughout Europe and North America, the crossbill is a member of the finch family. It is known, as its name suggests, for upper and lower beak tips that cross, a feature that helps it pull seeds from conifer cones and other fruits.

"Crossbills are an interesting study system because of their dependence on conifer seeds," Cornelius said. "Conifer seed crops are somewhat unpredictable both in where seed crops develop each year and in how long a seed crop might support birds. We use crossbills as a study system to try to understand what strategies birds might have available when food suddenly declines because crossbills may have to cope with this more often than other species."

In this research, which involved crossbills in captivity, some of the birds received three days of social information from food-deprived birds prior to their own food limitations; other birds received three days of social information from food-deprived birds at the same time as their own food deprivation.

Cornelius refers to the former collection of birds as the social predictive focal group and the latter as the social parallel focal group.

Read more at Science Daily

Molecule boosts fat burning

Normally, fat cells store energy. In brown fat cells, however, energy is dissipated as heat: brown fat thus serves as a biological heater, and most mammals have this mechanism. In humans, it keeps newborns warm; in adults, brown fat activation correlates positively with cardio-metabolic health.

"Nowadays, however, we're toasty warm even in winter," explains Prof. Dr. Alexander Pfeifer from the Institute of Pharmacology and Toxicology at the University of Bonn. "So our body's own furnaces are hardly needed anymore." At the same time, we are eating an increasingly energy-dense diet and are also moving far less than our ancestors. These three factors are poison for brown fat cells: They gradually cease to function and eventually even die. On the other hand, the number of severely overweight people worldwide continues to increase. "Research groups around the world are therefore looking for substances that stimulate brown fat and thus increase fat burning," says Pfeifer.

Dying fat cells boost energy combustion of their neighbors

Together with a group of colleagues, the team at the University of Bonn has now identified a key molecule named inosine that is capable of burning fat. "It is known that dying cells release a mix of messenger molecules that influence the function of their neighbors," explains Dr. Birte Niemann from Pfeifer's research group. Together with her colleague Dr. Saskia Haufs-Brusberg, she planned and conducted the central experiments of the study. "We wanted to know if this mechanism also exists in brown fat."

The researchers therefore studied brown fat cells subjected to severe stress, so that the cells were virtually dying. "We found that they secrete the purine inosine in large quantities," Niemann says. More interesting, however, was how intact brown fat cells responded to the molecular call for help: They were activated by inosine (or simply by dying cells in their vicinity). Inosine thus fanned the furnace inside them. White fat cells also converted to their brown siblings. Mice fed a high-energy diet and treated with inosine at the same time remained leaner compared to control animals and were protected from diabetes.

The inosine transporter seems to play an important role in this context: this protein in the cell membrane transports inosine into the cell, lowering the extracellular concentration so that inosine can no longer exert its combustion-promoting effect.

Drug inhibits the inosine transporter

"There is a drug that was actually developed for coagulation disorders, but also inhibits the inosine transporter," says Pfeifer, who is also a member of the Transdisciplinary Research Areas "Life and Health" and "Sustainable Futures" at the University of Bonn. "We gave this drug to mice, and as a result they burned more energy." Humans also have an inosine transporter. In two to four percent of all people, it is less active due to a genetic variation. "Our colleagues at the University of Leipzig have genetically analyzed 900 individuals," Pfeifer explains. "Those subjects with the less active transporter were significantly leaner on average."

These results suggest that inosine also regulates thermogenesis in human brown fat cells. Substances that interfere with the activity of the transporter could therefore potentially be suitable for the treatment of obesity. The drug already approved for coagulation disorders could serve as a starting point. "However, further studies in humans are needed to clarify the pharmacological potential of this mechanism," Pfeifer says. Neither does he believe that a pill alone will be the solution to the world's rampant obesity pandemic. "But the available therapies are not effective enough at the moment," he stresses. "We therefore desperately need medications to normalize energy balance in obese patients."

The key role played by the body's own heating system is also demonstrated by a major new joint research consortium: The German Research Foundation (DFG) recently approved a Transregional Collaborative Research Center in which the Universities of Bonn, Hamburg and Munich will conduct targeted research on brown adipose tissue.

Read more at Science Daily

Scientists discover cancer trigger that could spur targeted drug therapies

Researchers at the Department of Energy's Oak Ridge National Laboratory have definitively linked the function of a specific domain of proteins important in plant-microbe biology to a cancer trigger in humans, knowledge that had eluded scientists for decades.

The team's findings, published in Nature Communications Biology, open up a new avenue for the development of selective drug therapies to fight a variety of cancers such as those that begin in the breast and stomach.

ORNL scientists set out to prove experimentally what they first deduced with computational studies: that the plasminogen-apple-nematode, or PAN, domain is linked to the cell proliferation that drives tumor growth in humans and defense signaling during plant-microbe interactions in bioenergy crops. The association was first made as researchers explored the genomes of crops like poplar and willow.

In the latest study, the ORNL team pinpointed four core amino acids called cysteine residues in the HGF protein critical to the PAN domain's function and studied their behavior in human cancer cell lines. They found that mutating any one of those amino acids turned off the signaling pathway known as HGF-c-MET that is abnormally heightened in cancer cells, causing them to rapidly multiply and spread.

Since cysteine residues are known to have many functions, the scientists also randomly tested other cysteines throughout the protein and found that none of them had the same impact on shutting down HGF-c-MET signaling. Mutating the four key cysteines had no effect on the overall structure of the protein, and merely inhibited the cancer signaling pathway, the team noted in the study.

Disrupting the right signal is one of the biggest challenges in developing new cancer therapies, said ORNL geneticist Wellington Muchero.

"It's very difficult to engineer molecules to interfere with an entire protein," he said. "Knowing the specific amino acids to target within that protein is a big advancement. You don't have to search the entire protein; just look for these four specific residues."

The identification of those core residues is a testament to the predictive power the team has built at ORNL, leveraging the lab's expertise in plant biology and biochemistry, genetics, and computational biology, as well as its supercomputing resources and the CRISPR/CAS-9 gene editing tool.

The discovery could lead to treatments for other diseases, including disrupting the infection pathway in mosquitos to make them less able to carry the malaria parasite, and fighting huanglongbing (HLB), the citrus greening disease killing citrus trees in Florida and California, by targeting the Asian citrus psyllid insect that spreads it.

In plants, ORNL scientists are using their knowledge of the PAN domain to improve resistance to pathogens and pests in biomass crops, such as poplar and willow, that can be broken down and converted to sustainable jet fuel. They are exploring the genetic processes that encourage beneficial interactions between plants and microbes to build hardiness in those crops.

The research demonstrates the close similarities in the DNA structure of plants, humans and other organisms, which make plants an important discovery platform, Muchero said. "We can do things with plants that you cannot do with humans or animals in the research process," he added.

"I can work with equal efficiency in plant and human cancers. The expertise is the same," said Debjani Pal, an ORNL postdoctoral researcher with a background in biochemistry and human cancer research. "We've established a globalized experimental platform here at ORNL that shows no matter what system you're using, plant or animal, if your hypothesis is correct then the science is repeatable in all of them, no matter what cell line you're using."

"At the bottom of it all, we have the same biological underpinnings," Muchero said.

Read more at Science Daily

Jul 4, 2022

Floating in space might be fun, but study shows it's hard on earthly bodies

Ever wondered if you have anything in common with an astronaut? Turns out there are 206 things -- your bones. It's these parts of our body that are the focus of a research study on bone loss in astronauts, and the important question of whether bone can be re-gained after returning to Earth.

The TBone study was started in 2015 by Dr. Steven Boyd, PhD, director of the McCaig Institute for Bone and Joint Health and professor in the Cumming School of Medicine. The study has followed 17 astronauts before and after spaceflight over the last seven years to understand whether bone recovers after 'long-duration' spaceflight. Findings are published in Scientific Reports, and while it might not seem like it matters to you here on Earth, the research is important to better understand bone health generally.

"Bone loss happens in humans -- as we age, get injured, or any scenario where we can't move the body, we lose bone," says Dr. Leigh Gabel, PhD, assistant professor in Kinesiology, and lead author of the study.

"Understanding what happens to astronauts and how they recover is incredibly rare. It lets us look at the processes happening in the body in such a short time frame. We would have to follow someone for decades on Earth to see the same amount of bone loss," Gabel says.

The researchers travelled to Johnson Space Center in Houston, Texas to scan the wrists and ankles of the astronauts before they left for space, on their return to Earth, and then at six and 12 months after return.

"We found that weight-bearing bones only partially recovered in most astronauts one year after spaceflight," she says. "This suggests the permanent bone loss due to spaceflight is about the same as a decade worth of age-related bone loss on Earth."

This loss happens because bones that would normally be weight-bearing on Earth, like your legs, don't have to carry weight in microgravity -- you just float.

"We've seen astronauts who had trouble walking due to weakness and lack of balance after returning from spaceflight, to others who cheerfully road their bike on Johnson Space Center campus to meet us for a study visit. There is quite a variety of response among astronauts when they return to Earth, says Boyd.

Former UCalgary Chancellor and astronaut, Dr. Robert Thirsk, BSc (Eng)'76, Hon. LLD'09, MD, knows firsthand how bizarre the return to Earth can be. "Just as the body must adapt to spaceflight at the start of a mission, it must also readapt back to Earth's gravity field at the end," says Thirsk. "Fatigue, light-headedness, and imbalance were immediate challenges for me on my return. Bones and muscles take the longest to recover following spaceflight. But within a day of landing, I felt comfortable again as an Earthling."

Astronauts who flew on shorter missions, under six months, recovered more bone strength and density in the lower body than those who flew for longer durations.

Access to astronauts is rare -- the study team includes two members from the European Space Agency (ESA), Dr. Anna-Maria Liphardt, PhD, and Martina Heer, PhD, as well as two from NASA, Dr. Scott Smith, PhD, and Dr. Jean Sibonga, PhD. The study was funded by the Canadian Space Agency and conducted in partnership with ESA, NASA and astronauts from North America, Europe, and Asia.

As future space missions are exploring travel to more distant locations, the study's next iteration will explore the effects of even longer trips, to support astronauts who may one day travel beyond the International Space Station.

As Thirsk says, "Astronauts will venture to deep space this decade and, in the coming centuries, humanity will populate other star systems. Let's push back the frontiers of space exploration now to make this vision possible."

Read more at Science Daily

What are whale sharks up to?

The largest fish in the ocean is a globe-trotter that can occasionally be found basking in the coastal waters of the Panamanian Pacific. However, little more is known about the habits of the whale shark (Rhincodon typus) in the region. By satellite-tracking the whereabouts of 30 of them, scientists from the Smithsonian Tropical Research Institute (STRI), the Anderson Cabot Center for Ocean Life and the University of Panama explored the factors influencing this endangered species' behavior.

Like other large sharks, R. typus may take years or even decades to reach maturity and reproduce, which makes the species vulnerable to population declines, especially when combined with human threats. For instance, they may be caught in fishing nets as bycatch or face the risk of vessel strikes when shipping lanes overlap with their feeding sites. Being able to understand and predict whale shark behavior is a necessary step for protecting the species.

The satellite monitoring of this species, led by STRI marine ecologist Héctor Guzmán, found that whale sharks feed mainly in the coastal waters, seamounts and ridges of the Panamanian Pacific, where they can find an abundance of their favorite foods: small fish and plankton. The sharks were also spotted swimming northward and southward along the coast, towards Mexico and Ecuador, and heading out into the open ocean to feed.

"This species requires clear regional planning," said Guzmán. "Once the feeding and breeding aggregation areas are identified, some protection measures should be implemented. The newly announced marine protected area expansions across the region provide an interesting platform for large-scale conservation practices."

Although they used marine protected areas, the whale sharks also spent time in industrial fishing and vessel traffic zones, which could endanger them according to the new article published in Frontiers in Marine Science.

"The study shows how complex it is to protect whale sharks: tagged individuals visited 17 marine protected areas in 5 countries, but more than 77% of their time they were in areas without any protection," said Catalina Gómez, co-author of the study and marine ecologist at the University of Panama.

Thus, for highly migratory and endangered species such as the whale shark, conservation measures should go beyond the establishment of local marine protected areas.

Efforts should focus on protecting large oceanic areas and establishing marine corridors that transcend national borders, such as the newly expanded Cordillera de Coiba Marine Protected Area in Panama or the Marine Conservation Corridor of the Eastern Tropical Pacific, which connects Coiba with Costa Rica's Cocos Island, the Galapagos in Ecuador and Colombia's Malpelo Island.

"A periodic tagging program should continue for two main reasons: first, we still don't know where the species reproduces and tracking may lead us in the right direction," said Guzmán. "Second, we know that they are moving across extensive areas. We have identified potential corridors or seaways, as well as aggregation areas, that require management attention and clear protection rules. Tracking will allow us to better identify those regional routes."

The satellite tracking also revealed a whale shark migratory pattern that seems to be associated with circular ocean currents called eddies.

"Eddies are recognized as potential feeding areas for migratory species or food epicenters in the oceans, so they can swim in those areas for a long time while foraging and feeding," said Guzman. "However, eddies are dynamic systems and change constantly in speed or strength, size and location, even seasonally. These feeding areas are important for conservation, especially considering their dynamics and potential changes associated with climate change."

Read more at Science Daily

How placentas evolved in mammals

The fossil record tells us about ancient life through the preserved remains of body parts like bones, teeth and turtle shells. But how to study the history of soft tissues and organs, which can decay quickly, leaving little evidence behind?

In a new study, scientists use gene expression data, an approach known as transcriptomics, to investigate the ancient origins of one organ: the placenta, which is vital to pregnancy.

"In some mammals, like humans, the placenta is really invasive, so it invades all the way through the wall of the uterus, into the maternal tissue. In other mammals, the placenta just touches the wall of the uterus. And then there's everything in between," says senior author Vincent J. Lynch, PhD, associate professor of biological sciences in the University at Buffalo College of Arts and Sciences.

"So what kind of placentas were early placentas?" he says. "We use gene expression patterns to reconstruct the evolution of the placenta and predict what the placenta of the last common ancestor of eutherian mammals looked like. Our data tells us that this placenta was invasive, and that non-invasive placentas evolved multiple times among mammals. This addresses a 150-year-old mystery: People have been debating what kind of placenta the first one was since then."

As Lynch explains, all living mammals other than marsupials and egg-laying monotremes are eutherians, which have long pregnancies in which the developing fetus evokes a strong physiological response in the mother.

The research was published on June 30 in eLife. Lynch led the study with first author Katelyn Mika, PhD, University of Chicago postdoctoral scholar in human genetics and in organismal biology and anatomy. Camilla M. Whittington, PhD, and Bronwyn M. McAllan, PhD, both at the University of Sydney, are also co-authors.

"Our ability to ask how the placenta might have functioned at different points during its evolution by using the gene expression profiles of currently existing animals to reconstruct the ancestors is a really cool approach and provides us more information on how changing gene expression can contribute to the evolution of a new trait," Mika says.

To conduct the analysis, the team compared the genes active in the uterus of various mammals during pregnancy. After finding that these gene expression profiles correlated with the degree of placental invasiveness, the scientists used their data to predict what ancestral mammalian placentas looked like.

The study included about 20 species, such as the egg-laying platypus, pouch-bearing marsupials, and a range of eutherian mammals that give birth to live young.
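The underlying idea of inferring an ancestral condition from the traits of living species can be illustrated with a much simpler, classical procedure. As a minimal sketch only, and not the authors' method (the study worked with uterine gene expression profiles and formal phylogenetic models), here is Fitch parsimony applied in Python to a hypothetical trait matrix; the species, tree topology and trait values are placeholders:

```python
# Toy illustration of ancestral-state reconstruction (Fitch parsimony) for a
# discrete trait such as placental invasiveness. Not the authors' pipeline:
# the species, topology and trait assignments below are hypothetical.

def fitch(tree, leaf_states):
    """Return the parsimony state set at the root of a nested-tuple tree."""
    if isinstance(tree, str):                  # a leaf: return its observed state set
        return leaf_states[tree]
    left, right = tree                         # an internal node with two children
    s_left = fitch(left, leaf_states)
    s_right = fitch(right, leaf_states)
    common = s_left & s_right                  # Fitch rule: intersect if possible,
    return common if common else s_left | s_right   # otherwise take the union

# Hypothetical data: 1 = invasive placenta, 0 = non-invasive
traits = {"human": {1}, "mouse": {1}, "dog": {1}, "cow": {0}}
tree = ((("human", "mouse"), "dog"), "cow")    # made-up topology

print(fitch(tree, traits))                     # -> {0, 1}: ambiguous from the trait alone
```

With this toy data the reconstruction at the root comes out ambiguous, which is exactly the kind of uncertainty that richer molecular evidence, such as the expression profiles used in the study, helps to resolve.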

The small subset is one limitation of the analysis: The authors write in eLife that research on a larger number of species is needed to help determine the strength of the findings.

Nevertheless, the study makes important contributions in understanding how pregnancy evolved, Lynch says. The results could also benefit modern medicine.

Read more at Science Daily

Dinosaurs took over amid ice, not warmth, says a new study of ancient mass extinction

Many of us know the conventional theory of how the dinosaurs died 66 million years ago: in Earth's fiery collision with an asteroid, followed by a global winter as dust and debris choked the atmosphere. But there was a previous extinction, far more mysterious and less discussed: the one 202 million years ago, which killed off the big reptiles that until then ruled the planet and apparently cleared the way for dinosaurs to take over. What caused the so-called Triassic-Jurassic Extinction, and why did dinosaurs thrive when other creatures died?

We know that the world was generally hot and steamy during the Triassic Period, which preceded the extinction, and during the following Jurassic, which kicked off the age of dinosaurs. However, a new study turns the idea of heat-loving dinosaurs on its head: It presents the first physical evidence that Triassic dinosaur species -- then a minor group largely relegated to the polar regions -- regularly endured freezing conditions there. The telltale indicators: dinosaur footprints along with odd rock fragments that could only have been deposited by ice. The study's authors say that during the extinction, cold snaps already happening at the poles spread to lower latitudes, killing off the coldblooded reptiles. Dinosaurs, already adapted, survived the evolutionary bottleneck and spread out. The rest is ancient history.

"Dinosaurs were there during the Triassic under the radar all the time," said Paul Olsen, a geologist at Columbia University's Lamont-Doherty Earth Observatory, and lead author of the study. "The key to their eventual dominance was very simple. They were fundamentally cold-adapted animals. When it got cold everywhere, they were ready, and other animals weren't."

The study, based on recent excavations in the remote desert of northwest China's Junggar Basin, was just published in the journal Science Advances.

Dinosaurs are thought to have first appeared during the Triassic Period in temperate southerly latitudes about 231 million years ago, when most of the planet's land was joined together in one giant continent geologists call Pangaea. They made it to the far north by about 214 million years ago. Until the mass extinction at 202 million years, the more expansive tropical and subtropical regions in between were dominated by reptiles including relatives of crocodiles and other fearsome creatures.

During the Triassic, and for most of the Jurassic, atmospheric concentrations of carbon dioxide ranged at or above 2000 parts per million -- five times today's levels -- so temperatures must have been intense. There is no evidence of polar ice caps then, and excavations have shown that deciduous forests grew in polar regions. However, some climate models suggest that the high latitudes were chilly some of the time; even with all that CO2, they would have received little sunlight much of the year, and temperatures would have declined at least seasonally. But until now, no one had produced any physical evidence that they froze.

At the end of the Triassic, a geologically brief period of perhaps a million years saw the extinction of more than three quarters of all terrestrial and marine species on the planet, including shelled creatures, corals and all sizable reptiles. Some animals living in burrows, such as turtles, made it through, as did a few early mammals. It is unclear exactly what happened, but many scientists connect it to a series of massive volcanic eruptions that could have lasted hundreds of years at a stretch. At this time, Pangaea started to split apart, opening what is now the Atlantic Ocean, and separating what are now the Americas from Europe, Africa and Asia. Among other things, the eruptions would have caused atmospheric carbon dioxide to skyrocket beyond its already high levels, causing deadly temperature spikes on land and turning ocean waters too acidic for many creatures to survive.

The authors of the new study cite a third factor: During the eruptions' fiercest phases, they would have belched sulfur aerosols that deflected so much sunlight, they caused repeated global volcanic winters that overpowered high greenhouse-gas levels. These winters might have lasted a decade or more; even the tropics may have seen sustained freezing conditions. This killed uninsulated reptiles, but cold-adapted, insulated dinosaurs were able to hang on, say the scientists.

The researchers' evidence: fine-grained sandstone and siltstone formations left by sediments in shallow ancient lake bottoms in the Junggar Basin. The sediments formed 206 million years ago during the late Triassic, through the mass extinction and beyond. At that time, before landmasses rearranged themselves, the basin lay at about 71 degrees north, well above the Arctic Circle. Footprints found by the authors and others show that dinosaurs were present along shorelines. Meanwhile, in the lakes themselves, the researchers found abundant pebbles up to about 1.5 centimeters across within the normally fine sediments. Far from any apparent shoreline, the pebbles had no business being there. The only plausible explanation for their presence: they were ice-rafted debris (IRD).

Briefly, IRD is created when ice forms against a coastal landmass and incorporates bits of underlying rock. At some point the ice becomes unmoored and drifts away into the adjoining water body. When it melts, the rocks drop to the bottom, mixing with normal fine sediments. Geologists have extensively studied ancient IRD in the oceans, where it is delivered by glacial icebergs, but rarely in lake beds; the Junggar Basin discovery adds to the scant record. The authors say the pebbles were likely picked up during winter, when lake waters froze along pebbly shorelines. When warm weather returned, chunks of that ice floated off with samples of the pebbles in tow, and later dropped them.

"This shows that these areas froze regularly, and the dinosaurs did just fine," said study co-author Dennis Kent, a geologist at Lamont-Doherty.

How did they do it? Evidence has been building since the 1990s that many if not all non-avian dinosaurs, including tyrannosaurs, had primitive feathers. If not used for flight, such coverings could have served for mating displays, but the researchers say their main purpose was insulation. There is also good evidence that, unlike the cold-blooded reptiles, many dinosaurs possessed warm-blooded, high-metabolism systems. Both qualities would have helped dinosaurs in chilly conditions.

"Severe wintery episodes during volcanic eruptions may have brought freezing temperatures to the tropics, which is where many of the extinctions of big, naked, unfeathered vertebrates seem to have occurred," said Kent. "Whereas our fine feathered friends acclimated to colder temperatures in higher latitudes did OK."

The findings defy the conventional imagery of dinosaurs, but some prominent specialists say they are convinced. "There is a stereotype that dinosaurs always lived in lush tropical jungles, but this new research shows that the higher latitudes would have been freezing and even covered in ice during parts of the year," said Stephen Brusatte, a professor of paleontology and evolution at the University of Edinburgh. "Dinosaurs living at high latitudes just so happened to already have winter coats [while] many of their Triassic competitors died out."

Randall Irmis, curator of paleontology at the Natural History Museum of Utah, and specialist in early dinosaurs, agrees. "This is the first detailed evidence from the high paleolatitudes, the first evidence for the last 10 million years of the Triassic Period, and the first evidence of truly icy conditions," he said. "People are used to thinking of this as being a time when the entire globe was hot and humid, but that just wasn't the case."

Olsen says the next step to better understand this period is for more researchers to look for fossils in former polar areas like the Junggar Basin. "The fossil record is very bad, and no one is prospecting," he said. "These rocks are gray and black, and it is much harder to prospect [for fossils] in these strata. Most paleontologists are attracted to the late Jurassic, where it's known there are many big skeletons to be had. The paleo-Arctic is basically ignored."

Read more at Science Daily

Jul 3, 2022

Capturing the onset of galaxy rotation in the early universe

As telescopes have become more advanced and powerful, astronomers have been able to detect ever more distant galaxies, among them some of the earliest galaxies to form in our universe. Because the universe is expanding, these galaxies recede from us, and the greater the distance, the faster a galaxy appears to move away. We can estimate how fast a galaxy is moving, and in turn when it formed, based on how "redshifted" its emission appears. This is similar to the "Doppler effect," in which light from objects moving away from an observer appears shifted towards longer wavelengths (hence the term "redshift").
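As an illustrative aside (not specific to JD1 or to the team's analysis), the redshift measured from a spectral line, and the recession velocity it implies for nearby objects, follow from:

\[
z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}}, \qquad v \approx c z \quad (z \ll 1)
\]

For example, a line emitted at 500 nm but observed at 550 nm gives z = 0.1 and a recession velocity of roughly 0.1c, or about 30,000 kilometers per second. For extremely distant galaxies such as JD1 the redshift is much larger than 1, and the shift is interpreted as the expansion of space rather than through this simple Doppler approximation.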

The Atacama Large Millimeter/submillimeter Array (ALMA) telescope located in the midst of the Atacama Desert in Chile is particularly well-suited for observing such redshifts in galaxy emissions. Recently, a team of international researchers including Professor Akio Inoue and graduate student Tsuyoshi Tokuoka from Waseda University, Japan, Dr. Takuya Hashimoto at University of Tsukuba, Japan, Professor Richard S. Ellis at University College London, and Dr. Nicolas Laporte, a research fellow at the University of Cambridge, UK, has observed redshifted emissions of a distant galaxy, MACS1149-JD1 (hereafter JD1), which has led them to some interesting conclusions. "Beyond finding high-redshift, namely very distant, galaxies, studying their internal motion of gas and stars provides motivation for understanding the process of galaxy formation in the earliest possible universe," explains Ellis. The findings of their study have been published in The Astrophysical Journal Letters.

Galaxy formation begins with the accumulation of gas and proceeds with the formation of stars from that gas. With time, star formation progresses from the center outward, a galactic disk develops, and the galaxy acquires a particular shape. As star formation continues, newer stars form in the rotating disk while older stars remain in the central part. By studying the age of the stellar objects and the motion of the stars and gas in the galaxy, it is possible to determine the stage of evolution the galaxy has reached.

Conducting a series of observations over a period of two months, the astronomers successfully measured small differences in the "redshift" from position to position inside the galaxy and found that JD1 satisfied the criterion for a galaxy dominated by rotation. Next, they modeled the galaxy as a rotating disk and found that this reproduced the observations very well. The calculated rotational speed was about 50 kilometers per second, far slower than the roughly 220 kilometers per second at which the Milky Way's disk rotates. The team also measured the diameter of JD1 at only 3,000 light-years, much smaller than the Milky Way's 100,000 light-years.

The significance of their result is that JD1 is by far the most distant and, therefore, earliest source yet found that has a rotating disk of gas and stars. Together with similar measurements of nearer systems in the research literature, this has allowed the team to delineate the gradual development of rotating galaxies over more than 95% of our cosmic history.

Furthermore, the mass estimated from the rotational speed of the galaxy was in line with the stellar mass previously estimated from the galaxy's spectral signature, and it comes predominantly from "mature" stars that formed about 300 million years before the epoch at which the galaxy is observed. "This shows that the stellar population in JD1 formed at an even earlier epoch of the cosmic age," says Hashimoto.
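As a rough, back-of-the-envelope sketch only (the team's own disk modeling is more detailed, and the numbers here are quoted merely to show how rotation speed and size translate into mass), the circular-orbit relation gives a characteristic dynamical mass for a rotation speed of about 50 kilometers per second at a radius of about 1,500 light-years:

\[
M_{\mathrm{dyn}} \sim \frac{v^{2} R}{G} \approx \frac{(5 \times 10^{4}\,\mathrm{m\,s^{-1}})^{2} \times 1.4 \times 10^{19}\,\mathrm{m}}{6.7 \times 10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}}} \approx 5 \times 10^{38}\,\mathrm{kg} \approx 3 \times 10^{8}\,M_{\odot}
\]

This is an order-of-magnitude figure of a few hundred million solar masses; the study's quantitative comparison with the stellar mass rests on its full rotating-disk model rather than on this simple estimate.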

Read more at Science Daily

Gemini North spies ultra-faint fossil galaxy discovered on outskirts of Andromeda

An unusual ultra-faint dwarf galaxy has been discovered on the outer fringes of the Andromeda Galaxy thanks to the sharp eyes of an amateur astronomer examining archival data processed by NSF's NOIRLab's Community Science and Data Center. Follow-up by professional astronomers using the International Gemini Observatory, a Program of NSF's NOIRLab, revealed that the dwarf galaxy -- Pegasus V -- contains very few heavier elements and is likely to be a fossil of the first galaxies.

An unusual ultra-faint dwarf galaxy has been discovered on the edge of the Andromeda Galaxy using several facilities of NSF's NOIRLab. The galaxy, called Pegasus V, was first detected as part of a systematic search for Andromeda dwarfs coordinated by David Martinez-Delgado from the Instituto de Astrofísica de Andalucía, Spain, when amateur astronomer Giuseppe Donatiello found an interesting 'smudge' in data in a DESI Legacy Imaging Surveys image. The image was taken with the US Department of Energy-fabricated Dark Energy Camera on the Víctor M. Blanco 4-meter Telescope at Cerro Tololo Inter-American Observatory (CTIO). The data were processed through the Community Pipeline which is operated by NOIRLab's Community Science and Data Center (CSDC).

Deeper follow-up observations by astronomers using the larger, 8.1-meter Gemini North telescope with the GMOS instrument revealed faint stars in Pegasus V, confirming that it is an ultra-faint dwarf galaxy on the outskirts of the Andromeda Galaxy. Gemini North in Hawai'i is one half of the International Gemini Observatory.

The observations with Gemini revealed that the galaxy appears to be extremely deficient in heavier elements compared to similar dwarf galaxies, meaning that it is very old and likely to be a fossil of the first galaxies in the Universe.

"We have found an extremely faint galaxy whose stars formed very early in the history of the Universe,"commented Michelle Collins, an astronomer at the University of Surrey, UK and lead author of the paper announcing this discovery. "This discovery marks the first time a galaxy this faint has been found around the Andromeda Galaxy using an astronomical survey that wasn't specifically designed for the task."

The faintest galaxies are considered to be fossils of the very first galaxies that formed, and these galactic relics contain clues about the formation of the earliest stars. While astronomers expect the Universe to be teeming with faint galaxies like Pegasus V, they have not yet discovered nearly as many as their theories predict. If there are truly fewer faint galaxies than predicted this would imply a serious problem with astronomers' understanding of cosmology and dark matter.

Discovering examples of these faint galaxies is therefore an important endeavor, but also a difficult one. Part of the challenge is that these faint galaxies are extremely tricky to spot, appearing as just a few sparse stars hidden in vast images of the sky.

"The trouble with these extremely faint galaxies is that they have very few of the bright stars which we typically use to identify them and measure their distances," explained Emily Charles, a PhD student at the University of Surrey who was also involved in the study. "Gemini's 8.1-meter mirror allowed us to find faint, old stars which enabled us both to measure the distance to Pegasus V and to determine that its stellar population is extremely old."

The strong concentration of old stars that the team found in Pegasus V suggests that the object is likely a fossil of the first galaxies. When compared with the other faint galaxies around Andromeda, Pegasus V seems uniquely old and metal-poor, indicating that its star formation ceased very early indeed.

"We hope that further study of Pegasus V's chemical properties will provide clues into the earliest periods of star formation in the Universe," concluded Collins. "This little fossil galaxy from the early Universe may help us understand how galaxies form, and whether our understanding of dark matter is correct."

"The public-access Gemini North telescope provides an array of capabilities for community astronomers," said Martin Still, Gemini Program Officer at the National Science Foundation. "In this case, Gemini supported this international team to confirm the presence of the dwarf galaxy, associate it physically with the Andromeda Galaxy, and determine the metal-deficient nature of its evolved stellar population."

Read more at Science Daily