Aug 12, 2022

Hubble sees red supergiant star Betelgeuse slowly recovering after blowing its top

The star Betelgeuse appears as a brilliant, ruby-red, twinkling spot of light in the upper right shoulder of the winter constellation Orion the Hunter. But when viewed close up, astronomers know it as a seething monster with a 400-day-long heartbeat of regular pulsations. This aging star is classified as a supergiant because it has swelled up to an astonishing diameter of approximately 1 billion miles. If placed at the center of our solar system it would reach out to the orbit of Jupiter. The star's ultimate fate is to explode as a supernova. When that eventually happens it will be briefly visible in the daytime sky from Earth. But there are a lot of fireworks going on now before the final detonation.
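
As a rough check of the Jupiter comparison above (taking Jupiter's mean orbital radius to be about 484 million miles, a standard figure not given in the article):

    $$ R_\mathrm{Betelgeuse} \approx \tfrac{1}{2} \times 10^{9}\ \mathrm{miles} = 5 \times 10^{8}\ \mathrm{miles} \;>\; a_\mathrm{Jupiter} \approx 4.8 \times 10^{8}\ \mathrm{miles} $$

so a star of that diameter centered on the Sun would indeed engulf Jupiter's orbit.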

Astronomers using Hubble and other telescopes have deduced that the star blew off a huge piece of its visible surface in 2019.

This has never before been seen on a star. Our petulant Sun routinely goes through mass ejections of its outer atmosphere, the corona. But those events are orders of magnitude weaker than what was seen on Betelgeuse. The first clue came when the star mysteriously darkened in late 2019. An immense cloud of obscuring dust formed from the ejected surface as it cooled. Astronomers have now pieced together a scenario for the upheaval. The star is still slowly recovering as the photosphere rebuilds itself, and the interior is reverberating like a bell that has been hit with a sledgehammer, disrupting the star's normal cycle. This doesn't mean the monster star is going to explode any time soon, but the late-life convulsions may continue to amaze astronomers.

Analyzing data from NASA's Hubble Space Telescope and several other observatories, astronomers have concluded that the bright red supergiant star Betelgeuse quite literally blew its top in 2019, losing a substantial part of its visible surface and producing a gigantic Surface Mass Ejection (SME). This is something never before seen in a normal star's behavior.

Our Sun routinely blows off parts of its tenuous outer atmosphere, the corona, in an event known as a Coronal Mass Ejection (CME). But the Betelgeuse SME blasted off 400 billion times as much mass as a typical CME!

The monster star is still slowly recovering from this catastrophic upheaval. "Betelgeuse continues doing some very unusual things right now; the interior is sort of bouncing," said Andrea Dupree of the Center for Astrophysics | Harvard & Smithsonian in Cambridge, Massachusetts.

These new observations yield clues as to how red stars lose mass late in their lives as their nuclear fusion furnaces burn out, before exploding as supernovae. The amount of mass loss significantly affects their fate. However, Betelgeuse's surprisingly petulant behavior is not evidence that the star is about to blow up anytime soon; the mass loss event is not necessarily the signal of an imminent explosion.

Dupree is now pulling together all the puzzle pieces of the star's petulant behavior before, after, and during the eruption into a coherent story of a never-before-seen titanic convulsion in an aging star.

This includes new spectroscopic and imaging data from the STELLA robotic observatory, the Fred L. Whipple Observatory's Tillinghast Reflector Echelle Spectrograph (TRES), NASA's Solar Terrestrial Relations Observatory spacecraft (STEREO-A), NASA's Hubble Space Telescope, and the American Association of Variable Star Observers (AAVSO). Dupree emphasizes that the Hubble data was pivotal to helping sort out the mystery.

"We've never before seen a huge mass ejection of the surface of a star. We are left with something going on that we don't completely understand. It's a totally new phenomenon that we can observe directly and resolve surface details with Hubble. We're watching stellar evolution in real time."

The titanic outburst in 2019 was possibly caused by a convective plume, more than a million miles across, bubbling up from deep inside the star. It produced shocks and pulsations that blasted off a chunk of the photosphere, leaving the star with a large, cool surface area beneath the dust cloud formed by the cooling ejecta. Betelgeuse is now struggling to recover from this injury.

Weighing several times as much as our Moon, the fractured piece of photosphere sped off into space and cooled to form a dust cloud that blocked light from the star as seen by Earth observers. The dimming, which began in late 2019 and lasted for a few months, was easily noticeable even by backyard observers watching the star change brightness. One of the brightest stars in the sky, Betelgeuse is easily found in the right shoulder of the constellation Orion.
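
The lunar-mass figure here and the "400 billion times a typical CME" figure quoted earlier are mutually consistent, as a back-of-envelope check shows (taking a typical large CME to carry roughly $10^{15}$ g and the Moon's mass as $7.3 \times 10^{25}$ g, standard values assumed here rather than quoted in the article):

    $$ M_\mathrm{SME} \approx (4 \times 10^{11}) \times 10^{15}\ \mathrm{g} = 4 \times 10^{26}\ \mathrm{g} \approx 5\, M_\mathrm{Moon} $$

which is indeed "several times" the mass of the Moon.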

Even more fantastic, the supergiant's 400-day pulsation rate is now gone, at least temporarily. For almost 200 years, astronomers have measured this rhythm in Betelgeuse's brightness variations and surface motions. Its disruption attests to the ferocity of the blowout.

The star's interior convection cells, which drive the regular pulsation, may be sloshing around like an imbalanced washing machine tub, Dupree suggests. TRES and Hubble spectra imply that the outer layers may be back to normal, but the surface is still bouncing like a plate of gelatin dessert as the photosphere rebuilds itself.

Though our Sun has coronal mass ejections that blow off small pieces of the outer atmosphere, astronomers have never witnessed such a large amount of a star's visible surface get blasted into space. Therefore, surface mass ejections and coronal mass ejections may be different events.

Read more at Science Daily

Astronomers confirm star wreck as source of extreme cosmic particles

Astronomers have long sought the launch sites for some of the highest-energy protons in our galaxy. Now a study using 12 years of data from NASA's Fermi Gamma-ray Space Telescope confirms that one supernova remnant is just such a place.

Fermi has shown that the shock waves of exploded stars boost particles to speeds comparable to that of light. Called cosmic rays, these particles mostly take the form of protons, but can include atomic nuclei and electrons. Because they all carry an electric charge, their paths become scrambled as they whisk through our galaxy's magnetic field, masking their birthplace -- we can no longer tell which direction they came from. But when these particles collide with interstellar gas near the supernova remnant, they produce a tell-tale glow in gamma rays -- the highest-energy light there is.

"Theorists think the highest-energy cosmic ray protons in the Milky Way reach a million billion electron volts, or PeV energies," said Ke Fang, an assistant professor of physics at the University of Wisconsin, Madison. "The precise nature of their sources, which we call PeVatrons, has been difficult to pin down."

Trapped by chaotic magnetic fields, the particles repeatedly cross the supernova's shock wave, gaining speed and energy with each passage. Eventually, the remnant can no longer hold them, and they zip off into interstellar space.

Boosted to some 10 times the energy mustered by the world's most powerful particle accelerator, the Large Hadron Collider, PeV protons are on the cusp of escaping our galaxy altogether.

Astronomers have identified a few suspected PeVatrons, including one at the center of our galaxy. Naturally, supernova remnants top the list of candidates. Yet out of about 300 known remnants, only a few have been found to emit gamma rays with sufficiently high energies.

One particular star wreck has commanded a lot of attention from gamma-ray astronomers. Called G106.3+2.7, it's a comet-shaped cloud located about 2,600 light-years away in the constellation Cepheus. A bright pulsar caps the northern end of the supernova remnant, and astronomers think both objects formed in the same explosion.

Fermi's Large Area Telescope, its primary instrument, detected billion-electron-volt (GeV) gamma rays from within the remnant's extended tail. (For comparison, visible light's energy measures between about 2 and 3 electron volts.) The Very Energetic Radiation Imaging Telescope Array System (VERITAS) at the Fred Lawrence Whipple Observatory in southern Arizona recorded even higher-energy gamma rays from the same region. And both the High-Altitude Water Cherenkov Gamma-Ray Observatory in Mexico and the Tibet AS-Gamma Experiment in China have detected photons with energies of 100 trillion electron volts (TeV) from the area probed by Fermi and VERITAS.
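
For readers keeping track of units, the photon-energy ladder spanned by these instruments is (standard SI prefixes):

    $$ 1\ \mathrm{GeV} = 10^{9}\ \mathrm{eV}, \qquad 1\ \mathrm{TeV} = 10^{12}\ \mathrm{eV}, \qquad 1\ \mathrm{PeV} = 10^{15}\ \mathrm{eV} $$

so the 100 TeV photons seen by HAWC and the Tibet AS-Gamma Experiment carry about $4 \times 10^{13}$ times the energy of a 2.5-electron-volt photon of visible light.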

"This object has been a source of considerable interest for a while now, but to crown it as a PeVatron, we have to prove it's accelerating protons," explained co-author Henrike Fleischhack at the Catholic University of America in Washington and NASA's Goddard Space Flight Center in Greenbelt, Maryland. "The catch is that electrons accelerated to a few hundred TeV can produce the same emission. Now, with the help of 12 years of Fermi data, we think we've made the case that G106.3+2.7 is indeed a PeVatron."

A paper detailing the findings, led by Fang, was published Aug. 10 in the journal Physical Review Letters.

The pulsar, J2229+6114, emits its own gamma rays in a lighthouse-like beacon as it spins, and this glow dominates the region to energies of a few GeV. Most of this emission occurs in the first half of the pulsar's rotation. The team effectively turned off the pulsar by analyzing only gamma rays arriving from the latter part of the cycle. Below 10 GeV, there is no significant emission from the remnant's tail.

Above this energy, the pulsar's interference is negligible and the additional source becomes readily apparent. The team's detailed analysis overwhelmingly favors PeV protons as the particles driving this gamma-ray emission.
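
A minimal sketch of that phase-gating step in Python, on a synthetic event list (the spin period and the assumption that the quiet "off-pulse" window is the latter half of the rotation are illustrative, not the team's actual ephemeris or pipeline):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic photon list: arrival times (s) and energies (GeV).
    t = rng.uniform(0.0, 1000.0, size=100_000)
    energy = rng.pareto(1.5, size=100_000) + 0.1   # toy power-law spectrum

    period = 0.0516                      # s; spin period (illustrative value)
    phase = (t / period) % 1.0           # rotational phase in [0, 1)

    # "Turn off" the pulsar: keep only photons arriving in the quiet half
    # of the rotation, where the beamed emission is faint.
    off_pulse = phase >= 0.5
    e_off = energy[off_pulse]

    print(f"kept {off_pulse.mean():.0%} of events; "
          f"{(e_off > 10.0).sum()} off-pulse photons above 10 GeV")

A real analysis would fold barycenter-corrected arrival times with a measured timing ephemeris rather than a fixed period, but the selection step is the same.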

Read more at Science Daily

Breakthrough in search for tinnitus cure

After 20 years searching for a cure for tinnitus, researchers at the University of Auckland are excited by 'encouraging results' from a clinical trial of a mobile-phone-based therapy. The study randomised 61 patients to one of two treatments, the prototype of the new 'digital polytherapeutic' or a popular self-help app producing white noise.

On average, the group with the polytherapeutic (31 people) showed clinically significant improvements at 12 weeks, while the other group (30 people) did not. The results have just been published in Frontiers in Neurology. "This is more significant than some of our earlier work and is likely to have a direct impact on future treatment of tinnitus," Associate Professor in Audiology Grant Searchfield says.

Key to the new treatment is an initial assessment by an audiologist who develops the personalised treatment plan, combining a range of digital tools, based on the individual's experience of tinnitus. "Earlier trials have found white noise, goal-based counselling, goal-oriented games and other technology-based therapies are effective for some people some of the time," says Dr Searchfield. "This is quicker and more effective, taking 12 weeks rather than 12 months for more individuals to gain some control."

There is no pill that can cure tinnitus. "What this therapy does is essentially rewire the brain in a way that de-emphasises the sound of the tinnitus to a background noise that has no meaning or relevance to the listener," Dr Searchfield says.

Audiology research fellow Dr Phil Sanders says the results are exciting and he found running the trial personally rewarding.

"Sixty-five percent of participants reported an improvement. For some people, it was life-changing -- where tinnitus was taking over their lives and attention."

Some people didn't notice an improvement and their feedback will inform further personalisation, Dr Sanders says.

Tinnitus is a phantom noise and its causes are complex. It has so far defied successful treatment.

While most people experience tinnitus, or ringing in the ears, at least occasionally, around five percent experience it to a distressing degree. Impacts can include trouble sleeping, difficulty carrying out daily tasks and depression.

Dr Searchfield says seeing his patients' distress and having no effective treatment to offer inspired his research. "I wanted to make a difference."

Read more at Science Daily

Not all in the genes: Are we inheriting more than we think?

A fundamental discovery about a driver of healthy development in embryos could rewrite our understanding of what can be inherited from our parents and how their life experiences may shape us.

The new research suggests that epigenetic information, which sits on top of DNA and is normally reset between generations, is more frequently carried from mother to offspring than previously thought.

The study, led by researchers from WEHI (Melbourne, Australia), significantly broadens our understanding of which genes have epigenetic information passed from mother to child and which proteins are important for controlling this unusual process.

Epigenetics is a rapidly growing field of science that investigates how our genes are switched on and off to allow one set of genetic instructions to create hundreds of different cell types in our body.

Epigenetic changes can be influenced by environmental variations such as our diet, but these changes do not alter DNA and are normally not passed from parent to offspring.

While a tiny group of 'imprinted' genes can carry epigenetic information across generations, until now, very few other genes have been shown to be influenced by the mother's epigenetic state.

The new research reveals that the supply of a specific protein in the mother's egg can affect the genes that drive skeletal patterning of offspring.

Chief investigator Professor Marnie Blewitt said the findings initially left the team surprised.

"It took us a while to process because our discovery was unexpected," Professor Blewitt, Joint Head of the Epigenetics and Development Division at WEHI, said.

"Knowing that epigenetic information from the mother can have effects with life-long consequences for body patterning is exciting, as it suggests this is happening far more than we ever thought.

"It could open a Pandora's box as to what other epigenetic information is being inherited."

The study, led by WEHI in collaboration with Associate Professor Edwina McGlinn from Monash University and The Australian Regenerative Medicine Institute, is published in Nature Communications.

The new research focused on the protein SMCHD1, an epigenetic regulator discovered by Professor Blewitt in 2008, and Hox genes, which are critical for normal skeletal development.

Hox genes control the identity of each vertebra during embryonic development in mammals, while the epigenetic regulator prevents these genes from being activated too soon.

In this study, the researchers discovered that the amount of SMCHD1 in the mother's egg affects the activity of Hox genes and influences the patterning of the embryo. Without maternal SMCHD1 in the egg, offspring were born with altered skeletal structures.

First author and PhD researcher Natalia Benetti said this was clear evidence that epigenetic information had been inherited from the mother, rather than just blueprint genetic information.

"While we have more than 20,000 genes in our genome, only that rare subset of about 150 imprinted genes and very few others have been shown to carry epigenetic information from one generation to another," Benetti said.

"Knowing this is also happening to a set of essential genes that have been evolutionarily conserved from flies through to humans is fascinating."

The research showed that SMCHD1 in the egg, which only persists for two days after conception, has a life-long impact.

Variants in SMCHD1 are linked to the developmental disorder Bosma arhinia microphthalmia syndrome (BAMS) and to facioscapulohumeral muscular dystrophy (FSHD), a form of muscular dystrophy. The researchers say their findings could have implications for women with SMCHD1 variants and their children in the future.

A drug discovery effort at WEHI is currently leveraging the SMCHD1 knowledge established by the team to design novel therapies to treat developmental disorders such as Prader-Willi syndrome and the degenerative disorder FSHD.

Read more at Science Daily

Aug 11, 2022

One more clue to the Moon's origin

Humankind has maintained an enduring fascination with the Moon. It was not until Galileo's time, however, that scientists really began to study it. Over the course of nearly five centuries, researchers put forward numerous, much-debated theories as to how the Moon was formed. Now, geochemists, cosmochemists, and petrologists at ETH Zurich shed new light on the Moon's origin story. In a study just published in the journal Science Advances, the research team reports findings that show that the Moon inherited the indigenous noble gases helium and neon from Earth's mantle. The discovery adds to the already strong constraints on the currently favoured "Giant Impact" theory, which hypothesizes that the Moon was formed by a massive collision between Earth and another celestial body.

Meteorites from the Moon to Antarctica

During her doctoral research at ETH Zurich, Patrizia Will analysed six samples of lunar meteorites from an Antarctic collection, obtained from NASA. The meteorites consist of basalt rock that formed when magma welled up from the interior of the Moon and cooled quickly. They remained covered by additional basalt layers after their formation, which protected the rock from cosmic rays and, particularly, the solar wind. The cooling process resulted in the formation of lunar glass particles amongst the other minerals found in magma. Will and the team discovered that the glass particles retain the chemical fingerprints (isotopic signatures) of the solar gases: helium and neon from the Moon's interior. Their findings strongly support that the Moon inherited noble gases indigenous to the Earth. "Finding solar gases, for the first time, in basaltic materials from the Moon that are unrelated to any exposure on the lunar surface was such an exciting result," says Will.

Without the protection of an atmosphere, asteroids continually pelt the Moon's surface. It likely took a high-energy impact to eject the meteorites from the middle layers of lava flows similar to the vast plains known as the lunar maria. Eventually the rock fragments made their way to Earth in the form of meteorites. Many of these meteorite samples are picked up in the deserts of North Africa or, as in this case, in the "cold desert" of Antarctica, where they are easier to spot in the landscape.

Grateful Dead lyrics inspire lab instrument

In the Noble Gas Laboratory at ETH Zurich resides a state-of-the-art noble gas mass spectrometer named "Tom Dooley," after the Grateful Dead tune of the same name. The instrument got its name when earlier researchers suspended the highly sensitive equipment from the ceiling of the lab to avoid interference from the vibrations of everyday life. Using the Tom Dooley instrument, the research team was able to measure sub-millimetre glass particles from the meteorites and rule out solar wind as the source of the detected gases. The helium and neon that they detected were in a much higher abundance than expected.

The Tom Dooley is so sensitive that it is, in fact, the only instrument in the world capable of detecting such minimal concentrations of helium and neon. It was previously used to detect these noble gases in the 7-billion-year-old grains of the Murchison meteorite -- the oldest known solid matter to date.

Searching for the origins of life

Knowing where to look inside NASA's vast collection of some 70,000 approved meteorites represents a major step forward. "I am strongly convinced that there will be a race to study heavy noble gases and isotopes in meteoritic materials," says ETH Zurich Professor Henner Busemann, one of the world's leading scientists in the field of extra-terrestrial noble gas geochemistry. He anticipates that soon researchers will be looking for noble gases such as xenon and krypton which are more challenging to identify. They will also be searching for other volatile elements such as hydrogen or halogens in the lunar meteorites.

Read more at Science Daily

First stars and black holes

Just milliseconds after the universe's Big Bang, chaos reigned. Atomic nuclei fused and broke apart in hot, frenzied motion. Incredibly strong pressure waves built up and squeezed matter so tightly together that black holes formed, which astrophysicists call primordial black holes.

Did primordial black holes help or hinder formation of the universe's first stars, eventually born about 100 million years later?

Supercomputer simulations run on the Stampede2 supercomputer of the Texas Advanced Computing Center (TACC), part of The University of Texas at Austin, helped investigate this cosmic question.

"We found that the standard picture of first-star formation is not really changed by primordial black holes," said Boyuan Liu, a post-doctoral researcher at the University of Cambridge. Liu is the lead author of computational astrophysics research published August 2022 in the Monthly Notices of the Royal Astronomical Society.

In the early universe, the standard model of astrophysics holds that black holes seeded the formation of halo-like structures by virtue of their gravitational pull, analogous to how clouds form by being seeded by dust particles. This is a plus for star formation, where these structures served as scaffolding that helped matter coalesce into the first stars and galaxies.

However, a black hole also causes heating: gas or debris falling into it forms a hot accretion disk around the black hole, which emits energetic photons that ionize and heat the surrounding gas.

And that's a minus for star formation, as gas needs to cool down to be able to condense to high enough density that a nuclear reaction is triggered, setting the star ablaze.

"We found that these two effects -- black hole heating and seeding -- almost cancel each other out and the final impact is small for star formation," Liu said.

Depending on which effect wins over the other, star formation can be accelerated, delayed or prevented by primordial black holes. "This is why primordial black holes can be important," he added.

Liu emphasized that it is only with state-of-the-art cosmological simulations that one can understand the interplay between the two effects.

Regarding the importance of primordial black holes, the research also implied that they interact with the first stars and produce gravitational waves. "They may also be able to trigger the formation of supermassive black holes. These aspects will be investigated in follow-up studies," Liu added.

For the study, Liu and colleagues used cosmological hydrodynamic zoom-in simulations with state-of-the-art numerical schemes for gravity, hydrodynamics, chemistry and cooling in structure formation and early star formation.

"A key effect of primordial black holes is that they are seeds of structures," Liu said. His team built the model that implemented this process, as well as incorporating the heating from primordial black holes.

They then added a sub-grid model for black hole accretion and feedback. The model calculates at each timestep how a black hole accretes gas and also how it heats its surroundings.

"This is based on the environment around the black hole known in the simulations on the fly," Liu said.

XSEDE awarded the science team allocations on the Stampede2 system of TACC.

"Supercomputing resources in computational astrophysics are absolutely vital," said study co-author Volker Bromm, professor and chair, Department of Astronomy, UT Austin.

Bromm explained that in theoretical astrophysics, the ruling paradigm for understanding the formation and evolution of cosmic structure is to use ab initio simulations, which follow the 'playbook' of the universe itself -- the governing equations of physics.

The simulations start from the universe's initial conditions, which are known to high precision from observations of the cosmic microwave background. Simulation boxes are then set up that follow the cosmic evolution timestep by timestep.

But the challenges in computational simulation of structure formation lie in the way large scales of the universe -- millions to billions of light years and billions of years -- mesh with the atomic scales where stellar chemistry happens.

"The microcosm and the macrocosm interact," Bromm said.

"TACC and XSEDE resources have been absolutely vital for us to push the frontier of computation astrophysics. Everyone who is at UT Austin -- faculty members, postdocs, students -- benefits from the fact that we have such a premier supercomputing center. I'm extremely grateful," Bromm added.

"If we look into one typical structure that can form the first stars, we need around one million elements to fully resolve this halo or structure," Liu said. "This is why we need to use supercomputers at TACC."

Liu said that using Stampede2, a simulation running on 100 cores can complete in just a few hours versus years on a laptop, not to mention the bottlenecks with memory and reading or writing data.

"The overall game plan with our work is that we want to understand how the universe was transformed from the simple initial conditions of the Big Bang," explained Bromm.

The structures that emerged from the Big Bang were shaped by the gravitational dynamics of dark matter.

The nature of dark matter remains one of the biggest mysteries in science.

The clues to this hypothetical, as-yet-unobserved substance are undeniable, seen in the otherwise impossible rotational speeds of galaxies: the mass of all the stars and planets in a galaxy like our Milky Way does not supply enough gravity to keep it from flying apart. The 'x-factor' is called dark matter, yet laboratories have not yet directly detected it.

However, gravitational waves have been detected, first by LIGO in 2015.

"It is possible that primordial black holes can explain these gravitational wave events that we have been detecting over the past seven years," Liu said. "This just motivates us."

Said Bromm: "Supercomputers are enabling unprecedented new insights into how the universe works. The universe provides us with extreme environments that are extremely challenging to understand. This also gives motivation to build ever-more-powerful computation architectures and devise better algorithmic structures. There's great beauty and power to the benefit of everyone."

Read more at Science Daily

New giant deep-sea isopod discovered in the Gulf of Mexico

Researchers have identified a new species of Bathynomus, the famed genus of deep-sea isopods whose viral internet fame has made them the most famous aquatic crustaceans since Sebastian of The Little Mermaid.

There are around 20 species of living Bathynomus, a mysterious and primitive group that inhabits the benthic zone of the ocean -- its deepest reaches, rarely explored in person. Isopod crustaceans are only distantly related to their better-known decapod relatives: the crabs, shrimp, and lobsters.

Publishing their findings in the peer-reviewed Journal of Natural History, a group of Taiwanese, Japanese, and Australian researchers reveal the latest addition to this list -- B. yucatanensis, a new species around 26 cm long -- some 2,500% larger than the common woodlouse.

Deep-sea isopods belong to the same group that contains the terrestrial isopods known variously as woodlice, pillbugs, and roly polys, which feed on decaying matter and are likely familiar to anyone who has lifted up a rock or dug around in the garden. Indeed, they look quite similar but for their extraordinary size -- the largest of them grow to nearly 50 centimeters. And, just like woodlice, although they perhaps look a little scary, they are completely harmless to humans.
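
The "2,500% larger" figure works out if one takes a common woodlouse to be about 1 cm long (a typical value, assumed here):

    $$ \frac{26\ \mathrm{cm} - 1\ \mathrm{cm}}{1\ \mathrm{cm}} \times 100\% = 2{,}500\% $$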

Their strange features and unusual dimensions have spawned endless memes and a range of products celebrating their endearing weirdness, from plush toys to phone cases.

This finding of B. yucatanensis adds another species to the isopod pantheon and brings the total of known species of Bathynomus in the Gulf of Mexico to three -- B. giganteus was described in 1879 and B. maxeyorum in 2016.

It was initially thought to be a variation of B. giganteus, one of the largest of the deep-sea isopods. But closer examination of the specimen, which was captured in a baited trap in 2017 in the Gulf of Mexico off the Yucatán Peninsula at around 600 to 800 meters down, revealed an array of unique features.

"B. yucatanensis is morphologically distinct from both B. giganteus and B. maxeyorum," the authors claim.

Held by the Enoshima Aquarium in Japan, the individual studied was subtly different from its relatives. "Compared to B. giganteus, B. yucatanensis has more slender body proportions and is shorter in total length … and the pereopods [thoracic limbs] are more slender," the researchers observe. It also has longer antennae. The two species have the same number of pleotelson spines, which protrude from the tail end of the crustacean.

"Bathynomus giganteus was discovered over a century ago, and more than 1,000 specimens have been studied with no suggestion until now of a second species with the same number of pleotelsonic spines," they add. "Superficial examination, using only pleotelson spines, could easily result in specimens of B. yucatanensis being misidentified as B. giganteus."

"Compared with B. maxeyorum, the most distinctive feature is the number of pleotelson spines -- 11 spines in B. yucatanensis versus 7 in B. maxeyorum." The blotchy, creamy yellow coloration of the shell further distinguished it from its greyer relatives.

In order to be sure, the scientists conducted a molecular genetic analysis comparing B. giganteus and B. yucatanensis. "Due to the different sequences of the two genes (COI and 16S rRNA), coupled with differences in morphology, we identified it as a new species," they write. The phylogenetic tree they constructed showed B. yucatanensis as most closely related to B. giganteus.

"B. giganteus is indeed the species closest to B. yucatanensis," the authors assert. "This indicates that the two species likely had a common ancestor. Additionally, there may also be other undiscovered Bathynomus spp. in the tropical western Atlantic.

The paper also clarifies that specimens from the South China Sea identified as B. kensleyi are actually B. jamesi. B. kensleyi is restricted to the Coral Sea, off the coast of Australia.

"It is increasingly evident that species of Bathynomus may be exceedingly similar in overall appearance, and also that there is a long history of misidentification of species in the genus," the authors caution.

Read more at Science Daily

Prehistoric podiatry: How dinos carried their enormous weight

Scientists have cracked an enduring mystery, discovering how sauropod dinosaurs -- like Brontosaurus and Diplodocus -- supported their gigantic bodies on land.

A University of Queensland and Monash University-led team used 3D modelling and engineering methods to digitally reconstruct and test the function of foot bones of different sauropods.

Dr Andréas Jannel conducted the research during his PhD studies at UQ's Dinosaur Lab and said the team found that the hind feet of sauropods had a soft tissue pad beneath the 'heel', cushioning the foot to absorb their immense weight.

"We've finally confirmed a long-suspected idea and we provide, for the first time, biomechanical evidence that a soft tissue pad -- particularly in their back feet -- would have played a crucial role in reducing locomotor pressures and bone stresses," Dr Jannel said.

"It is mind-blowing to imagine that these giant creatures could have been able to support their own weight on land."

Sauropods were the largest terrestrial animals, and they roamed the Earth for more than 100 million years.

They were first thought to have been semi-aquatic with water buoyancy supporting their massive weight, a theory disproved by the discovery of sauropod tracks in terrestrial deposits in the mid-twentieth century.

Monash University's Dr Olga Panagiotopoulou said it had also been thought sauropods had feet similar to a modern-day elephant.

"Popular culture -- think Jurassic Park or Walking with Dinosaurs -- often depicts these behemoths with almost-cylindrical, thick, elephant-like feet," Dr Panagiotopoulou said.

"But when it comes to their skeletal structure, elephants are actually 'tip-toed' on all four feet, whereas sauropods have different foot configurations in their front and back feet.

"Sauropod's front feet are more columnar-like, while they present more 'wedge high heels' at the back supported by a large soft tissue pad."

UQ's Associate Professor Steve Salisbury said this was because sauropods and elephants had different evolutionary origins.

"Elephants belong to an ancient order of mammals called proboscideans, which first appeared in Africa roughly 60 million years ago as small, nondescript herbivores, " Associate Professor Salisbury said.

"In contrast, sauropods -- whose ancestors first appeared 230 million years ago -- are more closely related to birds.

"They were agile, two-legged herbivores and it was only later in their evolution that they walked on all fours.

"Crucially, the transition to becoming the largest land animals to walk the earth seems to have involved the adaptation of a heel pad."

The researchers now plan to use the 3D modelling and engineering methods to make further discoveries.

"I'm keen to apply a similar method to an entire limb and to include additional soft tissue such as muscles, which are rarely preserved in fossils," Dr Jannel said.

"We're also excited to study the limbs and feet of other prehistoric animals.

Read more at Science Daily

Aug 10, 2022

Evidence that giant meteorite impacts created the continents

New Curtin research has provided the strongest evidence yet that Earth's continents were formed by giant meteorite impacts that were particularly prevalent during the first billion years or so of our planet's four-and-a-half-billion-year history.

Dr Tim Johnson, from Curtin's School of Earth and Planetary Sciences, said the idea that the continents originally formed at sites of giant meteorite impacts had been around for decades, but until now there was little solid evidence to support the theory.

"By examining tiny crystals of the mineral zircon in rocks from the Pilbara Craton in Western Australia, which represents Earth's best-preserved remnant of ancient crust, we found evidence of these giant meteorite impacts," Dr Johnson said.

"Studying the composition of oxygen isotopes in these zircon crystals revealed a 'top-down' process starting with the melting of rocks near the surface and progressing deeper, consistent with the geological effect of giant meteorite impacts.

"Our research provides the first solid evidence that the processes that ultimately formed the continents began with giant meteorite impacts, similar to those responsible for the extinction of the dinosaurs, but which occurred billions of years earlier."

Dr Johnson said understanding the formation and ongoing evolution of the Earth's continents was crucial given that these landmasses host the majority of Earth's biomass, all humans and almost all of the planet's important mineral deposits.

"Not least, the continents host critical metals such as lithium, tin and nickel, commodities that are essential to the emerging green technologies needed to fulfil our obligation to mitigate climate change," Dr Johnson said.

"These mineral deposits are the end result of a process known as crustal differentiation, which began with the formation of the earliest landmasses, of which the Pilbara Craton is just one of many.

"Data related to other areas of ancient continental crust on Earth appears to show patterns similar to those recognised in Western Australia. We would like to test our findings on these ancient rocks to see if, as we suspect, our model is more widely applicable."

Read more at Science Daily

Stars determine their own masses

Last year, a team of astrophysicists including key members from Northwestern University launched STARFORGE, a project that produces the most realistic, highest-resolution 3D simulations of star formation to date. Now, the scientists have used the highly detailed simulations to uncover what determines the masses of stars, a mystery that has captivated astrophysicists for decades.

In a new study, the team discovered that star formation is a self-regulatory process. In other words, stars themselves set their own masses. This helps explain why stars formed in disparate environments still have similar masses. The new finding may enable researchers to better understand star formation within our own Milky Way and other galaxies.

The study was published last week in the Monthly Notices of the Royal Astronomical Society. The collaborative team included experts from Northwestern, University of Texas at Austin (UT Austin), Carnegie Observatories, Harvard University and the California Institute of Technology. The lead author of the new study is Dávid Guszejnov, a postdoctoral fellow at UT Austin.

"Understanding the stellar initial mass function is such an important problem because it impacts astrophysics across the board -- from nearby planets to distant galaxies," said Northwestern's Claude-André Faucher-Giguère, a study co-author. "This is because stars have relatively simple DNA. If you know the mass of a star, then you know most things about the star: how much light it emits, how long it will live and what will happen to it when it dies. The distribution of stellar masses is thus critical for whether planets that orbit stars can potentially sustain life, as well as what distant galaxies look like."

Faucher-Giguère is an associate professor of physics and astronomy in Northwestern's Weinberg College of Arts and Sciences and a member of the Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA).

Outer space is filled with giant clouds, consisting of cold gas and dust. Slowly, gravity pulls far-flung specks of this gas and dust toward each other to form dense clumps. Materials in these clumps fall inward, crashing and sparking heat to create a newborn star.

Surrounding each of these "protostars" is a rotating disk of gas and dust. Every planet in our solar system was once a speck in such a disk around our newborn sun. Whether or not planets orbiting a star could host life depends on the mass of the star and how it formed. Therefore, understanding star formation is crucial to determining where life can form in the universe.

"Stars are the atoms of the galaxy," said Stella Offner, associate professor of astronomy at UT Austin. "Their mass distribution dictates whether planets will be born and if life might develop."

Every subfield in astronomy depends on the mass distribution of stars -- or initial mass function (IMF) -- which has proved challenging for scientists to model correctly. Stars much bigger than our sun are rare, making up only 1% of newborn stars. And, for every one of these stars there are up to 10 sun-like stars and 30 dwarf stars. Observations found that no matter where we look in the Milky Way these ratios (i.e., the IMF) are the same, for both newly formed star clusters and for those that are billions of years old.

This is the mystery of the IMF. Every population of stars in our galaxy, and in all the dwarf galaxies that surround us, has this same balance -- even though their stars were born under wildly different conditions over billions of years. In theory, the IMF should vary dramatically, but it is virtually universal, which has puzzled astronomers for decades.
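
For reference, the near-universal shape being described is often approximated, above roughly half a solar mass, by the classic Salpeter (1955) power law (a standard result, not stated in the article):

    $$ \xi(m)\,\Delta m \;\propto\; m^{-2.35}\,\Delta m $$

That is, the number of newborn stars falls steeply with mass, which is why stars much bigger than the Sun make up only about 1% of the total.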

"For a long time, we have been asking why," Guszejnov said. "Our simulations followed stars from birth to the natural endpoint of their formation to solve this mystery."

The new simulations, however, showed that stellar feedback, in an effort to oppose gravity, pushes stellar masses toward the same mass distribution. These simulations are the first to follow the formation of individual stars in a collapsing giant cloud, while also capturing how these newly formed stars interact with their surroundings by giving off light and shedding mass via jets and winds -- a phenomenon referred to as "stellar feedback."

The STARFORGE project is a multi-institutional initiative, co-led by Guszejnov and Michael Grudić of Carnegie Observatories. Grudić was a CIERA postdoctoral fellow at Northwestern when the project was initiated. STARFORGE simulations are the first to simultaneously model star formation, evolution and dynamics while accounting for stellar feedback, including jets, radiation, wind and nearby supernovae activity. While other simulations have incorporated individual types of stellar feedback, STARFORGE puts them all together to simulate how these various processes interact to affect star formation.

Read more at Science Daily

Hibernation slows biological aging in bats

The most common bat in the United States, the big brown bat, boasts an unusually long lifespan of up to 19 years. A new study led by University of Maryland researchers identifies one of the secrets to this bat's exceptional longevity: hibernation.

"Hibernation has allowed bats, and presumably other animals, to stay in northerly or very southerly regions where there's no food in the winter," said the study's senior author, UMD Biology Professor Gerald Wilkinson. "Hibernators tend to live much longer than migrators. We knew that, but we didn't know if we would detect changes in epigenetic age due to hibernation."

The researchers determined that hibernating over one winter extends a big brown bat's epigenetic clock -- a biological marker of aging -- by three-quarters of a year. The study, published in the journal Proceedings of the Royal Society B on August 10, 2022, also included scientists from McMaster University and the University of Waterloo, both in Ontario, Canada.

They analyzed small tissue samples taken from the wings of 20 big brown bats (Eptesicus fuscus) during two periods: in the winter when they hibernated and in the summer when they were active. The bats, kept in a research colony at McMaster University, ranged in age from less than 1 year old to a little over 10 years old.

Once the samples were collected, the researchers measured changes in DNA methylation -- a biological process associated with gene regulation -- between samples taken from the same animal during active and hibernating periods. They discovered that changes in DNA methylation occurred at certain sites in the bat's genome, and these sites appeared to be affecting metabolism during hibernation.

"It's pretty clear that the sites that decrease methylation in the winter are the ones that appear to be having an active effect," Wilkinson said. "Many of the genes that are nearest to them are known to be involved in regulating metabolism, so they presumably keep metabolism down."

Some of these genes are the same ones that Wilkinson and fellow researchers identified as "longevity genes" in a previous study. Wilkinson said that there is significant overlap between the hibernation genes and the longevity genes, further highlighting the link between hibernation and longer lifespans.

The earlier study also established the first epigenetic clock for bats, capable of accurately predicting the age of any bat in the wild. That clock was applied to this latest study, enabling the researchers to demonstrate that hibernation reduces a bat's epigenetic age in comparison to a non-hibernating animal of the same age.
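
Epigenetic clocks of this kind are typically built as penalized regressions mapping methylation levels at many CpG sites to chronological age. The sketch below is a generic, illustrative version in Python on synthetic data, not the bat clock from the study:

    import numpy as np
    from sklearn.linear_model import ElasticNetCV

    rng = np.random.default_rng(1)

    # Synthetic training set: methylation fraction at 2,000 CpG sites for
    # 120 bats of known age (all values invented for illustration).
    n_bats, n_sites = 120, 2000
    meth = rng.uniform(0.0, 1.0, size=(n_bats, n_sites))
    weights = np.zeros(n_sites)
    weights[:30] = rng.normal(0.0, 2.0, size=30)   # 30 age-informative sites
    age = 5.0 + meth @ weights + rng.normal(0.0, 0.5, size=n_bats)

    # Elastic-net regression selects the informative sites, yielding a
    # "clock" that predicts age from a new methylation profile.
    clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(meth, age)
    print("predicted ages (years):", clock.predict(meth[:3]).round(2))

Comparing a bat's clock-predicted (epigenetic) age between hibernating and active samples is then what quantifies the roughly three-quarters-of-a-year slowdown per winter.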

Studies like this help explain why bats have longer lifespans than expected for a small mammal about the size of a mouse. However, they also raise new questions.

"We still don't have a very good understanding of why some bats can live a really long time and other ones don't," Wilkinson said. "We've shown that the ones that live a really long time all share the ability to hibernate, or to go into torpor frequently. That seems to be a corollary, but it's not sufficient because hibernating rodents don't live 20 years."

Read more at Science Daily

Potential long-term treatment for asthma found

A possible way to tackle one of the underlying causes of asthma has been developed by researchers from Aston University and Imperial College London.

In tests in mice, the researchers were able to virtually eliminate asthmatic symptoms within two weeks and return their airways to near normal.

Just under 5.5 million people in the UK receive treatment for asthma and around 1,200 people die of the disease each year.

Asthma causes the airways to become thickened and constricted, resulting in symptoms such as wheezing and shortness of breath.

Current treatments, including steroids, provide short-term relief from these symptoms by either relaxing the airways or reducing inflammation. However, no current drug addresses the structural changes asthma makes to the airway and lungs, which would be needed for a longer-lasting treatment.

Lead researcher, Dr Jill Johnson, from Aston University's School of Biosciences, said: "By targeting the changes in the airway directly, we hope this approach could eventually offer a more permanent and effective treatment than those already available, particularly for severe asthmatics who don't respond to steroids. However, our work is still at an early stage and further research is needed before we can begin to test this in people."

The research focused on a type of stem cell known as a pericyte, which is mainly found in the lining of blood vessels. When asthmatics have an allergic and inflammatory reaction, for example to house dust mites, this causes the pericytes to move to the airway walls. Once there, the pericytes develop into muscle cells and other cells that make the airway thicker and less flexible.

This movement of the pericytes is triggered by a protein known as CXCL12. The researchers used a molecule called LIT-927 to block the signal from this protein, by introducing it into the mice's nasal passages. Asthmatic mice that were treated with LIT-927 had a reduction in symptoms within one week and their symptoms virtually disappeared within two weeks. The researchers also found that the airway walls in mice treated with LIT-927 were much thinner than those in untreated mice, closer to those of healthy controls.

The team are now applying for further funding to carry out more research into dosage and timing. This would help them to determine when might be the most effective time to administer the treatment during the progress of the disease and how much LIT-927 is needed, and to better understand its impact on lung function. They believe that, should this research be successful, it will still be several years before the treatment could be tested in people.

Read more at Science Daily

Aug 9, 2022

Robotic motion in curved space defies standard laws of physics

When humans, animals, and machines move throughout the world, they always push against something, whether it's the ground, air, or water. Until recently, physicists believed this to be a universal rule, following the law of conservation of momentum. Now, researchers from the Georgia Institute of Technology have proven the opposite -- when bodies exist in curved spaces, it turns out that they can in fact move without pushing against anything.

The findings were published in Proceedings of the National Academy of Sciences on July 28, 2022. In the paper, a team of researchers led by Zeb Rocklin, assistant professor in the School of Physics at Georgia Tech, created a robot confined to a spherical surface with unprecedented levels of isolation from its environment, so that these curvature-induced effects would predominate.

"We let our shape-changing object move on the simplest curved space, a sphere, to systematically study the motion in curved space," said Rocklin. "We learned that the predicted effect, which was so counter-intuitive it was dismissed by some physicists, indeed occurred: as the robot changed its shape, it inched forward around the sphere in a way that could not be attributed to environmental interactions."

Creating a Curved Path

The researchers set out to study how an object moved within a curved space. To confine the object on the sphere with minimal interaction or exchange of momentum with the environment in the curved space, they let a set of motors drive on curved tracks as moving masses. They then connected this system holistically to a rotating shaft so that the motors always move on a sphere. The shaft was supported by air bearings and bushings to minimize the friction, and the alignment of the shaft was adjusted with the Earth's gravity to minimize the residual force of gravity.

From there, as the robot continued to move, gravity and friction exerted slight forces on it. These forces hybridized with the curvature effects to produce a strange dynamic with properties neither could induce on their own. The research provides an important demonstration of how curved spaces can be attained and how it fundamentally challenges physical laws and intuition designed for flat space. Rocklin hopes the experimental techniques developed will allow other researchers to explore these curved spaces.

Applications in Space and Beyond

While the effects are small, as robotics becomes increasingly precise, understanding this curvature-induced effect may be of practical importance, just as the slight clock-frequency shift induced by gravity proved crucial for GPS satellites to convey accurate positions. Ultimately, the principles of how a space's curvature can be harnessed for locomotion may allow spacecraft to navigate the highly curved space around a black hole.
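
The GPS analogy is quantitative: the gravitational blueshift of a satellite clock relative to one on the ground follows from the weak-field formula (standard textbook numbers, not from the article):

    $$ \frac{\Delta f}{f} \;\approx\; \frac{G M_\oplus}{c^2}\left(\frac{1}{R_\oplus} - \frac{1}{r_\mathrm{sat}}\right) \;\approx\; 5 \times 10^{-10} $$

Net of the special-relativistic slowdown from orbital motion, GPS clocks run fast by about 38 microseconds per day; left uncorrected, that drift would translate into kilometers of ranging error each day.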

Read more at Science Daily

More wolves, beavers needed as part of improving western United States habitats

Oregon State University scientists are proposing management changes on western federal lands that they say would result in more wolves and beavers and would re-establish ecological processes.

In a paper published today in BioScience, "Rewilding the American West," co-lead author William Ripple and 19 other authors suggest using portions of federal lands in 11 states to establish a network based on potential habitat for the gray wolf -- an apex predator able to trigger powerful, widespread ecological effects.

In those states the authors identified areas, each at least 5,000 square kilometers, of contiguous, federally managed lands containing prime wolf habitat. The states in the proposed Western Rewilding Network, which would cover nearly 500,000 square kilometers, are Oregon, Washington, California, Nevada, Idaho, Montana, Wyoming, Colorado, Arizona, New Mexico and Utah.

"It's an ambitious idea, but the American West is going through an unprecedented period of converging crises including extended drought and water scarcity, extreme heat waves, massive fires and loss of biodiversity," said Ripple, distinguished professor of ecology in the OSU College of Forestry.

Gray wolves were hunted to near extinction in the West but were reintroduced to parts of the northern Rocky Mountains and the Southwest starting in the 1990s through measures made possible by the Endangered Species Act.

"Still, the gray wolf's current range in those 11 states is only about 14% of its historical range," said co-lead author Christopher Wolf, a postdoctoral scholar in the College of Forestry. "They probably once numbered in the tens of thousands, but today there might only be 3,500 wolves across the entire West."

Beaver populations, once robust across the West, declined roughly 90% after settler colonialism and are now nonexistent in many streams, meaning ecosystem services are going unprovided, the authors say.

By felling trees and shrubs and constructing dams, beavers enrich fish habitat, increase water and sediment retention, maintain water flows during drought, improve water quality, increase carbon sequestration and generally improve habitat for riparian plant and animal species.

"Beaver restoration is a cost-effective way to repair degraded riparian areas," said co-author Robert Beschta, professor emeritus in the OSU College of Forestry. "Riparian areas occupy less than 2% of the land in the West but provide habitat for up to 70% of wildlife species."

Similarly, wolf restoration offers significant ecological benefits by helping to naturally control native ungulates such as elk, according to the authors. They say wolves facilitate regrowth of vegetation species such as aspen, which supports diverse plant and animal communities and is declining in the West.

The paper includes a catalogue of 92 threatened and endangered plant and animal species that have at least 10% of their ranges within the proposed Western Rewilding Network; for each species, threats from human activity were analyzed.

The authors determined the most common threat was livestock grazing, which they say can cause stream and wetland degradation, affect fire regimes and make it harder for woody species, especially willow, to regenerate.

Nationally, about 2% of meat production results from federal grazing permits, the paper notes.

"We suggest the removal of grazing on federal allotments from approximately 285,000 square kilometers within the rewilding network, representing 29% of the total 985,000 square kilometers of federal lands in the 11 western states that are annually grazed," Beschta said. "That means we need an economically and socially just federal compensation program for those who give up their grazing permits. Rewilding will be most effective when participation concerns for all stakeholders are considered, including Indigenous people and their governments."

In addition to Beschta, Wolf and Ripple, authors from Oregon State include J. Boone Kauffman, Beverly Law and Michael Paul Nelson. Daniel Ashe, former director of the U.S. Fish and Wildlife Service and now the president of the Association of Zoos and Aquariums, is also a co-author.

Read more at Science Daily

Impact of climate change on human pathogenic diseases subject of new study by UH researchers

A comprehensive assessment of scientific literature has uncovered empirical evidence that more than 58% of human diseases caused by pathogens, such as dengue, hepatitis, pneumonia, malaria, Zika and more, have been -- at some point -- aggravated by climatic hazards. That eye-opening finding is the topic of a research paper published on August 8 in Nature Climate Change by a team of researchers from the University of Hawaii at Manoa.

The researchers carried out a systematic search for empirical examples of the impacts of 10 climatic hazards sensitive to greenhouse gas (GHG) emissions on each known human pathogenic disease. These hazards included warming, drought, heatwaves, wildfires, extreme precipitation, floods, storms, sea level rise, ocean biogeochemical change, and land cover change.

Combining two authoritative lists of all known infections and pathogenic diseases that have affected humanity in recorded history, researchers then reviewed more than 70,000 scientific papers for empirical examples about each possible combination of a climatic hazard impacting each of the known diseases.

The research revealed that warming, precipitation, floods, drought, storms, land cover change, ocean climate change, fires, heatwaves and sea level changes all influence diseases triggered by viruses, bacteria, animals, fungi, protozoans, plants and chromists. Pathogenic diseases were primarily transmitted by vectors, although case examples were also found for waterborne, airborne, direct contact and foodborne transmission pathways. Ultimately, the research found that more than 58% (218 out of 375) of known human pathogenic diseases had been affected, at some point, by at least one climatic hazard, via 1,006 unique pathways.
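
Conceptually, the screening reduces to a large cross-product tally. Here is a toy Python version of that bookkeeping (hazard and disease lists truncated, and the evidence table invented purely for illustration):

    from itertools import product

    hazards = ["warming", "drought", "heatwaves", "wildfires",
               "extreme precipitation", "floods", "storms",
               "sea level rise", "ocean change", "land cover change"]
    diseases = ["dengue", "malaria", "cholera", "Zika", "hepatitis"]  # toy subset

    # Map each (hazard, disease) pair to the case studies documenting a
    # pathway; in the study this table was filled from >70,000 papers.
    evidence = {
        ("floods", "cholera"): ["case A"],
        ("warming", "dengue"): ["case B", "case C"],
        ("storms", "hepatitis"): ["case D"],
    }

    affected = {d for h, d in product(hazards, diseases)
                if evidence.get((h, d))}
    n_links = sum(len(v) for v in evidence.values())
    print(f"{len(affected)}/{len(diseases)} diseases aggravated "
          f"({len(affected) / len(diseases):.0%}) via {n_links} documented links")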

"Given the extensive and pervasive consequences of the COVID 19 pandemic, it was truly scary to discover the massive health vulnerability resulting as a consequence of greenhouse gas emissions," said Camilo Mora, geography professor in the College of Social Sciences (CSS) and lead author of the study. "There are just too many diseases, and pathways of transmission, for us to think that we can truly adapt to climate change. It highlights the urgent need to reduce greenhouse gas emissions globally."

An interactive web-page showing each connection between a climatic hazard and a disease case was developed by the research team. The tool allows users to query specific hazards, pathways and disease groups, and see the available evidence.

The UH Manoa research team included experts from CSS, the Department of Earth Sciences in the School of Ocean and Earth Science and Technology (SOEST), the Marine Biology Graduate Program in the School of Life Sciences, the Department of Natural Resources and Environmental Management in the College of Tropical Agriculture and Human Resources, and the Hawaii Institute of Marine Biology in SOEST.

Other key findings include:

  •     Climatic hazards are bringing pathogens closer to people. Numerous climatic hazards are increasing the area and duration of environmental suitability, facilitating the spatial and temporal expansion of vectors and pathogens. Warming and precipitation changes, for instance, were associated with range expansion of vectors such as mosquitoes, ticks, fleas, birds and several mammals implicated in outbreaks by viruses, bacteria, animals and protozoans, including dengue, chikungunya, plague, Lyme disease, West Nile virus, Zika, trypanosomiasis, echinococcosis and malaria, to name a few.
  •     Climatic hazards are bringing people closer to pathogens. Climatic hazards were also implicated in the forced displacement and migration of people, causing or increasing new contacts with pathogens. Heatwaves, for instance, have been associated with rising cases of several waterborne diseases, such as infections associated with Vibrio (a kind of bacteria), primary amoebic meningoencephalitis and gastroenteritis. Storms, floods and sea level rise caused human displacements implicated in cases of leptospirosis, cryptosporidiosis, Lassa fever, giardiasis, gastroenteritis, Legionnaires' disease, cholera, salmonellosis, shigellosis, pneumonia, typhoid, hepatitis, respiratory disease and skin diseases, among others.
  •     Climatic hazards have enhanced specific aspects of pathogens, including improved climate suitability for reproduction, accelerated life cycles, longer seasons of likely exposure, enhanced pathogen-vector interactions (for example, through shortened incubation periods) and increased virulence. For instance, storms, heavy rainfall and floods created stagnant water, increasing breeding and growing grounds for mosquitoes and the array of pathogens that they transmit (for example, leishmaniasis, malaria, Rift Valley fever, yellow fever, St. Louis encephalitis, dengue and West Nile fever). Climatic hazards were also implicated in the increasing capacity of pathogens to cause more severe illness. For example, heatwaves were suggested as a natural selective pressure toward "heat-resistant" viruses, whose spillover into human populations results in increased virulence because such viruses can better cope with the human body's main defense, fever.
  •     Climatic hazards have also diminished human capacity to cope with pathogens by altering body condition; adding stress from exposure to hazardous conditions; forcing people into unsafe conditions; and damaging infrastructure, forcing exposure to pathogens and/or reducing access to medical care. Drought, for instance, was conducive to poor sanitation responsible for cases of trachoma, chlamydia, cholera, conjunctivitis, Cryptosporidium, diarrheal diseases, dysentery, Escherichia coli, Giardia, Salmonella, scabies and typhoid fever.


Researchers also found that, while the great majority of diseases were aggravated by climatic hazards, some were diminished (63 out of 286 diseases). Warming, for example, appears to have reduced the spread of some viral diseases, probably because conditions became unsuitable for the virus or because the human immune system is stronger in warmer conditions. However, most diseases that were diminished by at least one hazard were at times aggravated by another hazard, and sometimes even by the same one.

Read more at Science Daily

Into the brain of comb jellies: Scientists explore the evolution of neurons

Neurons, the specialized cells of the nervous system, are possibly the most complicated cell type ever to have evolved. In humans, these cells are capable of processing and transmitting vast amounts of information. But how such complicated cells first came about remains a long-standing debate.

Now, scientists in Japan have revealed the type of messenger -- molecules that carry signals from one cell to another -- that likely functioned in the most ancestral nervous system.

The study, published August 8 in Nature Ecology and Evolution, also revealed key similarities between the nervous systems of two early-diverging animal lineages -- the lineage of jellyfish and anemones (also called cnidarians) and that of comb jellies (ctenophores) -- reigniting an earlier hypothesis that neurons evolved only once.

Despite their supposed simplicity, very little is known about the nervous systems of these ancient animals. Of the four animal lineages that branched off before the rise of more complex animals, only comb jellies (the first ancient lineage to diverge) and cnidarians (the last ancient lineage to diverge) are known to possess neurons. But the uniqueness of the comb jelly nervous system compared to that seen in cnidarians and more complex animals, and the absence of neurons in the two lineages that diverged in between, led some scientists to hypothesize that neurons evolved twice.

But Professor Watanabe, who leads the Evolutionary Neurobiology Unit at the Okinawa Institute of Science and Technology (OIST), remained unconvinced.

"Indeed, comb jellies lack a lot of neural proteins that we see in more evolved animal lineages," he said. "But for me, a lack of these proteins isn't enough evidence for two independent neuron origins."

In his study, Prof. Watanabe focused on an ancient and diverse group of neural messengers called neuropeptides. These short peptide chains are first synthesized in neurons as a long peptide chain, before being cleaved by digestive enzymes into many short peptides. They are the major form of messenger found in cnidarians, and they also play a role in neural communication in humans and other complex animals.

However, past attempts to find similar neuropeptides in comb jellies have been unsuccessful. The main problem, explained Prof. Watanabe, is that the mature short peptides are encoded by only short stretches of DNA, which mutate frequently in these ancient lineages, making DNA comparisons too difficult. While artificial intelligence has identified potential peptides, these have not yet been experimentally validated.

So, Prof. Watanabe's research team approached the problem from a new direction. They extracted peptides from sponges, cnidarians and comb jellies and used mass spectrometry to search for short peptides. The team was able to find 28 short peptides in cnidarians and comb jellies and determine their amino acid sequences.

Now knowing their structures, the researchers visualized the short peptides under a fluorescent microscope, allowing them to see which cells produced them in both cnidarians and comb jellies.

In comb jellies, they found that one type of neuropeptide-expressing cell looked similar to classic neurons, with thin projections called neurites extending out from the cell.

But the short peptides were also produced in a second type of cell that lacked neurites. The researchers suspect these could be an early version of neuroendocrine cells -- cells which receive signals from neurons and then release signals, like hormones, to other organs in the body.

The researchers also compared which genes were expressed in cnidarian and comb jelly neurons. They found that, as well as having some short neuropeptides in common, both types of neurons expressed a similar array of other proteins essential for neuronal function.

"We already know that cnidarian peptide-expressing neurons are homologous to those seen in more complex animals. Now, comb jelly neurons have also been found to have a similar "genetic signature," suggesting that these neurons share the same evolutionary origin," said Prof. Watanabe. "In other words, it's most likely that neurons only evolved once."

This means, added Prof. Watanabe, that peptide-expressing neurons are probably the most ancestral form, with chemical neurotransmitters arising later. For Prof. Watanabe, these findings bring new, exciting questions to the forefront of his research.

Read more at Science Daily

Aug 8, 2022

Growing cereal crops with less fertilizer

Researchers at the University of California, Davis, have found a way to reduce the amount of nitrogen fertilizers needed to grow cereal crops. The discovery could save farmers in the United States billions of dollars annually in fertilizer costs while also benefiting the environment.

The research comes out of the lab of Eduardo Blumwald, a distinguished professor of plant sciences, who has found a new pathway for cereals to capture the nitrogen they need to grow.

The discovery could also help the environment by reducing nitrogen pollution, which can lead to contaminated water resources, increased greenhouse gas emissions and human health issues. The study was published in the journal Plant Biotechnology.

Nitrogen is key to plant growth, and agricultural operations depend on chemical fertilizers to increase productivity. But much of what is applied is lost, leaching into soils and groundwater. Blumwald's research could create a sustainable alternative.

"Nitrogen fertilizers are very, very expensive," Blumwald said. "Anything you can do to eliminate that cost is important. The problem is money on one side, but there are also the harmful effects of nitrogen on the environment."

A new pathway to natural fertilizer

Blumwald's research centers on increasing the conversion of nitrogen gas in the air into ammonium by soil bacteria -- a process known as nitrogen fixation.

Legumes such as peanuts and soybeans have root nodules that can use nitrogen-fixing bacteria to provide ammonium to the plants. Cereal plants like rice and wheat don't have that capability and must rely on taking in inorganic nitrogen, such as ammonia and nitrate, from fertilizers in the soil.

"If a plant can produce chemicals that make soil bacteria fix atmospheric nitrogen gas, we could modify the plants to produce more of these chemicals," Blumwald said. "These chemicals will induce soil bacterial nitrogen fixation and the plants will use the ammonium formed, reducing the amount of fertilizer used."

Blumwald's team used chemical screening and genomics to identify compounds in rice plants that enhanced the nitrogen-fixing activity of the bacteria.

Then they identified the pathways generating the chemicals and used gene editing technology to increase the production of the compounds that stimulated the formation of biofilms. Those biofilms harbor bacteria that enhance nitrogen conversion. As a result, the nitrogen-fixing activity of the bacteria increased, as did the amount of ammonium in the soil available to the plants.

"Plants are incredible chemical factories," he said. "What this could do is provide a sustainable alternative agricultural practice that reduces the use of excessive nitrogen fertilizers."

The pathway could also be used by other plants. A patent application on the technique has been filed by the University of California and is pending.

Read more at Science Daily

A simple, cheap material for carbon capture, perhaps from tailpipes

Using an inexpensive polymer called melamine -- the main component of Formica -- chemists have created a cheap, easy and energy-efficient way to capture carbon dioxide from smokestacks, a key goal for the United States and other nations as they seek to reduce greenhouse gas emissions.

The process for synthesizing the melamine material, published this week in the journal Science Advances, could potentially be scaled down to capture emissions from vehicle exhaust or other movable sources of carbon dioxide. Carbon dioxide from fossil fuel burning makes up about 75% of all greenhouse gases produced in the U.S.

The new material is simple to make, requiring primarily off-the-shelf melamine powder -- which today costs about $40 per ton -- along with formaldehyde and cyanuric acid, a chemical that, among other uses, is added with chlorine to swimming pools.

"We wanted to think about a carbon capture material that was derived from sources that were really cheap and easy to get. And so, we decided to start with melamine," said Jeffrey Reimer, Professor of the Graduate School in the Department of Chemical and Biomolecular Engineering at the University of California, Berkeley, and one of the corresponding authors of the paper.

The so-called melamine porous network captures carbon dioxide with an efficiency comparable to early results for another relatively recent material for carbon capture, metal organic frameworks, or MOFs. UC Berkeley chemists created the first such carbon-capture MOF in 2015, and subsequent versions have proved even more efficient at removing carbon dioxide from flue gases, such as those from a coal-fired power plant.

But Haiyan Mao, a UC Berkeley postdoctoral fellow who is first author of the paper, said that melamine-based materials use much cheaper ingredients, are easier to make and are more energy efficient than most MOFs. The low cost of porous melamine means that the material could be deployed widely.

"In this study, we focused on cheaper material design for capture and storage and elucidating the interaction mechanism between CO2 and the material," Mao said. "This work creates a general industrialization method towards sustainable CO2 capture using porous networks. We hope we can design a future attachment for capturing car exhaust gas, or maybe an attachment to a building or even a coating on the surface of furniture."

The work is a collaboration among a group at UC Berkeley led by Reimer; a group at Stanford University led by Yi Cui, who is director of the Precourt Institute for Energy, the Somorjai Visiting Miller Professor at UC Berkeley, and a former UC Berkeley postdoctoral fellow; UC Berkeley Professor of the Graduate School Alexander Pines; and a group at Texas A&M University led by Hong-Cai Zhou. Jing Tang, a postdoctoral fellow at Stanford and the Stanford Linear Accelerator Center and a visiting scholar at UC Berkeley, is co-first author with Mao.

Carbon neutrality by 2050

While eliminating fossil fuel burning is essential to halting climate change, a major interim strategy is to capture emissions of carbon dioxide -- the main greenhouse gas -- and store the gas underground or turn CO2 into usable products. The U.S. Department of Energy has already announced projects totaling $3.18 billion to boost advanced and commercially scalable technologies for carbon capture, utilization and sequestration (CCUS) to reach an ambitious flue gas CO2 capture efficiency target of 90%. The ultimate U.S. goal is net zero carbon emissions by 2050.

But carbon capture is far from commercially viable. The best technique today involves piping flue gases through liquid amines, which bind CO2. But this requires large amounts of energy to release the carbon dioxide once it's bound to the amines, so that it can be concentrated and stored underground. The amine mixture must be heated to between 120 and 150 degrees Celsius (250-300 degrees Fahrenheit) to regenerate the CO2.

In contrast, the melamine porous network modified with cyanuric acid and DETA (diethylenetriamine) captures CO2 at about 40 degrees Celsius, slightly above room temperature, and releases it at 80 degrees Celsius, below the boiling point of water. The energy savings come from not having to heat the substance to high temperatures.
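As a rough illustration of why the smaller temperature swing matters, the back-of-the-envelope sketch below compares the sensible heat needed to cycle one kilogram of each sorbent between its capture and release temperatures. The heat capacities are assumed values chosen only for illustration, and a real energy balance would also include the heat of desorption, which this sketch ignores.

```python
# Back-of-the-envelope sensible-heat comparison (illustrative only).
# Heat capacities are assumed; a real analysis would also include
# the heat of desorption, which this sketch ignores.

CP_AMINE_SOLUTION = 3.9  # kJ/(kg*K), assumed for an aqueous amine solution
CP_SOLID_SORBENT = 1.5   # kJ/(kg*K), assumed for a porous polymer network

def sensible_heat(cp, t_capture_c, t_release_c):
    """Heat (kJ) to warm 1 kg of sorbent from capture to release temperature."""
    return cp * (t_release_c - t_capture_c)

amine = sensible_heat(CP_AMINE_SOLUTION, 40, 135)   # amines regenerate at ~120-150 C
melamine = sensible_heat(CP_SOLID_SORBENT, 40, 80)  # melamine network releases at ~80 C

print(f"aqueous amine:    ~{amine:.0f} kJ per kg per cycle")     # ~370 kJ
print(f"melamine network: ~{melamine:.0f} kJ per kg per cycle")  # ~60 kJ
```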

In its research, the Berkeley/Stanford/Texas team focused on the common polymer melamine, which is used not only in Formica but also in inexpensive dinnerware and utensils, industrial coatings and other plastics. Treating melamine powder with formaldehyde -- which the researchers did in kilogram quantities -- creates nanoscale pores in the melamine that the researchers thought would adsorb CO2.

Mao said that tests confirmed that formaldehyde-treated melamine adsorbed CO2 somewhat, but adsorption could be much improved by adding another amine-containing chemical, DETA, to bind CO2. She and her colleagues subsequently found that adding cyanuric acid during the polymerization reaction increased the pore size dramatically and radically improved CO2 capture efficiency: Nearly all the carbon dioxide in a simulated flue gas mixture was adsorbed within about 3 minutes.

The addition of cyanuric acid also allowed the material to be used over and over again.

Mao and her colleagues conducted solid-state nuclear magnetic resonance (NMR) studies to understand how cyanuric acid and DETA interacted to make carbon capture so efficient. The studies showed that cyanuric acid forms strong hydrogen bonds with the melamine network that helps stabilize DETA, preventing it from leaching out of the melamine pores during repeated cycles of carbon capture and regeneration.

"What Haiyan and her colleagues were able to show with these elegant techniques is exactly how these groups intermingle, exactly how CO2 reacts with them, and that in the presence of this pore-opening cyanuric acid, she's able to cycle CO2 on and off many times with capacity that's really quite good," Reimer said. "And the rate at which CO2 adsorbs is actually quite rapid, relative to some other materials. So, all the practical aspects at the laboratory scale of this material for CO2 capture have been met, and it's just incredibly cheap and easy to make."

"Utilizing solid-state nuclear magnetic resonance techniques, we systematically elucidated in unprecedented, atomic-level detail the mechanism of the reaction of the amorphous networks with CO2," Mao said. "For the energy and environmental community, this work creates a high-performance, solid-state network family together with a thorough understanding of the mechanisms, but also encourages the evolution of porous materials research from trial-and-error methods to rational, step-by-step, atomic-level modulation."

The Reimer and Cui groups are continuing to tweak the pore size and amine groups to improve the carbon capture efficiency of melamine porous networks, while maintaining the energy efficiency. This involves using a technique called dynamic combinatorial chemistry to vary the proportions of ingredients to achieve effective, scalable, recyclable and high-capacity CO2 capture.

Read more at Science Daily

No, the human brain did not shrink 3,000 years ago

Did the 12th century B.C.E. -- a time when humans were forging great empires and developing new forms of written text -- coincide with an evolutionary reduction in brain size? Think again, says a UNLV-led team of researchers who refute a hypothesis that's growing increasingly popular in the scientific community.

Last year, a group of scientists made headlines when they concluded that the human brain shrank during the transition to modern urban societies about 3,000 years ago because, they said, our ancestors' ability to store information externally in social groups decreased our need to maintain large brains. Their hypothesis, which explored decades-old ideas on the evolutionary reduction of modern human brain size, was based on a comparison to evolutionary patterns seen in ant colonies.

Not so fast, said UNLV anthropologist Brian Villmoare and Liverpool John Moores University scientist Mark Grabowski.

In a new paper published last week in Frontiers in Ecology and Evolution, the UNLV-led team analyzed the dataset used in last year's study and rejected its findings.

"We were struck by the implications of a substantial reduction in modern human brain size at roughly 3,000 years ago, during an era of many important innovations and historical events -- the appearance of Egypt's New Kingdom, the development of Chinese script, the Trojan War, and the emergence of the Olmec civilization, among many others," Villmoare said.

"We re-examined the dataset from DeSilva et al. and found that human brain size has not changed in 30,000 years, and probably not in 300,000 years," Villmoare said. "In fact, based on this dataset, we can identify no reduction in brain size in modern humans over any time-period since the origins of our species."

Read more at Science Daily

Down on Vitamin D? It could be the cause of chronic inflammation

Inflammation is an essential part of the body's healing process. But when it persists, it can contribute to a wide range of complex diseases including type 2 diabetes, heart disease, and autoimmune diseases.

Now, world-first genetic research from the University of South Australia shows a direct link between low levels of vitamin D and high levels of inflammation, providing an important biomarker for identifying people at higher risk of, or greater severity of, chronic illnesses with an inflammatory component.

The study examined the genetic data of 294,970 participants in the UK Biobank, using Mendelian randomization to show the association between vitamin D and C-reactive protein levels, an indicator of inflammation.
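For readers unfamiliar with Mendelian randomization, the toy sketch below shows the core idea in its simplest single-variant form: a genetic variant that raises the exposure (vitamin D) serves as a natural instrument, and the ratio of its effects on the outcome and on the exposure (the Wald ratio) estimates the causal effect, free of the confounding that biases a naive regression. All numbers here are invented, and the actual study's analysis was far more sophisticated.

```python
# Toy Mendelian randomization via the Wald ratio (illustrative only).
# All effect sizes and data are invented; the actual study used real
# UK Biobank genotypes and more sophisticated MR methods.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

g = rng.binomial(2, 0.3, n)  # copies of a (hypothetical) vitamin-D-raising allele
u = rng.normal(size=n)       # unmeasured confounder of vitamin D and CRP
vit_d = 50 + 5 * g + 3 * u + rng.normal(size=n) * 10
crp = 4 - 0.05 * vit_d + 2 * u + rng.normal(size=n)  # true causal effect: -0.05

beta_gx = np.polyfit(g, vit_d, 1)[0]  # genotype -> exposure
beta_gy = np.polyfit(g, crp, 1)[0]    # genotype -> outcome
print("MR (Wald ratio) estimate:", beta_gy / beta_gx)              # close to -0.05
print("Naive regression estimate:", np.polyfit(vit_d, crp, 1)[0])  # biased toward 0
```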

Lead researcher, UniSA's Dr Ang Zhou, says the findings suggest that boosting vitamin D in people with a deficiency may reduce chronic inflammation.

"Inflammation is your body's way of protecting your tissues if you've been injured or have an infection," Dr Zhou says.

"High levels of C-reactive protein are generated by the liver in response to inflammation, so when your body is experiencing chronic inflammation, it also shows higher levels of C-reactive protein.

"This study examined vitamin D and C-reactive proteins and found a one-way relationship between low levels of vitamin D and high levels of C-reactive protein, expressed as inflammation.

"Boosting vitamin D in people with deficiencies may reduce chronic inflammation, helping them avoid a number of related diseases."

Supported by the National Health and Medical Research Council and published in the International Journal of Epidemiology, the study also raises the possibility that having adequate vitamin D concentrations may mitigate complications arising from obesity and reduce the risk or severity of chronic illnesses with an inflammatory component, such as CVDs, diabetes, and autoimmune diseases.

Senior investigator and Director of UniSA's Australian Centre for Precision Health, Professor Elina Hyppönen, says these results are important and provide an explanation for some of the controversies in reported associations with vitamin D.

"We have repeatedly seen evidence for health benefits for increasing vitamin D concentrations in individuals with very low levels, while for others, there appears to be little to no benefit." Prof Hyppönen says.

Read more at Science Daily

Aug 7, 2022

Volcanic super eruptions are millions of years in the making -- followed by swift surge

Researchers at the University of Bristol and the Scottish Universities Environmental Research Centre have discovered that super-eruptions occur when huge accumulations of magma deep in the Earth's crust, formed over millions of years, move rapidly to the surface, disrupting pre-existing rock.

Using a model for crustal flow, an international team of scientists was able to show that pre-existing plutons -- bodies of intrusive rock formed from solidified magma -- were emplaced over a few million years prior to four known gigantic super-eruptions, and that the disruption of these plutons by newly emplaced magmas took place extraordinarily rapidly. While the accumulation of the magma that supplies a super-eruption takes place over a prolonged period, the magma disrupts the crust and then erupts in just a few decades.

The findings, published today in Nature, explain these extreme differences between the timescales of magma generation and eruption by the flow of hot but solid crust in response to the ascent of magma, accounting for both the infrequency of these eruptions and their huge volumes.

Professor Steve Sparks of Bristol's School of Earth Sciences explained: "The longevity of plutonic and related volcanic systems contrasts with the short timescales needed to assemble shallow magma chambers prior to large-magnitude eruptions of molten rock. Crystals formed from earlier magma pulses, entrained within erupting magmas, are stored at temperatures near or below the solidus for long periods prior to eruption, and commonly have very short residence in host magmas of just decades or less."

This study casts doubt on the interpretation that old crystals are stored for prolonged periods at temperatures high enough for some molten rock to be present, and indicates instead that the crystals derived from previously emplaced and completely solidified plutons (granites).

Scientists have known that volcanic super-eruptions eject crystals derived from older rocks. Before this study, however, those crystals were widely thought to have originated in hot environments above the melting points of rock. Previous studies have shown that the magma chambers for super-eruptions form very rapidly, but there was no convincing explanation for this rapid process. While modelling suggested that super-volcanic eruptions would need to be preceded by very long periods of granite pluton emplacement in the upper crust, evidence for this inference was largely lacking.

Prof Sparks added: "By studying the age and character of the tiny crystals erupted with molten rock, we can help understand how such eruptions happen.

"The research provides an advance in understanding the geological circumstances that enable super eruptions to take place. This will help identify volcanoes that have potential for future super-eruptions."

Such eruptions are very rare; Bristol scientists estimate that one occurs on Earth only about every 20,000 years. However, such eruptions are highly destructive locally and can cause severe, global-scale climate change with catastrophic consequences.

Read more at Science Daily

How measuring blood pressure in both arms can help reduce cardiovascular risk and hypertension

Blood pressure should be measured in both arms and the higher reading should be adopted to improve hypertension diagnosis and management, according to a new study.

The research, led by the University of Exeter, analysed data from 53,172 participants in 23 studies worldwide to examine the implications of choosing the higher or lower arm pressure.

The study, published in Hypertension, found that using the higher arm blood pressure reading reclassified 12 per cent of people as having hypertension who would have fallen below the threshold for diagnosis if the lower-reading arm were used.

Although international guidelines advise checking blood pressure in both arms, the practice is not yet widely adopted in clinics.

Study lead Dr Christopher Clark, from the University of Exeter, said: "High blood pressure is a global issue and poor management can be fatal. This study shows that failure to measure both arms and use the higher reading arm will not only result in underdiagnosis and undertreatment of high blood pressure but also under-estimation of cardiovascular risks for millions of people worldwide."

The team found that using the higher arm measurement rather than the lower one reclassified 6,572 (12.4%) of participants' systolic blood pressures from below to above 130 mm Hg, and 6,339 (11.9%) from below to above 140 mm Hg, moving them above commonly used diagnostic thresholds for hypertension.
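The selection rule and the quoted fractions are easy to reproduce; here is a minimal sketch, with invented variable names, assuming one paired left/right systolic reading per participant:

```python
# Count participants whose hypertension status changes when the higher
# of the two arm readings is used instead of the lower (illustrative sketch).

def reclassified(left_sys, right_sys, threshold):
    """Participants below threshold on the lower-reading arm but at/above it on the higher."""
    count = 0
    for left, right in zip(left_sys, right_sys):
        lower, higher = min(left, right), max(left, right)
        if lower < threshold <= higher:
            count += 1
    return count

# Sanity check of the fractions reported above:
print(f"{6572 / 53172:.1%}")  # -> 12.4% reclassified across the 130 mm Hg threshold
print(f"{6339 / 53172:.1%}")  # -> 11.9% reclassified across the 140 mm Hg threshold
```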

Dr Clark continued: "It's impossible to predict the best arm for blood pressure measurement as some people have a higher reading in their left arm compared to right and equal numbers have the opposite. Therefore, it's important to check both arms as detecting high blood pressure correctly is a vital step towards giving the right treatment to the right people."

"Our study now provides the first evidence that the higher reading arm blood pressure is the better predictor of future cardiovascular risk."

The study also revealed that higher arm blood pressure readings better predicted all-cause mortality, cardiovascular mortality, and cardiovascular events, compared to the lower arm reading. The authors stressed the importance of assessing both arms in the diagnosis and management of hypertension and cardiovascular diseases.

From Science Daily