Nov 12, 2021

Tread lightly: ‘Eggshell planets’ possible around other stars

Strange 'eggshell planets' are among the rich variety of exoplanets possible, according to a study from Washington University in St. Louis. These rocky worlds have an ultra-thin outer brittle layer and little to no topography. Such worlds are unlikely to have plate tectonics, raising questions as to their habitability.

Only a small subset of extrasolar planets are likely eggshell planets. Planetary geologist Paul Byrne, first author of the new modeling study in the Journal of Geophysical Research: Planets, said at least three such worlds found during previous astronomical surveys may already be known. Scientists could use planned and future space telescopes to examine these exoplanets in greater detail and confirm their geological characteristics.

"Understanding whether you've got the possibility of plate tectonics is a really important thing to know about a world, because plate tectonics may be required for a large rocky planet to be habitable," said Byrne, associate professor in the Department of Earth and Planetary Sciences in Arts & Sciences and a faculty fellow of the university's McDonnell Center for the Space Sciences. "It's therefore especially important when we're talking about looking for Earth-like worlds around other stars and when we're characterizing planetary habitability generally."

"What we've laid out here is essentially a how-to guide, or handy manual," he said. "If you have a planet of a given size, at a given distance from its star and of a given mass, then with our results you can make some estimates for a variety of other features -- including whether it may have plate tectonics."

A new way to think about exoplanets

To date, exoplanets have largely been the domain of astronomers, because space scientists rely on astronomical techniques and instruments to detect exoplanets. More than 4,000 exoplanets have been discovered and are considered "confirmed." Byrne's study offers new and concrete ways that other scientists could identify eggshell planets, as well as other types of exoplanets that could be interesting because of their particular combinations of size, age and distance to their host star.

"We have imaged a few exoplanets, but they are splotches of light orbiting a star. We have no technical ability to actually see the surface of exoplanets yet," Byrne said. "This paper is one of a small but growing number of studies taking a geological or geophysical perspective to try and understand the worlds that we cannot directly measure right now."

Planets have certain qualities that are inherent to the planets themselves, like their size, interior temperature and the materials that they are made of. Other properties are more of a function of the planet's environment, like how far it is from its star. The planets that humans know best are those in our own solar system -- but these truths are not necessarily universal for planets that orbit other stars.

"We know from published work that there are exoplanets that experience conditions in a more extreme way than what we see in our solar system," Byrne said. "They might be closer to their star, or they might be much larger, or have hotter surfaces, than the planets we see in our own system."

Byrne and his collaborators wanted to see which planetary and stellar parameters play the most important role in determining the thickness of a planet's outer brittle layer, which is known as the lithosphere.

This thickness helps determine whether, for example, a planet can support high topography such as mountains, or has the right balance between rigidity and flexibility for one part of the surface to dive down, or subduct, beneath another -- the hallmark of plate tectonics. It is this process that helps Earth regulate its temperature over geological timescales, and the reason why plate tectonics is thought to be an important component of planetary habitability.

For their modeling effort, the scientists chose a generic rocky world as a starting point. ("It was kind of Earth-sized -- although we did consider size in there, too," he said.)

"And then we spun the dials," Byrne said. "We literally ran thousands of models."

Perhaps similar to parts of Venus

They discovered that surface temperature is the primary control on the thickness of brittle exoplanet lithospheres, although planetary mass, distance to its star and even age all play a role. The new models predict that worlds that are small, old or far from their star likely have thick, rigid layers, but, in some circumstances, planets might have an outer brittle layer only a few kilometers thick -- these so-called eggshell planets.
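
To make that parameter dependence concrete, here is a minimal back-of-the-envelope sketch in Python -- not the authors' model -- that estimates brittle-layer thickness from a linear conductive geotherm. The brittle-ductile transition temperature, the thermal conductivity, and the surface-temperature and heat-flux values scanned are all assumptions chosen for illustration.

    # Toy estimate of a rocky planet's brittle-layer thickness (a sketch, not the
    # JGR: Planets model). Assumes a linear conductive geotherm and a single
    # brittle-ductile transition temperature; all parameter values are invented.
    def brittle_thickness_km(surface_temp_K, heat_flux_mW_m2,
                             t_bdt_K=1000.0,          # assumed transition temperature
                             conductivity_W_mK=3.0):  # assumed rock conductivity
        """Depth (km) at which the geotherm reaches the brittle-ductile transition."""
        gradient_K_per_m = (heat_flux_mW_m2 * 1e-3) / conductivity_W_mK  # dT/dz = q/k
        if surface_temp_K >= t_bdt_K:
            return 0.0  # surface already ductile: essentially no brittle layer
        return (t_bdt_K - surface_temp_K) / gradient_K_per_m / 1e3

    # "Spin the dials": hotter surfaces and higher interior heat flux (younger,
    # larger, or closer-in planets) give thinner brittle layers.
    for T_s in (250, 500, 750, 950):        # surface temperature, K
        for q in (20, 50, 100):             # interior heat flux, mW/m^2
            d = brittle_thickness_km(T_s, q)
            tag = "  <- eggshell?" if d < 5 else ""
            print(f"T_surf={T_s:4d} K  q={q:3d} mW/m2  brittle layer ~{d:7.1f} km{tag}")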

Although we are a long way from directly imaging the surfaces of these eggshell planets, they might resemble the lowlands on Venus, Byrne noted. Those lowlands contain vast expanses of lavas but have little high-standing terrain, because the lithosphere there is thin as a result of searing surface temperatures.

"Our overall goal is more than just understanding the vagaries of exoplanets," Byrne said. "Ultimately we want to help contribute to identifying the properties that make a world habitable. And not just temporarily, but habitable for a long time, because we think life probably needs a while to get going and become sustainable."

Read more at Science Daily

Crushed resistance: Tectonic plate sinking into a subduction zone

The Earth's surface consists of a few large plates and numerous smaller ones that are continuously moving either away from or towards each other at an extremely slow pace. At the boundary between two plates, the heavier oceanic plate sinks below the lighter continental plate in a process that experts call subduction. For a long time, though, those experts have been puzzling over what happens to the plate margin that dives into the Earth's mantle, known as the subducting slab. Some scientists assumed that the slab remains as rigid and strong as the plate itself and simply bends under gravity and through mechanical interaction with the Earth's mantle.

Heavily deformed plate margin

However, models of the Earth's interior constructed by scientists using seismic tomography revealed contradictory results: in the western United States, for example, the researchers observed anomalies at different depths on their tomographic images. These indicated that the slabs submerged beneath the Americas may be segmented. The scientists therefore concluded that the slabs in the mantle must be strongly deformed and are by no means rigid and immobile.

With the aid of computer models, other researchers, including ETH Professor Paul Tackley, confirmed that subducted slabs are indeed weak and deformable. And they formulated the subduction dichotomy hypothesis that can be expressed in simple terms: plates on the surface are rigid and strong (read: non-deformable), while the slabs in the mantle are soft and weak.

Seeking a plausible mechanism

"Until now, however, research has lacked a plausible mechanism to explain how this bending occurs and why sinking plate margins (slabs) become soft and weak," says Taras Gerya, Professor of Geophysics at ETH Zurich.

Observations revealed that numerous faults are found on the upper surface of a sinking plate where it meets the other plate. Seawater penetrates the plate through these faults, drawn in by suction forces. This weakens the plate on its upper side.

Yet this alone is not sufficient to explain the segmentation of the slab -- the anomalies observed on tomographic images. Another mechanism must also be at work to weaken the underside of the margin enough for segmentation to occur.

Gerya and his American colleagues David Bercovici and Thorsten Becker therefore suspected that compression of the underside of the plate at the point where it bends downward was "crushing" the large, strong, millimetre-sized olivine crystals in the plate by forcing them to recrystallise into a much weaker, micrometre-sized granular aggregate -- thereby reducing the plate's resistance and allowing it to bend.
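
The size of that weakening can be illustrated with the standard grain-size sensitivity of diffusion creep, in which effective viscosity scales roughly as grain size raised to a power of about 2 to 3. The exponent and grain sizes in the sketch below are textbook-style assumptions, not values taken from the Nature paper.

    # Back-of-the-envelope: crushing olivine grains from ~1 mm to ~1 micrometre.
    # In diffusion creep, effective viscosity scales roughly as grain size**m with
    # m ~ 2-3 (an assumed textbook range, not a figure from the study).
    d_old = 1e-3   # m, millimetre-sized crystals
    d_new = 1e-6   # m, micrometre-sized recrystallised grains
    for m in (2, 3):
        factor = (d_old / d_new) ** m
        print(f"m = {m}: effective viscosity drops by a factor of ~{factor:.0e}")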

Sinking plate margin divided into segments

Using a new two-dimensional computer model that integrated this grain reduction as a central mechanism, the three researchers then studied the process in silico. Their study was recently published in the journal Nature.

And indeed, the simulations revealed that sinking plates deform due to the massive reduction of olivine grains on their undersides, splitting into individual segments over time. These segments are rigid and stiff, but remain connected to each other by weak hinges made of ground grains.

In the simulations, parallel cracks appear at the segment boundaries on the plate's upper surface. Below these cracks are the zones with "crushed" mineral grains.

"Just imagine you're breaking a bar of chocolate," Gerya says with a grin. A bar of chocolate, too, can be divided into segments only along the specified weak points. The squares of chocolate are rigid, but the connecting pieces between them are weak. "That's why a sinking plate isn't uniformly bent or deformed, but segmented."

And here's how it might play out in reality: The heavier plate sinks under the lighter one. A weak spot with smaller mineral grains within the sinking plate allows it to bend. The bending stress causes the minerals to crumble in more places on the underside. The resulting weakness leads to a fracture, and a segment forms. As the plate margin sinks deeper and deeper into the mantle, it causes further segments to form at the bend. As a result, the slab eventually resembles a chain with rigid links and bendable connectors. At a depth of about 600 kilometres, the segmented plate margin slides onto what is known as the 670 km discontinuity in the Earth's mantle, from which point it moves horizontally.

Clues from nature support simulation

"The results of our simulations are consistent with observations in nature," Gerya explains. A great deal of research has been done on the natural situation along the Japan Trench, where the Pacific plate sinks below the Okhotsk plate. The pattern of faults found here is an exact match for the pattern produced in the simulations.

Researchers have also studied the seismic velocity structure of the subducting Japan slab in detail, using a recently produced high-resolution seismic tomography model. They found that the velocity of the seismic waves sent out by earthquakes was reduced at some nodes inside the slab. The pattern with which these nodes occur in reality coincides with that of the segment boundaries from the simulations. And both in nature and in the computer model, it is zones with very small crystals only micrometres across that are responsible for reducing the velocity of the seismic waves.

Read more at Science Daily

‘Dancing molecules’ successfully repair severe spinal cord injuries

Northwestern University researchers have developed a new injectable therapy that harnesses "dancing molecules" to reverse paralysis and repair tissue after severe spinal cord injuries.

In a new study, researchers administered a single injection to tissues surrounding the spinal cords of paralyzed mice. Just four weeks later, the animals regained the ability to walk.

The research will be published in the Nov. 12 issue of the journal Science.

By sending bioactive signals to trigger cells to repair and regenerate, the breakthrough therapy dramatically improved severely injured spinal cords in five key ways: (1) The severed extensions of neurons, called axons, regenerated; (2) scar tissue, which can create a physical barrier to regeneration and repair, significantly diminished; (3) myelin, the insulating layer of axons that is important in transmitting electrical signals efficiently, reformed around cells; (4) functional blood vessels formed to deliver nutrients to cells at the injury site; and (5) more motor neurons survived.

After the therapy performs its function, the materials biodegrade into nutrients for the cells within 12 weeks and then completely disappear from the body without noticeable side effects. This is the first study in which researchers controlled the collective motion of molecules through changes in chemical structure to increase a therapeutic's efficacy.

"Our research aims to find a therapy that can prevent individuals from becoming paralyzed after major trauma or disease," said Northwestern's Samuel I. Stupp, who led the study. "For decades, this has remained a major challenge for scientists because our body's central nervous system, which includes the brain and spinal cord, does not have any significant capacity to repair itself after injury or after the onset of a degenerative disease. We are going straight to the FDA to start the process of getting this new therapy approved for use in human patients, who currently have very few treatment options."

Stupp is Board of Trustees Professor of Materials Science and Engineering, Chemistry, Medicine and Biomedical Engineering at Northwestern, where he is founding director of the Simpson Querrey Institute for BioNanotechnology (SQI) and its affiliated research center, the Center for Regenerative Nanomedicine. He has appointments in the McCormick School of Engineering, Weinberg College of Arts and Sciences and Feinberg School of Medicine.

Life expectancy has not improved since the 1980s

According to the National Spinal Cord Injury Statistical Center, nearly 300,000 people are currently living with a spinal cord injury in the United States. Life for these patients can be extraordinarily difficult. Less than 3% of people with complete injury ever recover basic physical functions. Approximately 30% are re-hospitalized at least once during any given year after the initial injury, and average lifetime health care costs run into the millions of dollars per patient. Life expectancy for people with spinal cord injuries is significantly lower than for people without spinal cord injuries and has not improved since the 1980s.

"Currently, there are no therapeutics that trigger spinal cord regeneration," said Stupp, an expert in regenerative medicine. "I wanted to make a difference on the outcomes of spinal cord injury and to tackle this problem, given the tremendous impact it could have on the lives of patients. Also, new science to address spinal cord injury could have impact on strategies for neurodegenerative diseases and stroke."

'Dancing molecules' hit moving targets


The secret behind Stupp's new breakthrough therapeutic is tuning the motion of molecules, so they can find and properly engage constantly moving cellular receptors. Injected as a liquid, the therapy immediately gels into a complex network of nanofibers that mimic the extracellular matrix of the spinal cord. By matching the matrix's structure, mimicking the motion of biological molecules and incorporating signals for receptors, the synthetic materials are able to communicate with cells.

"Receptors in neurons and other cells constantly move around," Stupp said. "The key innovation in our research, which has never been done before, is to control the collective motion of more than 100,000 molecules within our nanofibers. By making the molecules move, 'dance' or even leap temporarily out of these structures, known as supramolecular polymers, they are able to connect more effectively with receptors."

Stupp and his team found that fine-tuning the molecules' motion within the nanofiber network to make them more agile resulted in greater therapeutic efficacy in paralyzed mice. They also confirmed that formulations of their therapy with enhanced molecular motion performed better during in vitro tests with human cells, indicating increased bioactivity and cellular signaling.

"Given that cells themselves and their receptors are in constant motion, you can imagine that molecules moving more rapidly would encounter these receptors more often," Stupp said. "If the molecules are sluggish and not as 'social,' they may never come into contact with the cells."

One injection, two signals

Once connected to the receptors, the moving molecules trigger two cascading signals, both of which are critical to spinal cord repair. One signal prompts the long tails of neurons in the spinal cord, called axons, to regenerate. Similar to electrical cables, axons send signals between the brain and the rest of the body. Severing or damaging axons can result in the loss of feeling in the body or even paralysis. Repairing axons, on the other hand, increases communication between the body and brain.

The second signal helps neurons survive after injury because it causes other cell types to proliferate, promoting the regrowth of lost blood vessels that feed neurons and critical cells for tissue repair. The therapy also induces myelin to rebuild around axons and reduces glial scarring, which acts as a physical barrier that prevents the spinal cord from healing.

"The signals used in the study mimic the natural proteins that are needed to induce the desired biological responses. However, proteins have extremely short half-lives and are expensive to produce," said Zaida Álvarez, the study's first author and former research assistant professor in Stupp's laboratory. "Our synthetic signals are short, modified peptides that -- when bonded together by the thousands -- will survive for weeks to deliver bioactivity. The end result is a therapy that is less expensive to produce and lasts much longer."

Read more at Science Daily

Using mechanical tools improves our language skills, study finds

Our ability to understand the syntax of complex sentences is one of the most difficult language skills to acquire. In 2019, research revealed a correlation between being particularly proficient in tool use and having good syntactic ability. A new study, by researchers from Inserm, CNRS, Université Claude Bernard Lyon 1 and Université Lumière Lyon 2 in collaboration with Karolinska Institutet in Sweden, has now shown that both skills rely on the same neurological resources, which are located in the same brain region. Furthermore, motor training using a tool improves our ability to understand the syntax of complex sentences and -- vice versa -- syntactic training improves our proficiency in using tools. These findings could be applied clinically to support the rehabilitation of patients who have lost some of their language skills.

The study was published in November 2021 in the journal Science.

Language has long been considered a very complex skill, mobilizing specific brain networks. However, in recent years, scientists have revisited this idea.

Research suggests that brain areas that control certain linguistic functions, such as the processing of word meanings, are also involved in controlling fine motor skills. However, brain imaging had not provided evidence of such links between language and the use of tools. Paleo-neurobiology has also shown that the brain regions associated with language had increased in our ancestors during periods of technological boom, when the use of tools became more widespread.

Considering these findings, research teams couldn't help wondering: what if the use of certain tools, which involves complex movements, relies on the same brain resources as those mobilized in complex linguistic functions such as syntax?

Syntax exercises and use of tongs

In 2019, Inserm researcher Claudio Brozzoli in collaboration with CNRS researcher Alice C. Roy and their team had shown that individuals who are particularly proficient in the use of tools were also generally better at handling the finer points of Swedish syntax.

In order to explore the subject in greater depth, the same team, in collaboration with CNRS researcher Véronique Boulenger, developed a series of experiments that relied on brain imaging techniques (functional magnetic resonance imaging, or fMRI) and behavioral measurements. The participants were asked to complete several tests consisting of motor training using 30 cm-long pliers and syntax exercises in French. This enabled the scientists to identify the brain networks specific to each task, but also common to both tasks.

They discovered for the first time that the handling of the tool and the syntax exercises produced brain activations in common areas, with the same spatial distribution, in a region called the "basal ganglia."

Cognitive training

Given that these two skill types use the same brain resources, is it possible to train one in order to improve the other? Does motor training with the mechanical tongs improve the understanding of complex phrases? In the second part of their study, the scientists looked at these issues and showed that this is indeed the case.

This time, the participants were asked to perform a syntactic comprehension task before and after 30 minutes of motor training with the pliers (see box for details of the experiment). With this, the researchers demonstrated that motor training with the tool leads to improved performance in syntactic comprehension exercises.

In addition, the findings show that the reverse is also true: training of language faculties, with exercises to understand sentences with complex structure, improved motor performance with the tool.

The scientists are now thinking about how best to apply these findings in the clinical setting. "We are currently devising protocols that could be put in place to support the rehabilitation and recovery of language skills of patients with relatively preserved motor faculties, such as young people with developmental language disorders. Beyond these innovative applications, these findings also give us an insight into how language has evolved throughout history. When our ancestors began to develop and use tools, this proficiency profoundly changed the brain and imposed cognitive demands that may have led to the emergence of certain functions such as syntax," concludes Brozzoli.

Motor training and syntax exercises


The motor training involved using the pliers to insert small pegs into holes that matched their shape but with differing orientations.

The syntax exercises which were completed before and after this training consisted of reading sentences with a simple syntax, such as "The scientist who admires the poet writes an article" or with a more complex syntax, such as "The scientist whom the poet admires writes an article." Then the participants had to decide whether statements such as "The poet admires the scientist" were true or false. Sentences with the French object relative pronoun "que" are more difficult to process and therefore performance was generally poorer.

Read more at Science Daily

Nov 11, 2021

Black hole found hiding in star cluster outside our galaxy

Using the European Southern Observatory's Very Large Telescope (ESO's VLT), astronomers have discovered a small black hole outside the Milky Way by looking at how it influences the motion of a star in its close vicinity. This is the first time this detection method has been used to reveal the presence of a black hole outside of our galaxy. The method could be key to unveiling hidden black holes in the Milky Way and nearby galaxies, and to help shed light on how these mysterious objects form and evolve.

The newly found black hole was spotted lurking in NGC 1850, a cluster of thousands of stars roughly 160,000 light-years away in the Large Magellanic Cloud, a neighbour galaxy of the Milky Way.

"Similar to Sherlock Holmes tracking down a criminal gang from their missteps, we are looking at every single star in this cluster with a magnifying glass in one hand trying to find some evidence for the presence of black holes but without seeing them directly," says Sara Saracino from the Astrophysics Research Institute of Liverpool John Moores University in the UK, who led the research now accepted for publication in Monthly Notices of the Royal Astronomical Society. "The result shown here represents just one of the wanted criminals, but when you have found one, you are well on your way to discovering many others, in different clusters."

This first "criminal" tracked down by the team turned out to be roughly 11 times as massive as our Sun. The smoking gun that put the astronomers on the trail of this black hole was its gravitational influence on the five-solar-mass star orbiting it.

Astronomers have previously spotted such small, "stellar-mass" black holes in other galaxies by picking up the X-ray glow emitted as they swallow matter, or from the gravitational waves generated as black holes collide with one another or with neutron stars.

However, most stellar-mass black holes don't give away their presence through X-rays or gravitational waves. "The vast majority can only be unveiled dynamically," says Stefan Dreizler, a team member based at the University of Göttingen in Germany. "When they form a system with a star, they will affect its motion in a subtle but detectable way, so we can find them with sophisticated instruments."
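
As a rough illustration of this dynamical method (with placeholder numbers, not the values measured for NGC 1850), the unseen companion's mass follows from the visible star's orbital period and radial-velocity wobble through the binary mass function.

    # Sketch of the dynamical method: the binary mass function
    #   f = P * K^3 / (2*pi*G) = (M_bh * sin i)^3 / (M_bh + M_star)^2
    # links the visible star's period P and radial-velocity semi-amplitude K to the
    # unseen companion's mass. The orbit values below are hypothetical placeholders.
    import math

    G = 6.674e-11        # m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kg

    def companion_mass_msun(P_days, K_kms, M_star_msun, incl_deg):
        """Solve the mass function for the companion mass (in solar masses) by bisection."""
        f = (P_days * 86400.0) * (K_kms * 1e3) ** 3 / (2.0 * math.pi * G) / M_SUN
        sin_i = math.sin(math.radians(incl_deg))
        g = lambda m: (m * sin_i) ** 3 / (m + M_star_msun) ** 2 - f
        lo, hi = 1e-3, 1e3                      # bracketing guesses in M_sun
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
        return 0.5 * (lo + hi)

    # Hypothetical 5-day orbit, 140 km/s wobble, 5 M_sun star, 40-degree inclination.
    print(f"Implied companion mass ~ {companion_mass_msun(5.0, 140.0, 5.0, 40.0):.1f} M_sun")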

This dynamical method used by Saracino and her team could allow astronomers to find many more black holes and help unlock their mysteries. "Every single detection we make will be important for our future understanding of stellar clusters and the black holes in them," says study co-author Mark Gieles from the University of Barcelona, Spain.

The detection in NGC 1850 marks the first time a black hole has been found in a young cluster of stars (the cluster is only around 100 million years old, a blink of an eye on astronomical scales). Using their dynamical method in similar star clusters could unveil even more young black holes and shed new light on how they evolve. By comparing them with larger, more mature black holes in older clusters, astronomers would be able to understand how these objects grow by feeding on stars or merging with other black holes. Furthermore, charting the demographics of black holes in star clusters improves our understanding of the origin of gravitational wave sources.

To carry out their search, the team used data collected over two years with the Multi Unit Spectroscopic Explorer (MUSE) mounted at ESO's VLT, located in the Chilean Atacama Desert. "MUSE allowed us to observe very crowded areas, like the innermost regions of stellar clusters, analysing the light of every single star in the vicinity. The net result is information about thousands of stars in one shot, at least 10 times more than with any other instrument," says co-author Sebastian Kamann, a long-time MUSE expert based at Liverpool's Astrophysics Research Institute. This allowed the team to spot the odd star out whose peculiar motion signalled the presence of the black hole. Data from the University of Warsaw's Optical Gravitational Lensing Experiment and from the NASA/ESA Hubble Space Telescope enabled them to measure the mass of the black hole and confirm their findings.

Read more at Science Daily

Near-Earth asteroid might be a lost fragment of the moon

A near-Earth asteroid named Kamo`oalewa could be a fragment of our moon, according to a new paper published in Communications Earth & Environment by a team of astronomers led by the University of Arizona.

Kamo`oalewa is a quasi-satellite -- a subcategory of near-Earth asteroids that orbit the sun but remain relatively close to Earth. Little is known about these objects because they are faint and difficult to observe. Kamo`oalewa was discovered by the PanSTARRS telescope in Hawaii in 2016, and the name -- found in a Hawaiian creation chant -- alludes to an offspring that travels on its own. The asteroid is roughly the size of a Ferris wheel -- between 150 and 190 feet in diameter -- and gets as close as about 9 million miles from Earth.

Due to its orbit, Kamo`oalewa can only be observed from Earth for a few weeks every April. Its relatively small size means that it can only be seen with one of the largest telescopes on Earth. Using the UArizona-managed Large Binocular Telescope on Mount Graham in southern Arizona, a team of astronomers led by planetary sciences graduate student Ben Sharkey found that Kamo`oalewa's pattern of reflected light, called a spectrum, matches lunar rocks from NASA's Apollo missions, suggesting it originated from the moon.

The team can't yet be sure how it may have broken loose. The reason, in part, is that there are no other known asteroids with lunar origins.

"I looked through every near-Earth asteroid spectrum we had access to, and nothing matched," said Sharkey, the paper's lead author.

The debate over Kamo`oalewa's origins between Sharkey and his adviser, UArizona associate professor Vishnu Reddy, led to another three years of hunting for a plausible explanation.

"We doubted ourselves to death," said Reddy, a co-author who started the project in 2016. After missing the chance to observe it in April 2020 due to a COVID-19 shutdown of the telescope, the team found the final piece of the puzzle in 2021.

"This spring, we got much needed follow-up observations and went, 'Wow it is real,'" Sharkey said. "It's easier to explain with the moon than other ideas."

Kamo`oalewa's orbit is another clue to its lunar origins. It is similar to Earth's orbit, but slightly tilted. It is also not typical of near-Earth asteroids, according to study co-author Renu Malhotra, a UArizona planetary sciences professor who led the orbit analysis portion of the study.

"It is very unlikely that a garden-variety near-Earth asteroid would spontaneously move into a quasi-satellite orbit like Kamo`oalewa's," she said. "It will not remain in this particular orbit for very long, only about 300 years in the future, and we estimate that it arrived in this orbit about 500 years ago," Malhotra said. Her lab is working on a paper to further investigate the asteroid's origins.

Kamo`oalewa is about 4 million times fainter than the faintest star the human eye can see in a dark sky.

"These challenging observations were enabled by the immense light-gathering power of the twin 8.4-meter telescopes of the Large Binocular Telescope," said study co-author Al Conrad, a staff scientist with the telescope.

Read more at Science Daily

Humans hastened the extinction of the woolly mammoth

New research shows that humans had a significant role in the extinction of woolly mammoths in Eurasia, which occurred thousands of years later than previously thought.

An international team of scientists, led by researchers from the University of Adelaide and the University of Copenhagen, has revealed a 20,000-year pathway to extinction for the woolly mammoth.

"Our research shows that humans were a crucial and chronic driver of population declines of woolly mammoths, having an essential role in the timing and location of their extinction," said lead author Associate Professor Damien Fordham from the University of Adelaide's Environment Institute.

"Using computer models, fossils and ancient DNA we have identified the very mechanisms and threats that were integral in the initial decline and later extinction of the woolly mammoth."

Signatures of past changes in the distribution and demography of woolly mammoths identified from fossils and ancient DNA show that people hastened the extinction of woolly mammoths by up to 4,000 years in some regions.

"We know that humans exploited woolly mammoths for meat, skins, bones and ivory. However, until now it has been difficult to disentangle the exact roles that climate warming and human hunting had on its extinction," said Associate Professor Fordham.

The study also shows that woolly mammoths are likely to have survived in the Arctic for thousands of years longer than previously thought, existing in small areas of habitat with suitable climatic conditions and low densities of humans.

"Our finding of long-term persistence in Eurasia independently confirms recently published environmental DNA evidence that shows that woolly mammoths were roaming around Siberia 5,000 years ago," said Associate Professor Jeremy Austin from the University of Adelaide's Australian Centre for Ancient DNA.

Associate Professor David Nogues-Bravo from the University of Copenhagen was a co-author of the study, which is published in the journal Ecology Letters.

"Our analyses strengthen and better resolve the case for human impacts as a driver of population declines and range collapses of megafauna in Eurasia during the late Pleistocene," he said.

"It also refutes a prevalent theory that climate change alone decimated woolly mammoth populations and that the role of humans was limited to hunters delivering the coup de grâce. And it shows that species extinctions are usually the result of complex interactions between threatening processes."

Read more at Science Daily

Domestic cats drive spread of Toxoplasma parasite to wildlife

New UBC research suggests free-roaming cats are likely to blame for the spread of the potentially deadly Toxoplasma gondii parasite to wildlife in densely populated urban areas.

The study -- the first to analyze so many wildlife species on a global scale -- also highlights how healthy ecosystems can protect against these types of pathogens.

The researchers, led by UBC faculty of forestry adjunct professor Dr. Amy Wilson, examined 45,079 cases of toxoplasmosis -- a disease that has been linked to nervous system disorders, cancers and other debilitating chronic conditions -- in wild mammals, using data from 202 global studies.

They found wildlife living near dense urban areas were more likely to be infected.

"As increasing human densities are associated with increased densities of domestic cats, our study suggests that free-roaming domestic cats -- whether pets or feral cats -- are the most likely cause of these infections," says Dr. Wilson.

"This finding is significant because by simply limiting free roaming of cats, we can reduce the impact of Toxoplasma on wildlife."

One infected cat can excrete as many as 500 million Toxoplasma oocysts (or eggs) in just two weeks. The oocysts can then live for years in soil and water with the potential to infect any bird or mammal, including humans. Toxoplasmosis is particularly dangerous for pregnant people.

If an animal is healthy, the parasite remains dormant and rarely causes direct harm. However, if an animal's immune system is compromised, the parasite can trigger illness and potentially death.

The study also highlights the way healthy forests, streams and other ecosystems can filter out dangerous pathogens like Toxoplasma, notes Dr. Wilson.

"We know that when wetlands are destroyed or streams are restricted, we are more likely to experience runoff that carries more pathogens into the waters where wild animals drink or live," she says. "And when their habitats are healthy, wildlife thrives and tends to be more disease-resistant."

Research results like these remind us that all ecosystems, forested or otherwise, are intrinsically linked.

Read more at Science Daily

Striking difference between neurons of humans and other mammals

Neurons communicate with each other via electrical impulses, which are produced by ion channels that control the flow of ions such as potassium and sodium. In a surprising new finding, MIT neuroscientists have shown that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

The researchers hypothesize that this reduction in channel density may have helped the human brain evolve to operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks.

"If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes," says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT's McGovern Institute for Brain Research, and the senior author of the study.

Harnett and his colleagues analyzed neurons from 10 different mammals in the most extensive electrophysiological study of its kind, and identified a "building plan" that holds true for every species they looked at -- except for humans. They found that as the size of neurons increases, the density of channels found in the neurons also increases.

However, human neurons proved to be a striking exception to this rule.

"Previous comparative studies established that the human brain is built like other mammalian brains, so we were surprised to find strong evidence that human neurons are special," says former MIT graduate student Lou Beaulieu-Laroche.

Beaulieu-Laroche is the lead author of the study, which appears today in Nature.

A building plan

Neurons in the mammalian brain can receive electrical signals from thousands of other cells, and that input determines whether or not they will fire an electrical impulse called an action potential. In 2018, Harnett and Beaulieu-Laroche discovered that human and rat neurons differ in some of their electrical properties, primarily in parts of the neuron called dendrites -- tree-like antennas that receive and process input from other cells.

One of the findings from that study was that human neurons had a lower density of ion channels than neurons in the rat brain. The researchers were surprised by this observation, as ion channel density was generally assumed to be constant across species. In their new study, Harnett and Beaulieu-Laroche decided to compare neurons from several different mammalian species to see if they could find any patterns that governed the expression of ion channels. They studied two types of voltage-gated potassium channels and the HCN channel, which conducts both potassium and sodium, in layer 5 pyramidal neurons, a type of excitatory neuron found in the brain's cortex.

They were able to obtain brain tissue from 10 mammalian species: Etruscan shrews (one of the smallest known mammals), gerbils, mice, rats, guinea pigs, ferrets, rabbits, marmosets, and macaques, as well as human tissue removed from patients with epilepsy during brain surgery. This variety allowed the researchers to cover a range of cortical thicknesses and neuron sizes across the mammalian kingdom.

The researchers found that in nearly every mammalian species they looked at, the density of ion channels increased as the size of the neurons went up. The one exception to this pattern was in human neurons, which had a much lower density of ion channels than expected.

The increase in channel density across species was surprising, Harnett says, because the more channels there are, the more energy is required to pump ions in and out of the cell. However, it started to make sense once the researchers began thinking about the number of channels in the overall volume of the cortex, he says.

In the tiny brain of the Etruscan shrew, which is packed with very small neurons, there are more neurons in a given volume of tissue than in the same volume of tissue from the rabbit brain, which has much larger neurons. But because the rabbit neurons have a higher density of ion channels, the number of channels in a given volume of tissue is the same in both species -- and in every other nonhuman species the researchers analyzed.
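
A toy geometric calculation makes that bookkeeping explicit; the spherical-neuron geometry and the numbers below are illustrative assumptions, not measurements from the study.

    # Illustrative scaling check (invented geometry, not data from the Nature paper):
    # treat neurons as spheres of diameter d. Neurons per unit volume scale as ~1/d^3
    # and membrane area per neuron as ~d^2, so channels per unit volume scale as
    # (channels per membrane area) / d. Keeping that product constant therefore
    # requires per-area channel density to rise in proportion to neuron size --
    # the trend seen across the nonhuman species, and the trend humans fall below.
    import math

    def channels_per_unit_volume(diam, per_area_density):
        membrane_area = math.pi * diam ** 2                    # sphere surface area
        neurons_per_volume = 1.0 / (math.pi / 6 * diam ** 3)   # ignores packing fraction
        return per_area_density * membrane_area * neurons_per_volume

    # Hypothetical small vs large neurons, with per-area density proportional to size.
    print(channels_per_unit_volume(diam=10.0, per_area_density=1.0))   # ~0.6
    print(channels_per_unit_volume(diam=20.0, per_area_density=2.0))   # ~0.6 (same)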

"This building plan is consistent across nine different mammalian species," Harnett says. "What it looks like the cortex is trying to do is keep the numbers of ion channels per unit volume the same across all the species. This means that for a given volume of cortex, the energetic cost is the same, at least for ion channels."

Energy efficiency

The human brain represents a striking deviation from this building plan, however. Instead of an increased density of ion channels, the researchers found that human neurons have a dramatically lower density of ion channels than expected for a given volume of brain tissue.

The researchers believe this lower density may have evolved as a way to expend less energy on pumping ions, which allows the brain to use that energy for something else, like creating more complicated synaptic connections between neurons or firing action potentials at a higher rate.

"We think that humans have evolved out of this building plan that was previously restricting the size of cortex, and they figured out a way to become more energetically efficient, so you spend less ATP per volume compared to other species," Harnett says.

He now hopes to study where that extra energy might be going, and whether there are specific gene mutations that help neurons of the human cortex achieve this high efficiency. The researchers are also interested in exploring whether primate species that are more closely related to humans show similar decreases in ion channel density.

Read more at Science Daily

Nov 10, 2021

Life cycle assessment of carbon capture

In our efforts to reduce greenhouse gas emissions, carbon capture is mentioned as a possible technology. CO2 can, for example, be captured from large industrial companies and from incineration plants.

However, like all other technologies, carbon capture leaves its own imprint on the outside world. DTU Environment has therefore conducted a life cycle assessment that systematically evaluates the impact of a possible carbon capture plant installed at the Amager Bakke incineration plant in Copenhagen -- not just the pilot plant currently installed by DTU, but a plant that would cover the entire Amager Bakke facility. The assessment has made it possible to examine the advantages and disadvantages of the carbon capture plant from the point of view of the climate impact.

Amager Bakke incineration plant burns, among other things, household waste that has not been sorted for recycling. The energy generated is used to produce electricity and heat. During incineration, CO2 is released from the waste, which includes food waste and textiles.

Energy production changing


The focus of the life cycle assessment has been to investigate the impact of the carbon capture plant on the energy generated by the incineration plant as well as other environmental impacts. The analysis looked at a number of waste composition scenarios.

"Carbon capture reduces CO2 emissions from the incineration plant. However, electricity production is reduced by approx. 50 per cent. For some incineration plants, this would have a considerable impact on their overall CO2 accounts, but at Amager Bakke, the steam from the carbon capture in fact increases the heat output utilized in the district heating system by 20 per cent. The overall net energy efficiency is thus not affected, but there is a shift from less electricity to more heat," explains Assistant Professor Valentina Bisinella, DTU Environment, who carried out the analysis.

Transport and storage may result in emissions


The other drawbacks for the climate highlighted by the analysis are primarily associated with the transport and storage of the captured CO2 in the subsoil. These activities may cause unintentional emissions of the greenhouse gas into the atmosphere, while sea transport also causes CO2 emissions.

"Even when factoring in the CO2 emissions that may occur both during transport and storage in the subsoil, carbon capture clearly results in net climate benefits," says Valentina Bisinella.

Read more at Science Daily

Fossil elephant cranium reveals key adaptations that enabled its species to thrive as grasslands spread across eastern Africa

A remarkably well-preserved fossil elephant cranium from Kenya is helping scientists understand how its species became the dominant elephant in eastern Africa several million years ago, a time when a cooler, drier climate allowed grasslands to spread and when habitually bipedal human ancestors first appeared on the landscape.

Dated to 4.5 million years ago and recovered from a site on the northeast side of Lake Turkana, it is the only well-preserved elephant cranium -- the portion of the skull that encloses the brain -- from that time. It is about 85% intact and holds a wealth of previously unavailable anatomical detail, according to University of Michigan paleontologist William Sanders.

Known by its museum number, KNM-ER 63642, the roughly 2-ton cranium belonged to a massive adult male of the species Loxodonta adaurora, an extinct evolutionary cousin of modern African elephants but not a direct ancestor.

KNM-ER 63642 is both impressively immense and unexpectedly modern in aspect, displaying adaptations that likely gave L. adaurora an edge when competing with other large mammals for grasses, according to Sanders, lead author of a study published online Oct. 21 in the journal Palaeovertebrata. Co-authors include Meave and Louise Leakey, who led the recovery effort and who are best known for the discovery of early hominid specimens and artifacts from Lake Turkana and elsewhere.

The L. adaurora cranium is striking because it is raised and compressed from front to back, suggesting a novel alignment of chewing muscles well-suited for the efficient shearing of grasses. In addition, the animal's molars are higher-crowned and had thicker coatings of cementum than other early elephants, making the teeth more resistant to the wear common in animals that feed on grasses close to the ground.

"The evident synchronization of morphological adaptations and feeding behavior revealed by this study of Loxodonta adaurora may explain why it became the dominant elephant species of the early Pliocene," said Sanders, who has studied fossil elephants and their relatives for nearly 40 years in Africa and Arabia.

Eastern Africa was home to seven or eight known species of early elephants at the time, along with horses, antelope, rhinos, pigs and hippos. Many of these animals were becoming grazers and competing for the available grasses.

"The adaptations of L. adaurora put it at a great advantage over more primitive elephants, in that it could probably use less energy to chew more food and live longer to have more offspring," said Sanders, associate research scientist at the U-M Museum of Paleontology and in the Department of Anthropology.

Recovery, conservation, dating, description and identification of the elephant cranium involved collaborative work between researchers and technicians from the Turkana Basin Institute, National Museums of Kenya, University of Michigan, Rutgers University, Smithsonian Institution and University of Utah.

KNM-ER 63642 was discovered in 2013 by a member of the Koobi Fora Research Project, who spotted a single molar visible at the surface.

Excavation revealed the presence of a nearly complete cranium. The tusks and the jawbone were missing, and no other remains from that individual were recovered. The adult male is estimated to have been 30 to 34 years old at death.

The fossilized cranium, together with the plaster jacket that protected it and some attached sediment, weighed about 2 tons. Based on a previous study of the skeleton from another L. adaurora adult male with a similar-sized skull, this individual likely weighed about 9 tons and probably stood about 12 feet at the shoulder -- bigger than average male elephants of modern times.

"In my opinion, this elephant skull is by far the most impressive specimen that we have in the Kenyan paleontological collection from Lake Turkana, both in its completeness and in its size," said paleontologist and study co-author Louise Leakey of the Koobi Fora Research Project. "When the teeth were seen on the surface, we had no idea that a complete cranium would be uncovered, and the excavation and recovery operation was both challenging and exciting."

KNM-ER 63642 is now permanently housed at the Turkana Basin Institute's facility in Ileret, Kenya. It is the only well-preserved elephant cranium from the interval beginning with the origin of elephants 8 million years ago and ending 3.5 million years ago, according to Sanders.

In addition to providing a trove of insights about the anatomy of early elephants, the newly described cranium also deepens our understanding of the connections between those creatures and our earliest human ancestors, the habitually bipedal australopithecines.

Loxodonta adaurora and other early elephants coexisted with two well-known australopithecine species in eastern Africa: Australopithecus anamensis, recovered by Meave Leakey in and near the Lake Turkana Basin, Kenya, and A. afarensis, found at sites in Hadar, Ethiopia, and Laetoli, Tanzania.

In the early Pliocene, as grassy woodlands and grasslands spread across eastern Africa, the australopithecines would have benefited from the presence of elephants. The animals' feeding activities helped keep grasses low to the ground, which would have allowed our upright ancestors to see over the vegetation and to watch for predators.

Elephants also disrupt closed woodlands and create open areas by knocking over trees, uprooting shrubs, and trampling paths through dense forest. And they spread nutrients and grass seed in their dung.

"The origins and early successes of our own biological family are tied to elephants," Sanders said. "Their presence on the landscape created more open conditions that favored the activities and adaptations of our first bipedal hominin ancestors.

"From this perspective, it is ironically tragic that current human activities of encroaching land use, poaching and human-driven climate change are now threatening the extinction of the mammal lineage that helped us to begin our own evolutionary journey."

Read more at Science Daily

Global river database documents 40 years of change

A first-ever database compiling movement of the largest rivers in the world over time could become a crucial tool for urban planners to better understand the deltas that are home to these rivers and a large portion of Earth's population.

The database, created by researchers at The University of Texas at Austin, uses publicly available remote sensing data to show how the river centerlines of the world's 48 most threatened deltas have moved during the past 40 years. The data can be used to predict how rivers will continue to move over time and help governments manage population density and future development.

"When we think about river management strategies, we have very little to no information about how rivers are moving over time," said Paola Passalacqua, an associate professor in the Cockrell School of Engineering's Department of Civil, Architectural and Environmental Engineering who leads the ongoing river analysis research.

The research was published today in Proceedings of the National Academy of Sciences.

The database includes three U.S. rivers, the Mississippi, the Colorado and the Rio Grande. Although some areas of these deltas are experiencing migration, overall, they are mostly stable, the data show. Aggressive containment strategies to keep those rivers in their place, especially near population centers, play a role in that, Passalacqua said.

Average migration rates for each river delta help identify which areas are stable and which are experiencing major river shifts. The researchers also published more extensive data online that includes information about how different segments of rivers have moved over time. It could help planners see what's going on in rural areas vs. urban areas when making decisions about how to manage the rivers and what to do with development.

The researchers leaned on techniques from a variety of disciplines to compile the data and published their methods online. Machine learning and image processing software helped them examine decades' worth of images. The researchers worked with Alan Bovik of the Department of Electrical and Computer Engineering and doctoral student Leo Isikdogan to develop that technology. They also borrowed from fluid mechanics, using tools designed to monitor water particles in turbulence experiments to instead track changes to river locations over the years.
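
The published pipeline is more involved, but a minimal sketch of the underlying idea -- measure how far a digitized centerline has moved between two survey dates and divide by the elapsed time -- might look like this; the coordinates are invented and the nearest-neighbour matching is a simplification.

    # Minimal sketch (not the authors' pipeline): estimate a mean migration rate
    # from two river centerlines extracted at different dates, using the average
    # nearest-vertex displacement divided by the elapsed time.
    import numpy as np

    def mean_migration_rate(old_xy, new_xy, years):
        """Average nearest-neighbour displacement (map units) per year."""
        old = np.asarray(old_xy, dtype=float)
        new = np.asarray(new_xy, dtype=float)
        dists = np.linalg.norm(new[:, None, :] - old[None, :, :], axis=2)  # (n_new, n_old)
        return dists.min(axis=1).mean() / years

    # Hypothetical centerline vertices in metres, ~40 years apart.
    line_1980 = [(0, 0), (100, 5), (200, 12), (300, 20)]
    line_2020 = [(0, 8), (100, 15), (200, 25), (300, 30)]
    print(f"~{mean_migration_rate(line_1980, line_2020, years=40):.2f} m per year")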

"We got the idea to use tools from fluid mechanics while attending a weekly department seminar where other researchers at the university share their work," said Tess Jarriel, a graduate research assistant in Passalacqua's lab and lead author of the paper. "It just goes to show how important it is to collaborate across disciplines."

Rivers with high sediment flux and frequent flooding naturally move more; that mobility is part of an important tradeoff that underpins Passalacqua's research.

By knowing more about these river deltas where millions of people live, planners can have a better idea of how best to balance these tradeoffs. Passalacqua, as well as researchers in her lab, have recently published research about these tradeoffs between the need for river freedom and humanity's desire for stability.

Passalacqua has been working on this topic for more than eight years. The team and collaborators are in the process of publishing another paper as part of this work that expands beyond the centerlines of rivers and will also look at riverbanks. That additional information will give an even clearer picture about river movement over time, with more nuance, because sides of the river can move in different directions and at different speeds.

Read more at Science Daily

Scientists invent ‘smart’ window material that blocks rays without blocking views

An international research team led by scientists from Nanyang Technological University, Singapore (NTU Singapore) has invented a 'smart' window material that controls heat transmission without blocking views, which could help cut the energy required to cool and heat buildings.

Developed by NTU researchers, the new energy-saving material for electrochromic (EC) windows that operates at the flick of a switch is designed to block infrared radiation -- which is the major component of sunlight that emits heat.

The new material has a specifically designed nanostructure and comprises advanced materials like titanium dioxide (TiO2), tungsten trioxide (WO3), neodymium-niobium (Nd-Nb), and tin (IV) oxide (SnO2). The composite material is intended to be coated onto glass window panels, and when activated by electricity, users would be able to 'switch on and off' the infrared radiation transmission through the window.

The invention, which was featured on the front cover of the journal ACS Omega, could block up to 70 per cent of infrared radiation, according to experimental simulations, without compromising views through the window, since it allows up to 90 per cent of visible light to pass through.

The material is also about 30 per cent more effective in regulating heat than commercially available electrochromic windows and is cheaper to make due to its durability.

An improvement over current electrochromic (EC) windows

Electrochromic windows are a common feature in 'green' buildings today. They work by becoming tinted when in use, reducing the light entering the room.

Commercially available electrochromic windows usually have a layer of tungsten trioxide (WO3) coated on one side of the glass panel, and the other, without. When the window is switched on, an electric current moves lithium ions to the side containing WO3, and the window darkens or turns opaque. Once switched off, the ions migrate away from the coated glass, and the window becomes clear again.

However, current electrochromic windows are only effective in blocking visible light, not the infrared radiation, which means heat continues to pass through the window, warming up the room.

Another drawback of the current technology is its durability, as the performance of the electrochromic component tends to degrade in three to five years. In lab tests, NTU's electrochromic technology was put through rigorous on-off cycles to evaluate its durability. Results showed that the properties of the window retained excellent stability (it blocked more than 65% of infrared radiation), demonstrating its superior performance, feasibility and cost-saving potential for long-term use in sustainable buildings.

Lead author of the electrochromic window study, Associate Professor Alfred Tok of the NTU School of Materials Science and Engineering, said, "By incorporating the specially designed nanostructure, we enabled the material to react in a 'selective' manner, blocking near infrared radiation while still allowing most of the visible light to pass through whenever our electrochromic window is switched on. The choice of advanced materials also helped improve the performance, stability and durability of the smart window."

The new electrochromic technology may help conserve energy that would be used for the heating and cooling of buildings and could contribute to the future design of sustainable green buildings, say the research team.

The study reflects the university's commitment to address humanity's grand challenges on sustainability as part of the NTU 2025 strategic plan, which seeks to accelerate the translation of research discoveries into innovations that mitigate human impact on the environment.

Next generation smart window: Controlling both infrared radiation and conducted heat

Seeking to improve the performance of their smart window technology, the NTU team, in separate work from that reported in the journal, created a switch system that helps to control conducted heat, which is the heat transferred from the external environment.

The patented NTU switch comprises magnetic carbon-based particles and thin films that are good conductors of heat. When the switch is turned off, conducted heat cannot transfer through the window. When switched on, the heat will be allowed to pass through the glass window.

Read more at Science Daily

Nov 9, 2021

Diet restricted size of hunter-gatherer societies

Short growing seasons limited the possible size of hunter-gatherer societies by forcing people to rely on meat, according to a recent study by a team of international researchers including McGill University professor Eric Galbraith.

After looking at population size for the roughly 300 hunter-gatherer societies which existed until quite recently, the researchers found that many of these groups were much smaller than might have been expected from the local ecosystem productivity. In regions with short growing seasons, hunter-gatherer groups had smaller populations per square kilometre than groups who depended on abundant plant foods throughout the year.

Need for meat limited population size

"Basically, if people had to live through long dry or cold seasons when plant food was scarce, in order to survive they had to depend on hunting a very limited number of animals," explains Galbraith, a professor in McGill's Department of Earth and Planetary Sciences and at the ICTA-UAB (Institut de Ciència i Tecnologia Ambientals of the Autonomous University of Barcelona), and a senior author on the paper published recently in the journal Nature Ecology & Evolution.

"This led to a seasonal bottleneck in the amount of food available, which then set the overall limit on the population size, no matter how much food there was during the plentiful times."

The team developed a mathematical model that simulates daily human foraging activities (gathering and hunting) and the resultant carbon (energy) flows between vegetation, animals, and hunter-gatherers in a realistic global environment.

"We were struck by the fact that -- despite a long list of unknowns -- a very strong result emerged from the model equations," says Galbraith. "Wherever growing seasons were short, hunter gatherers required meat to make up a high percentage of their diets. And -- just as in the modern world -- it took much more land to produce the same amount of meat as plant-based food."

Read more at Science Daily

Why did glacial cycles intensify a million years ago?

Something big happened to the planet about a million years ago. There was a major shift in the response of Earth's climate system to variations in our orbit around the Sun. The shift is called the Mid-Pleistocene Transition. Before the MPT, cycles between glacial (colder) and interglacial (warmer) periods happened every 41,000 years. After the MPT, glacial periods became more intense -- intense enough to form ice sheets in the Northern Hemisphere that lasted 100,000 years. This gave Earth the regular ice-age cycles that have persisted into human time.

Scientists have long puzzled over what triggered this. A likely reason would be a phenomenon called Milankovitch cycles -- cyclic changes in Earth's orbit and orientation toward the Sun that affect the amount of energy that Earth absorbs. This, scientists agree, has been the main natural driver of alternating warm and cold periods for millions of years. However, research has shown that the Milankovitch cycles did not undergo any kind of big change a million years ago, so something else likely was at work.

Coinciding with the MPT, a large system of ocean currents that helps move heat around the globe experienced a severe weakening. That system, which sends heat north through the Atlantic Ocean, is the Atlantic Meridional Overturning Circulation (AMOC). Was this slowdown related to the shift in glacial periods? If so, how and why? These have been open questions. A new paper published today in the journal Proceedings of the National Academy of Sciences proposes an answer.

The researchers analyzed cores of deep-sea sediments taken in the south and north Atlantic, where ancient deep waters passed by and left chemical clues. "What we found is the North Atlantic, right before this crash, was acting very differently than the rest of the basin," said lead author Maayan Yehudai, who did the work as a PhD student at Columbia University's Lamont-Doherty Earth Observatory.

Prior to that oceanic circulation crash, ice sheets in the Northern Hemisphere began to stick to their bedrock more effectively. This caused glaciers to grow thicker than before, which in turn led to greater global cooling and disrupted the Atlantic heat conveyor belt. The result was both stronger ice ages and the shift in the ice-age cycle, says Yehudai.

The research supports a long-debated hypothesis that the gradual removal, during previous ice ages, of accumulated slippery continental soils allowed ice sheets to cling more tightly to the older, harder crystalline bedrock underneath and to grow thicker and more stable. The findings indicate that this growth and stabilization just before the weakening of the AMOC shaped the global climate.

"Our research addresses one of the biggest questions about the largest climate change we had since the onset of the ice ages," said Yehudai. "It was one of the most substantial climate transitions and we don't fully understand it. Our discovery pins the origin of this change to the Northern Hemisphere and the ice sheets that evolved there as driving this shift towards the climate patterns we observe today. This is a very important step toward understanding what caused it and where it came from. It highlights the importance of the North Atlantic region and ocean circulation for present and future climate change."

Read more at Science Daily

'Cold bone': Researchers discover first dinosaur species that lived on Greenland 214 million years ago

The two-legged dinosaur Issi saaneq lived about 214 million years ago in what is now Greenland. It was a medium-sized, long-necked herbivore and a predecessor of the sauropods, the largest land animals ever to live. It was discovered by an international team of researchers from Portugal, Denmark and Germany, including the Martin Luther University Halle-Wittenberg (MLU). The name of the new dinosaur pays tribute to Greenland's Inuit language and means "cold bone." The team reports on its discovery in the journal Diversity.

The initial remains of the dinosaur -- two well-preserved skulls -- were unearthed in 1994 during an excavation in East Greenland by palaeontologists from Harvard University. One of the specimens was originally thought to be from a Plateosaurus, a well-known long-necked dinosaur that lived in Germany, France and Switzerland during the Triassic Period. To date, only a few finds from East Greenland have been prepared and thoroughly documented. "It is exciting to discover a close relative of the well-known Plateosaurus, hundreds of which have already been found here in Germany," says co-author Dr Oliver Wings from MLU.

The team performed a micro-CT scan of the bones, which enabled them to create digital 3D models of the internal structures and the bones still covered by sediment. "The anatomy of the two skulls is unique in many respects, for example in the shape and proportions of the bones. These specimens certainly belong to a new species," says lead author Victor Beccari, who carried out the analyses at NOVA University Lisbon.

The plant-eating dinosaur Issi saaneq lived around 214 million years ago during the Late Triassic Period. It was at this time that the supercontinent Pangaea broke apart and the Atlantic Ocean began forming. "At the time, the Earth was experiencing climate changes that enabled the first plant-eating dinosaurs to reach Europe and beyond," explains Professor Lars Clemmensen from the University of Copenhagen.

The two skulls of the new species come from a juvenile and an almost adult individual. Apart from the size, the differences in bone structure are minor and relate only to proportions. The new Greenlandic dinosaur differs from all other sauropodomorphs discovered so far; however, it does have similarities with dinosaurs found in Brazil, such as the Macrocollum and Unaysaurus, which are almost 15 million years older. Together with the Plateosaurus from Germany, they form the group of plateosaurids: relatively graceful bipeds that reached lengths of 3 to 10 metres.

The new findings are the first evidence of a distinct Greenlandic dinosaur species, which not only adds to the diverse range of dinosaurs from the Late Triassic (235-201 million years ago) but also allows us to better understand the evolutionary pathways and timeline of the iconic group of sauropods that inhabited the Earth for nearly 150 million years.

Read more at Science Daily

Giant leap taken in fighting antibiotic resistance

Scientists may have made a giant leap in fighting the biggest threat to human health by using supercomputing to keep pace with the impressive ability of diseases to evolve.

A new study by an international team, co-led by Dr Gerhard Koenig from the University of Portsmouth, tackled the problem of antibiotic resistance by redesigning existing antibiotics to overcome bacterial resistance mechanisms.

About 700,000 people are estimated to die every year because of antibiotic resistant bacteria, and that number is expected to rise to millions.

Without effective antibiotics, life expectancy is predicted to drop by 20 years.

The race has been on for many years to develop new antibiotics to fight disease faster than a disease can evolve.

Computers have been used in drug design for decades, but this is the first study to use a multi-pronged computer-guided strategy to make a new antibiotic from an existing one which bacteria have outwitted.

The research is published in PNAS.

Dr Koenig, a computational chemist and first author on the paper, said: "Antibiotics are one of the pillars of modern medicine and antibiotic resistance is one of the biggest threats to human health. There's an urgent need to develop new ways of fighting ever-evolving bacteria.

"Developing a new antibiotic usually involves finding a new target that is essential for the survival of a wide range of different bacteria. This is extremely difficult, and only very few new classes of antibiotics have been developed in recent times.

"We have taken a simpler approach by starting from an existing antibiotic, which is ineffective against new resistant strains, and modifying it so it's now able to overcome resistance mechanisms."

The team has shown that their best drug candidate, which has yet to undergo clinical trials, is up to 56 times more active against the tested bacterial strains than two antibiotics on the World Health Organisation's (WHO) list of essential medicines, erythromycin and clarithromycin.

Dr Koenig said: "Not only is our best candidate more effective against the tested targets, but it also shows activity against the three top ranked bacteria from the WHO priority list where the tested existing antibiotics don't work.

"It's only a matter of time until bacteria develop counterstrategies against our counterstrategies and become resistant to the new antibiotic, so we will have to keep on studying bacterial resistance mechanisms and develop new derivatives accordingly."

The hope of this new work lies in showing that the resistance mechanisms of bacteria can be addressed in a systematic way, allowing science to continually fight back with a computational evolution of new antibiotics.

Dr Koenig said: "Our computers are becoming faster with every year. So, there is some hope that we will be able to turn the tide.

"If computers can beat the world champion in chess, I don't see why they should not also be able to defeat bacteria."

The international team, including Nobel Prize laureate Ada Yonath, carried out the research at the Max-Planck-Institut für Kohlenforschung, the Weizmann Institute, and the universities of Duisburg-Essen, Bochum and Queensland.

They developed a strategy to simulate many aspects of a redesigned antibiotic at the same time, including how soluble it is, how effective it is at entering the bacteria, and how efficiently it blocks their protein production.
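
As a purely conceptual sketch of what ranking candidates on several simulated properties might look like -- the candidate names, scores and weights below are invented for illustration and are not the team's actual pipeline -- the properties can be combined into a single composite score:

# Purely illustrative sketch (not the published workflow): rank hypothetical
# antibiotic derivatives by a weighted combination of simulated properties.
candidates = {
    # name: (solubility, bacterial uptake, ribosome binding) -- made-up scores
    "derivative_A": (0.7, 0.4, 0.9),
    "derivative_B": (0.5, 0.8, 0.8),
    "derivative_C": (0.9, 0.3, 0.2),
}

def composite_score(props, weights=(1.0, 1.0, 1.0)):
    # Weighted sum of the normalised property scores
    return sum(w * p for w, p in zip(weights, props))

ranked = sorted(candidates, key=lambda name: composite_score(candidates[name]),
                reverse=True)
print(ranked)  # best-scoring hypothetical candidate first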

The computational work outlined in the research was done in a matter of weeks on one of the top supercomputers in Europe, but it took the international team several years to verify experimentally that their approach was indeed correct.

Read more at Science Daily

Anxiety effectively treated with exercise

Both moderate and strenuous exercise alleviate symptoms of anxiety, even when the disorder is chronic, a study led by researchers at the University of Gothenburg shows.

The study, now published in the Journal of Affective Disorders, is based on 286 patients with anxiety syndrome, recruited from primary care services in Gothenburg and the northern part of Halland County. Half of the patients had lived with anxiety for at least ten years. Their average age was 39 years, and 70 percent were women.

Participants were assigned, by drawing of lots, to 12 weeks of group exercise sessions at either moderate or strenuous intensity. The results show that their anxiety symptoms were significantly alleviated, even when the anxiety was a chronic condition, compared with a control group that received advice on physical activity in line with public health recommendations.

Most individuals in the treatment groups went from a baseline level of moderate to high anxiety to a low anxiety level after the 12-week program. For those who exercised at relatively low intensity, the chance of improvement in anxiety symptoms rose by a factor of 3.62. The corresponding factor for those who exercised at higher intensity was 4.88. Participants did not know what training or counseling people outside their own group were receiving.

"There was a significant intensity trend for improvement -- that is, the more intensely they exercised, the more their anxiety symptoms improved," states Malin Henriksson, doctoral student at Sahlgrenska Academy at the University of Gothenburg, specialist in general medicine in the Halland Region, and the study's first author.

Importance of strenuous exercise

Previous studies of physical exercise in depression have shown clear symptom improvements. However, a clear picture of how people with anxiety are affected by exercise has been lacking up to now. The present study is described as one of the largest to date.

Both treatment groups had 60-minute training sessions three times a week under a physical therapist's guidance. The sessions included both cardio (aerobic) and strength training. A warmup was followed by circuit training at 12 stations for 45 minutes, and sessions ended with a cooldown and stretching.

Members of the group that exercised at a moderate level aimed to reach some 60 percent of their maximum heart rate -- a degree of exertion rated as light or moderate. In the group that trained more intensively, the aim was to reach 75 percent of maximum heart rate, a degree of exertion perceived as high.

The levels were regularly validated using the Borg scale, an established rating scale for perceived physical exertion, and confirmed with heart rate monitors.
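
As a rough illustration of what those intensity targets mean in practice -- using the common "220 minus age" rule of thumb for maximum heart rate, which is an assumption for illustration rather than a detail reported in the study:

# Rough illustration of the two intensity targets, using the common
# "220 minus age" estimate of maximum heart rate (an assumption for
# illustration, not a detail from the study itself).
def target_heart_rate(age_years, fraction_of_max):
    max_hr = 220 - age_years              # widely used rule-of-thumb estimate
    return round(fraction_of_max * max_hr)

age = 39                                  # the participants' average age
print(target_heart_rate(age, 0.60))       # moderate group: about 109 beats/min
print(target_heart_rate(age, 0.75))       # strenuous group: about 136 beats/min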

New, simple treatments needed

Today's standard treatments for anxiety are cognitive behavioral therapy (CBT) and psychotropic drugs. However, these drugs commonly have side effects, and patients with anxiety disorders frequently do not respond to medical treatment. Long waiting times for CBT can also worsen the prognosis.

The present study was led by Maria Åberg, associate professor at the University of Gothenburg's Sahlgrenska Academy, specialist in general medicine in Region Västra Götaland's primary healthcare organization, and corresponding author.

Read more at Science Daily

Nov 8, 2021

Thinnest X-ray detector ever created

Scientists in Australia have used tin mono-sulfide (SnS) nanosheets to create the thinnest X-ray detector ever made, potentially enabling real-time imaging of cellular biology.

X-ray detectors are tools that allow energy transported by radiation to be recognised visually or electronically, like medical imaging or Geiger counters.

SnS has already shown great promise as a material for use in photovoltaics, field effect transistors and catalysis.

Now, members of the ARC Centre of Excellence in Exciton Science, based at Monash University and RMIT University, have shown that SnS nanosheets are also excellent candidates for use as soft X-ray detectors.

Their research, published in the journal Advanced Functional Materials, indicates that SnS nanosheets possess high photon absorption coefficients, allowing them to be used in making ultrathin soft X-ray detectors with high sensitivity and a rapid response time.

These materials were found to be even more sensitive than another emerging candidate, metal halide perovskites; they boast a faster response time than established detectors and can be tuned for sensitivity across the soft X-ray region.

The SnS X-ray detectors created by the team are less than 10 nanometres thick. To put things in perspective, a sheet of paper is about 100,000 nanometres thick, and your fingernails grow about one nanometre every second. Previously, the thinnest X-ray detectors created were between 20 and 50 nanometres.

Considerable work remains to explore the full potential of the SnS X-ray detectors, but Professor Jacek Jasieniak of Monash's Department of Materials Science and Engineering, the senior author of the paper, believes it's possible this could one day lead to real-time imaging of cellular processes.

"The SnS nanosheets respond very quickly, within milliseconds," he said.

"You can scan something and get an image almost instantaneously. The sensing time dictates the time resolution. In principle, given the high sensitivity and high time resolution, you could be able to see things in real time.

"You might be able to use this to see cells as they interact. You're not just producing a static image, you could see proteins and cells evolving and moving using X-rays."

Why are such sensitive and responsive detectors important? X-rays can be broadly divided into two types. 'Hard' X-rays are the kind used by hospitals to scan the body for broken bones and other problems.

Perhaps less well known but just as important are 'soft' X-rays, which have a lower photon energy and can be used to study wet proteins and living cells, a crucial component of cellular biology.

Some of these measurements take place in the 'water window', a region of the electromagnetic spectrum in which water is transparent to soft X-rays.

Soft X-ray detection can be conducted using a synchrotron, a particle accelerator like the Large Hadron Collider in Switzerland, but access to this type of hugely expensive infrastructure is difficult to secure.

Recent advances in non-synchrotron soft X-ray laser sources may allow lower-cost, portable detection systems to be designed, providing an accessible alternative to synchrotrons for researchers around the world.

But for this approach to work, we will need soft X-ray detector materials that are highly sensitive to low energy X-rays, provide excellent spatial resolution, and are cost effective.

Some existing soft X-ray detectors use an indirect mechanism, in which ionizing radiation is converted into visible photons. This approach allows multiple energy ranges and frame rates to be studied, but such detectors are difficult to prepare and offer limited resolution.

Direct detection methods are easier to prepare and offer better resolution, because the detector material can be thinner than in indirect approaches.

Good candidate materials need a high X-ray absorption coefficient, which depends on the atomic number of the absorbing atoms, the incident X-ray energy, and the density and atomic mass of the material.

High atomic mass and low energy X-rays favour high absorption, and soft X-rays are more strongly absorbed in thin materials compared to hard X-rays.
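
One commonly quoted approximation for the photoelectric contribution, which dominates absorption at soft X-ray energies, captures these dependences (the exponents are approximate and vary somewhat with element and energy range):

\mu = \left(\frac{\mu}{\rho}\right)\rho, \qquad \frac{\mu}{\rho} \;\propto\; \frac{Z^{4}}{A\,E^{3}}

where \mu is the linear absorption coefficient, \mu/\rho the mass attenuation coefficient, Z the atomic number, A the atomic mass, E the photon energy and \rho the density. The steep 1/E^3 dependence is one reason soft X-rays can be absorbed efficiently even by ultrathin films.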

Nanocrystal films and ferromagnetic flakes have shown promise as certain types of soft X-ray detectors, but they are not well equipped to handle the water region.

That's where the SnS nanosheets come in.

One of the lead authors, Dr Nasir Mahmood of RMIT University, said the sensitivity and efficiency of SnS nanosheets depends greatly on their thickness and lateral dimensions, which cannot be controlled using traditional fabrication methods.

Using a liquid metal-based exfoliation method allowed the researchers to produce high quality, large area sheets with controlled thickness, which can efficiently detect soft X-ray photons in the water region. Their sensitivity can be further enhanced by a process of stacking the ultrathin layers.

They represent major improvements in sensitivity and response time compared to existing direct soft X-ray detectors.

The researchers hope their findings will open new avenues for the development of next-generation, highly sensitive X-ray detectors based on ultrathin materials.

Read more at Science Daily

Ecosystems worldwide are disrupted by lack of large wild herbivores – except in Africa

Biological research has repeatedly demonstrated that the relationship between the producer and the consumer is governed by a scaling law. An international research team has now looked into whether this law of nature can be reproduced in the relationship between the production of plants in an area and the number of large herbivores that graze on them. The study reveals that Africa is the only continent where the scaling law holds true.

June 2021 saw the start of the United Nations Decade on Ecosystem Restoration. A total of 115 countries have committed themselves to restoring up to a billion hectares of nature worldwide.

According to a group of researchers from Aarhus University and the University of Sussex, one of the biggest challenges will be restoring the historical and prehistoric grazing of large mammals. What level of restoration should we aim for? How many large herbivores will we need? And how are we going to co-exist with these large animals?

A baseline in Africa

The researchers examined the current low densities of large herbivores in a scientific article in the Journal of Applied Ecology. In the article, they calculated a baseline for large animals based on the ratio between producer and consumer, i.e. plants and herbivores, in nature reserves in Africa.

They stress that this relationship between producers and consumers applies across ecosystems and biomes, implying a close correlation between the biomass produced and the biomass of the consumers that depend on it.

However, after investigating the density of large herbivores in nature reserves throughout the world, the researchers were only able to find such a close correlation on one continent: Africa. On the other continents, they found strong indications of impoverished fauna, even in protected natural areas.

"African ecosystems have species-rich mammal fauna and a large biomass of big herbivores that are significantly linked to plant productivity," says Camilla Fløjgaard from the Department of Ecoscience at Aarhus University and head of the research group.

"But we can't find this pattern on other continents, and in general the large herbivore biomass is much lower than we would expect considering the level of productivity," she adds.

Far fewer animals in European nature


The survey includes data from protected areas, reserves and several rewilding projects in Europe. The researchers found significant differences, as the biomass for large herbivores in natural areas was less than one-tenth of the biomass observed in fenced rewilding areas with restored herbivore fauna.

"It's thought-provoking that, even in many protected areas, the number of large herbivores is only a fraction of what the areas can actually sustain," says Camilla Fløjgaard.

Another and lower baseline


In the article, the researchers argue that large herbivores are still being displaced, hunted and eradicated, and that there is a widespread perception, even among game managers, that there are plenty of herbivores in the wild, perhaps even too many. This perception is not supported by the new study.

On the contrary, efforts to decrease populations of large herbivores can reflect a shifting baseline.

"Even though large herbivores have been wandering the landscape for millions of years, it seems that we have become accustomed to landscapes almost completely devoid of them, and we have come to accept this as the natural state of things," says Camilla Fløjgaard.

Large animals are 'troublesome'


In the EU alone, there is a plan to allocate 30% of marine and land areas to the restoration of natural areas and ecosystems.

Read more at Science Daily

Study of 18,000+ US and Australian older people reveals moderate drinking protective against heart disease, more so than for teetotalers

A landmark study by Monash University researchers has found that moderate drinking of alcohol is associated with a reduced risk of cardiovascular disease and lower mortality from all causes, when compared to zero alcohol consumption. The study of more than 18,000 people in the US and Australia over the age of 70 is the first to look at the heart health implications of alcohol intake in initially healthy older adults.

Excess alcohol consumption is a leading contributor to the global burden of disease and a major risk factor for mortality. Yet, prior studies suggested that moderate alcohol consumption may be associated with a lower risk of cardiovascular disease (CVD) events.

This Monash University study, published in the European Journal of Preventive Cardiology is the first to investigate the risk of CVD events and mortality, from all causes, associated with alcohol consumption in initially healthy, older individuals.

Populations around the world are ageing. The Monash University-led ASPirin in Reducing Events in the Elderly (ASPREE) clinical trial was a large-scale, long-term, multi-centre, bi-national study of aspirin and health in older adults, with the purpose of discovering ways to maintain health, quality of life and independence as we age.

This study, led by Dr Johannes Neumann, from the Monash University School of Public Health and Preventive Medicine, analysed data from almost 18,000 ASPREE participants -- Australians and Americans mostly aged 70 years and older.

Participants in the study did not have prior CVD events, diagnosed dementia or independence-limiting physical disability. CVD events included coronary heart disease death, non-fatal myocardial infarction, fatal and non-fatal stroke, non-coronary cardiac or vascular death, and hospitalisation for heart failure. Information on alcohol consumption (days of drinking per week and average standard drinks per day) was assessed by self-reported questionnaire at baseline. The study excluded former alcohol consumers who may have stopped alcohol consumption for various health reasons, possibly introducing bias from reverse causality.

Based on this information, alcohol intake was calculated in grams per week -- a standard drink was counted as 14 g for US participants and 10 g for Australian participants.
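
For readers converting between the two definitions, the arithmetic is simple; a minimal sketch (the band boundaries are the study's grams-per-week categories):

# Convert the study's grams-per-week bands into standard drinks using the
# two national definitions given above (14 g per US drink, 10 g per
# Australian drink).
GRAMS_PER_DRINK = {"US": 14.0, "AUS": 10.0}

def drinks_per_week(grams_per_week, country):
    return grams_per_week / GRAMS_PER_DRINK[country]

for grams in (50, 100, 150):
    print(grams, "g/week =",
          round(drinks_per_week(grams, "AUS"), 1), "AUS drinks or",
          round(drinks_per_week(grams, "US"), 1), "US drinks")
# 50 g/week  = 5.0 AUS or ~3.6 US drinks
# 100 g/week = 10.0 AUS or ~7.1 US drinks
# 150 g/week = 15.0 AUS or ~10.7 US drinks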

In the study, alcohol consumption was categorised as 0 g/week (never drinks), 1-50, 51-100, 101-150, and >150 g/week. For Australians, that corresponds to up to 5; 5-10; 10-15; and over 15 standard drinks per week. For Americans, it is approximately up to 3.5; 3.5-7; 7-10; and over 10 standard drinks per week. Of the almost 18,000 eligible participants, with a median age of 74 years:
 

  • 57% were female
  • 43.3% were current or former smokers
  • mean BMI was 28.1 kg/m2


The participants reported that
 

  • 18.6% reported never drinking alcohol
  • 37.3% reported 1-50 g/week
  • 19.7% reported 51-100 g/week
  • 15.6% reported 101-150 g/week
  • 8.9% reported >150 g/week


The participants were followed for an average of 4.7 years and the study found that there was a reduced risk of CVD events for individuals consuming alcohol of 51-100, 101-150, and >150 g/week, compared to never consuming alcohol, regardless of gender.

Consumption of 51-100 g/week was also associated with a reduced risk of all-cause mortality.

Lead author, Dr Neumann, says the findings need to be interpreted with caution, as study participants were all initially healthy without prior CVD or other severe diseases, and may have been more physically and socially active than the wider ageing population.

Furthermore, prior evidence showed that excess alcohol consumption increases the risk of other chronic diseases, such as cancer, liver disease or pancreatitis.

Read more at Science Daily

COVID-19: The older you are, the more antibodies you have, study finds

With the emergence of SARS-CoV-2 variants worldwide, the pandemic's spread is accelerating. A research team led by Joelle Pelletier and Jean-François Masson, both professors in Université de Montréal's Department of Chemistry, wanted to find out whether natural infection or vaccination led to more protective antibodies being generated.

In their study published today in Scientific Reports, they observe that those who received the Pfizer BioNTech or AstraZeneca vaccine had antibody levels that were significantly higher than those of infected individuals. These antibodies were also effective against the Delta variant, which wasn't present in Quebec when the samples were collected in 2020.

Masson, a biomedical instruments specialist, and Pelletier, a protein chemistry expert, were interested in an understudied group: people who have been infected by SARS-CoV-2 but were not hospitalized as a result of the infection.

32 non-hospitalized COVID-19 positive Canadian adults

Consequently, 32 non-hospitalized COVID-19 positive Canadian adults were recruited by the Centre hospitalier de l'Université Laval 14 to 21 days after being diagnosed through PCR testing. This was in 2020, before the Beta, Delta and Gamma variants emerged.

"Everyone who had been infected produced antibodies, but older people produced more than adults under 50 years of age," said Masson. "In addition, antibodies were still present in their bloodstream 16 weeks after their diagnosis."

Antibodies produced after an infection by the original, "native" strain of the virus also reacted to SARS-CoV-2 variants that emerged in subsequent waves, namely Beta (South Africa), Delta (India) and Gamma (Brazil), but to a lesser extent: a reduction of 30 to 50 per cent.

A surprising reaction to the Delta variant

"But the result that surprised us the most was that antibodies produced by naturally infected individuals 50 and older provided a greater degree of protection than adults below 50, " said Pelletier.

"This was determined by measuring the antibodies' capacity to inhibit the interaction of the Delta variant's spike protein with the ACE-2 receptor in human cells, which is how we become infected," he added. "We didn't observe the same phenomenon with the other variants."

When someone who has had a mild case of COVID is vaccinated, the antibody level in their blood doubles compared to an unvaccinated person who has been infected by the virus. Their antibodies are also better able to prevent spike-ACE-2 interaction.

"But what's even more interesting," said Masson, "is that we have samples from an individual younger than 49 whose infection didn't produce antibodies inhibiting spike-ACE-2 interaction, unlike vaccination. This suggests that vaccination increases protection against the Delta variant among people previously infected by the native strain."

Both scientists believe more research should be conducted to determine the best combination for maintaining the most effective level of antibodies reactive to all variants of the virus.

Read more at Science Daily

Nov 7, 2021

Jet from giant galaxy M87: Computer modelling explains black hole observations

The galaxy Messier 87 (M87) is located 55 million light years away from Earth in the Virgo constellation. It is a giant galaxy with 12,000 globular clusters, making the Milky Way's 200 globular clusters appear modest in comparison. A black hole of six and a half billion solar masses is harboured at the centre of M87. It is the first black hole for which an image exists, created in 2019 by the international research collaboration Event Horizon Telescope.

This black hole (M87*) shoots a jet of plasma at near the speed of light, a so-called relativistic jet, on a scale of 6,000 light years. The tremendous energy needed to power this jet probably originates from the gravitational pull of the black hole, but how a jet like this comes about and what keeps it stable across the enormous distance is not yet fully understood.

The black hole M87* attracts matter that rotates around it in a disc, in ever smaller orbits, until it is swallowed by the black hole. The jet is launched from the central region of this accretion disc, and theoretical physicists at Goethe University, together with scientists from Europe, the USA and China, have now modelled this region in great detail.

They used highly sophisticated three-dimensional supercomputer simulations, each consuming a staggering one million CPU hours, which had to simultaneously solve the equations of general relativity by Albert Einstein, the equations of electromagnetism by James Clerk Maxwell, and the equations of fluid dynamics by Leonhard Euler.

The result was a model in which the calculated values for temperature, matter density and magnetic fields correspond remarkably well with those deduced from astronomical observations. On this basis, the scientists were able to track the complex motion of photons in the curved spacetime of the innermost region of the jet and translate it into radio images. They were then able to compare these computer-modelled images with the observations made using numerous radio telescopes and satellites over the past three decades.

Dr Alejandro Cruz-Osorio, lead author of the study, comments: "Our theoretical model of the electromagnetic emission and of the jet morphology of M87 matches surprisingly well with the observations in the radio, optical and infrared spectra. This tells us that the supermassive black hole M87* is probably highly rotating and that the plasma is strongly magnetized in the jet, accelerating particles out to scales of thousands of light years."

Read more at Science Daily

Low-gravity simulator design offers new avenues for space research and mission training

As humanity continues its exploration of the universe, the low-gravity environment of space presents unusual challenges for scientists and engineers.

Researchers at the FAMU-FSU College of Engineering and the Florida State University-headquartered National High Magnetic Field Laboratory have developed a new tool to help meet that challenge -- a novel design for a low-gravity simulator that promises to break new ground for future space research and habitation.

Their new design for a magnetic levitation-based low-gravity simulator can create an area of low gravity with a volume about 1,000 times larger than existing simulators of the same type. The work was published in the journal npj Microgravity.

"Low gravity has a profound effect on the behaviors of biological systems and also affects many physical processes from the dynamics and heat transfer of fluids to the growth and self-organization of materials," said Wei Guo, associate professor in mechanical engineering and lead scientist on the study. "However, spaceflight experiments are often limited by the high cost and the small payload size and mass. Therefore, developing ground-based low-gravity simulators is important."

Existing simulators, such as drop towers and parabolic aircraft, use free fall to generate near-zero gravity. But these facilities typically have short low-gravity durations, i.e., several seconds to a few minutes, which makes them unsuitable for experiments that require long observation times. On the other hand, magnetic levitation-based simulators (MLS) can offer unique advantages, including low cost, easy accessibility, adjustable gravity, and practically unlimited operation time.

But a conventional MLS can only create a small volume of low gravity. When a typical simulator mimics an environment of about 1 percent of Earth's gravity, the functional volume is only a few microliters -- too small for practical space research and applications.

In order to increase the functional volume of an MLS, the researchers needed a magnet that could generate a uniform levitation force capable of balancing the gravitational force throughout a large volume. They found that they could achieve this by integrating a superconducting magnet with a gradient Maxwell coil -- a coil configuration first proposed in the 1800s by physicist James Clerk Maxwell.
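
The physical requirement behind such a design is the standard diamagnetic-levitation balance: the magnetic body force per unit volume, (|chi|/mu0)·B·(dB/dz), must equal the weight density rho·g. A minimal sketch for water, using textbook values that are assumptions for illustration rather than numbers from the paper:

import math

# Field-gradient product needed to levitate a weakly diamagnetic material:
#   (|chi| / mu0) * B * dB/dz = rho * g   =>   B * dB/dz = mu0 * rho * g / |chi|
MU0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A
G = 9.81                        # gravitational acceleration, m/s^2

# Textbook values for water, assumed here for illustration
RHO_WATER = 1000.0              # density, kg/m^3
CHI_WATER = 9.0e-6              # magnitude of volume magnetic susceptibility (SI)

b_db_dz = MU0 * RHO_WATER * G / CHI_WATER
print(round(b_db_dz), "T^2/m")  # roughly 1.4e3 T^2/m

Field-gradient products of this size are only practical with strong superconducting magnets, which is consistent with the design choice described above.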

Read more at Science Daily