Dec 17, 2020

Saturn's moon Enceladus could support life in its subsurface ocean

 Using data from NASA's Cassini spacecraft, scientists at Southwest Research Institute (SwRI) modeled chemical processes in the subsurface ocean of Saturn's moon Enceladus. The studies indicate the possibility that a varied metabolic menu could support a potentially diverse microbial community in the liquid water ocean beneath the moon's icy facade.

Prior to its deorbit in September of 2017, Cassini sampled the plume of ice grains and water vapor erupting from cracks on the icy surface of Enceladus, discovering molecular hydrogen, a potential food source for microbes. A new paper published in the planetary science journal Icarus explores other potential energy sources.

"The detection of molecular hydrogen (H2) in the plume indicated that there is free energy available in the ocean of Enceladus," said lead author Christine Ray, who works part time at SwRI as she pursues a Ph.D. in physics from The University of Texas at San Antonio. "On Earth, aerobic, or oxygen-breathing, creatures consume energy in organic matter such as glucose and oxygen to create carbon dioxide and water. Anaerobic microbes can metabolize hydrogen to create methane. All life can be distilled to similar chemical reactions associated with a disequilibrium between oxidant and reductant compounds."

This disequilibrium creates a potential energy gradient, where redox chemistry transfers electrons between chemical species, most often with one species undergoing oxidation while another species undergoes reduction. These processes are vital to many basic functions of life, including photosynthesis and respiration. For example, hydrogen is a source of chemical energy supporting anaerobic microbes that live in the Earth's oceans near hydrothermal vents. At Earth's ocean floor, hydrothermal vents emit hot, energy-rich, mineral-laden fluids that allow unique ecosystems teeming with unusual creatures to thrive. Previous research found growing evidence of hydrothermal vents and chemical disequilibrium on Enceladus, which hints at habitable conditions in its subsurface ocean.
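
For concreteness, the two metabolisms Ray contrasts correspond to familiar textbook reactions (standard chemistry, not equations quoted from the paper):

    C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O    (aerobic respiration)
    CO2 + 4 H2 -> CH4 + 2 H2O          (methanogenesis)

Either reaction can power a metabolism only if it releases free energy, which is precisely what the disequilibrium between oxidants and reductants described above provides.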

"We wondered if other types of metabolic pathways could also provide sources of energy in Enceladus' ocean," Ray said. "Because that would require a different set of oxidants that we have not yet detected in the plume of Enceladus, we performed chemical modeling to determine if the conditions in the ocean and the rocky core could support these chemical processes."

For example, the authors looked at how ionizing radiation from space could create the oxidants O2 and H2O2, and how abiotic geochemistry in the ocean and rocky core could contribute to chemical disequilibria that might support metabolic processes. The team considered whether these oxidants could accumulate over time if reductants are not present in appreciable amounts. They also considered how aqueous reductants or seafloor minerals could convert these oxidants into sulfates and iron oxides.

"We compared our free energy estimates to ecosystems on Earth and determined that, overall, our values for both aerobic and anaerobic metabolisms meet or exceed minimum requirements," Ray said. "These results indicate that oxidant production and oxidation chemistry could contribute to supporting possible life and a metabolically diverse microbial community on Enceladus."

"Now that we've identified potential food sources for microbes, the next question to ask is 'what is the nature of the complex organics that are coming out of the ocean?'" said SwRI Program Director Dr. Hunter Waite, a coauthor of the new paper, referencing an online Nature paper authored by Postberg et al. in 2018. "This new paper is another step in understanding how a small moon can sustain life in ways that completely exceed our expectations!"

The paper's findings also have great significance for the next generation of exploration.

Read more at Science Daily

Dark storm on Neptune reverses direction, possibly shedding a fragment

 Astronomers using NASA's Hubble Space Telescope watched a mysterious dark vortex on Neptune abruptly steer away from a likely death on the giant blue planet.

The storm, which is wider than the Atlantic Ocean, was born in the planet's northern hemisphere and discovered by Hubble in 2018. Observations a year later showed that it began drifting southward toward the equator, where such storms are expected to vanish from sight. To the surprise of observers, Hubble spotted the vortex change direction by August 2020, doubling back to the north. Though Hubble has tracked similar dark spots over the past 30 years, this unpredictable atmospheric behavior is something new to see.

Equally as puzzling, the storm was not alone. Hubble spotted another, smaller dark spot in January this year that temporarily appeared near its larger cousin. It may have been a piece of the giant vortex that broke off, drifted away, and then disappeared in subsequent observations.

"We are excited about these observations because this smaller dark fragment is potentially part of the dark spot's disruption process," said Michael H. Wong of the University of California at Berkeley. "This is a process that's never been observed. We have seen some other dark spots fading away, and they're gone, but we've never seen anything disrupt, even though it's predicted in computer simulations."

The large storm, which is 4,600 miles across, is the fourth dark spot Hubble has observed on Neptune since 1993. Two other dark storms were discovered by the Voyager 2 spacecraft in 1989 as it flew by the distant planet, but they had disappeared before Hubble could observe them. Since then, only Hubble has had the sharpness and sensitivity in visible light to track these elusive features, which have sequentially appeared and then faded away over a duration of about two years each. Hubble uncovered this latest storm in September 2018.

Wicked Weather

Neptune's dark vortices are high-pressure systems that can form at mid-latitudes and may then migrate toward the equator. They start out stable, held together by the Coriolis effect of the planet's rotation, which causes northern hemisphere storms to rotate clockwise. (These storms are unlike hurricanes on Earth, which rotate counterclockwise because they are low-pressure systems.) However, as a storm drifts toward the equator, the Coriolis effect weakens and the storm disintegrates. In computer simulations by several different teams, these storms follow a more-or-less straight path to the equator, until there is no Coriolis effect to hold them together. Unlike the simulations, the latest giant storm didn't migrate into the equatorial "kill zone."
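
The weakening Coriolis effect is straightforward to quantify. Below is a minimal sketch (assuming Neptune's roughly 16.1-hour rotation period; the formula is standard rotating-fluid physics, not something taken from the Hubble study):

```python
# The Coriolis parameter f = 2 * Omega * sin(latitude) vanishes at the
# equator, removing the effect that holds a dark vortex together.
import math

OMEGA = 2 * math.pi / (16.11 * 3600)  # Neptune's rotation rate in rad/s

def coriolis_parameter(latitude_deg: float) -> float:
    return 2 * OMEGA * math.sin(math.radians(latitude_deg))

for lat in (45, 30, 15, 5, 0):
    print(f"latitude {lat:>2} deg: f = {coriolis_parameter(lat):.2e} per second")
# f falls smoothly to zero at the equator -- the "kill zone" where
# simulated storms disintegrate.
```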

"It was really exciting to see this one act like it's supposed to act and then all of a sudden it just stops and swings back," Wong said. "That was surprising."

Dark Spot Jr.

The Hubble observations also revealed that the dark vortex's puzzling path reversal occurred at the same time that a new spot, informally dubbed "dark spot jr.," appeared. The newest spot was slightly smaller than its cousin, measuring about 3,900 miles across. It was near the side of the main dark spot that faces the equator -- the location where some simulations indicate a disruption would occur.

However, the timing of the smaller spot's emergence was unusual. "When I first saw the small spot, I thought the bigger one was being disrupted," Wong said. "I didn't think another vortex was forming because the small one is farther towards the equator. So it's within this unstable region. But we can't prove the two are related. It remains a complete mystery.

"It was also in January that the dark vortex stopped its motion and started moving northward again," Wong added. "Maybe by shedding that fragment, that was enough to stop it from moving towards the equator."

The researchers are continuing to analyze more data to determine whether remnants of dark spot jr. persisted through the rest of 2020.

Dark Storms Still Puzzling

It's still a mystery how these storms form, but this latest giant dark vortex is the best studied so far. The storm's dark appearance may be due to an elevated dark cloud layer, and it could be telling astronomers about the storm's vertical structure.

Another unusual feature of the dark spot is the absence of bright companion clouds around it, which were present in Hubble images taken when the vortex was discovered in 2018. Apparently, the clouds disappeared when the vortex halted its southward journey. The bright clouds form when the flow of air is perturbed and diverted upward over the vortex, likely causing gases to freeze into methane ice crystals. The lack of clouds could be revealing information on how spots evolve, researchers say.

Weather Eye on the Outer Planets

Hubble snapped many of the images of the dark spots as part of the Outer Planet Atmospheres Legacy (OPAL) program, a long-term Hubble project, led by Amy Simon of NASA's Goddard Space Flight Center in Greenbelt, Maryland, that annually captures global maps of our solar system's outer planets when they are closest to Earth in their orbits.

OPAL's key goals are to study long-term seasonal changes, as well as capture comparatively transitory events, such as the appearance of dark spots on Neptune or potentially Uranus. These dark storms may be so fleeting that in the past some of them may have appeared and faded during multi-year gaps in Hubble's observations of Neptune. The OPAL program ensures that astronomers won't miss another one.

Read more at Science Daily

Genes could be key to new COVID-19 treatments, study finds

 Potential treatments for Covid-19 have been identified after the discovery of five genes associated with the most severe form of the disease.

Genetic evidence is second only to clinical trials as a way to tell which treatments will be effective against a disease. Existing drugs that target the actions of these genes are therefore strong candidates for repurposing to treat Covid-19 in clinical trials, experts say.

Genes involved in two molecular processes -- antiviral immunity and lung inflammation -- were pinpointed. The breakthrough will help doctors understand how Covid-19 damages lungs at a molecular level.

Researchers from the University of Edinburgh made the discovery by studying the DNA of 2,700 patients in 208 intensive care units (ICUs) in the UK.

Researchers from the GenOMICC consortium -- a global collaboration to study genetics in critical illness -- compared the genetic information of Covid-19 patients in ICU with samples provided by healthy volunteers from other studies, such as UK Biobank, Generation Scotland and 100,000 Genomes.

The team found key differences in five genes of the ICU patients compared with samples provided by healthy volunteers. The genes -- IFNAR2, TYK2, OAS1, DPP9 and CCR2 -- partially explain why some people become desperately sick with Covid-19, while others are not affected.

Having highlighted the genes, the team were then able to predict the effect of drug treatments on patients, because some genetic variants respond in a similar way to particular drugs.

For example, they showed that a reduction in the activity of the TYK2 gene protects against Covid-19. A class of anti-inflammatory drugs called JAK inhibitors, which includes the drug baricitinib, produces this effect.

They also discovered that a boost in the activity of the gene IFNAR2 is likely to create protection, because it is likely to mimic the effect of treatment with interferon -- proteins released by cells of the immune system to defend against viruses. However, experts caution that to be effective, patients might need the treatment early in the course of the disease.

Based on the findings published in Nature, the researchers say that clinical trials should focus on drugs that target these specific antiviral and anti-inflammatory pathways.

Dr Kenneth Baillie, the project's chief investigator and Academic Consultant in Critical Care Medicine and Senior Research Fellow at University of Edinburgh's Roslin Institute, said: "This is a stunning realisation of the promise of human genetics to help understand critical illness. Just like in sepsis and influenza, in Covid-19, damage to the lungs is caused by our own immune system, rather than the virus itself. Our genetic results provide a roadmap through the complexity of immune signals, showing the route to key drug targets.

"Our results immediately highlight which drugs should be at the top of the list for clinical testing. We can only test a few drugs at a time, so making the right choices will save thousands of lives.

"This work is only possible because of the generous contribution of the patients themselves and their families, research teams in NHS hospitals across the country, and the generous funding we've received from the public and organisations."

GenOMICC (Genetics of Susceptibility and Mortality in Critical Care) started in 2015 as an open, global consortium of intensive care clinicians dedicated to understanding genetic factors that influence outcomes in intensive care from diseases such as SARS, influenza and sepsis. Throughout 2020 it has been focused on Covid-19 research in partnership with Genomics England.

This study is one of a number of COVID-19 studies that have been given urgent public health research status by the Chief Medical Officer and Deputy Chief Medical Officer for England.

Read more at Science Daily

New type of atomic clock keeps time even more precisely

Atomic clocks are the most precise timekeepers in the world. These exquisite instruments use lasers to measure the vibrations of atoms, which oscillate at a constant frequency, like many microscopic pendulums swinging in sync. The best atomic clocks in the world keep time with such precision that, if they had been running since the beginning of the universe, they would only be off by about half a second today.

Still, they could be even more precise. If atomic clocks could more accurately measure atomic vibrations, they would be sensitive enough to detect phenomena such as dark matter and gravitational waves. With better atomic clocks, scientists could also start to answer some mind-bending questions, such as what effect gravity might have on the passage of time and whether time itself changes as the universe ages.

Now a new kind of atomic clock designed by MIT physicists may enable scientists to explore such questions and possibly reveal new physics.

The researchers report in the journal Nature that they have built an atomic clock that measures not a cloud of randomly oscillating atoms, as state-of-the-art designs measure now, but instead atoms that have been quantumly entangled. The atoms are correlated in a way that is impossible according to the laws of classical physics, and that allows the scientists to measure the atoms' vibrations more accurately.

The new setup can achieve the same precision four times faster than clocks without entanglement.

"Entanglement-enhanced optical atomic clocks will have the potential to reach a better precision in one second than current state-of-the-art optical clocks," says lead author Edwin Pedrozo-Peñafiel, a postdoc in MIT's Research Laboratory of Electronics.

If state-of-the-art atomic clocks were adapted to measure entangled atoms the way the MIT team's setup does, their timing would improve such that, over the entire age of the universe, the clocks would be less than 100 milliseconds off.

The paper's other co-authors from MIT are Simone Colombo, Chi Shu, Albert Adiyatullin, Zeyang Li, Enrique Mendez, Boris Braverman, Akio Kawasaki, Daisuke Akamatsu, Yanhong Xiao, and Vladan Vuletic, the Lester Wolfe Professor of Physics.

Time limit

Since humans began tracking the passage of time, they have done so using periodic phenomena, such as the motion of the sun across the sky. Today, vibrations in atoms are the most stable periodic events that scientists can observe. Furthermore, one cesium atom will oscillate at exactly the same frequency as another cesium atom.

To keep perfect time, clocks would ideally track the oscillations of a single atom. But at that scale, an atom is so small that it behaves according to the mysterious rules of quantum mechanics: when measured, it behaves like a flipped coin, and only the average over many flips gives the correct probabilities. This limitation is what physicists refer to as the Standard Quantum Limit.

"When you increase the number of atoms, the average given by all these atoms goes toward something that gives the correct value," says Colombo.

This is why today's atomic clocks are designed to measure a gas composed of thousands of the same type of atom, in order to get an estimate of their average oscillations. A typical atomic clock does this by first using a system of lasers to corral a gas of ultracooled atoms into a trap formed by a laser. A second, very stable laser, with a frequency close to that of the atoms' vibrations, is sent to probe the atomic oscillation and thereby keep track of time.

And yet, the Standard Quantum Limit is still at work, meaning there is still some uncertainty, even among thousands of atoms, regarding their exact individual frequencies. This is where Vuletic and his group have shown that quantum entanglement may help. In general, quantum entanglement describes a nonclassical physical state, in which atoms in a group show correlated measurement results, even though each individual atom behaves like the random toss of a coin.

The team reasoned that if atoms are entangled, their individual oscillations would tighten up around a common frequency, with less deviation than if they were not entangled. The average oscillations that an atomic clock would measure, therefore, would have a precision beyond the Standard Quantum Limit.

Entangled clocks

In their new atomic clock, Vuletic and his colleagues entangle around 350 atoms of ytterbium, which oscillate at the same very high frequency as visible light, meaning any one atom vibrates 100,000 times more often per second than a cesium atom. If ytterbium's oscillations can be tracked precisely, scientists can use the atoms to distinguish ever smaller intervals of time.

The group used standard techniques to cool the atoms and trap them in an optical cavity formed by two mirrors. They then sent a laser through the optical cavity, where it ping-ponged between the mirrors, interacting with the atoms thousands of times.

"It's like the light serves as a communication link between atoms," Shu explains. "The first atom that sees this light will modify the light slightly, and that light also modifies the second atom, and the third atom, and through many cycles, the atoms collectively know each other and start behaving similarly."

In this way, the researchers quantumly entangle the atoms, and then use another laser, similar to existing atomic clocks, to measure their average frequency. When the team ran a similar experiment without entangling atoms, they found that the atomic clock with entangled atoms reached a desired precision four times faster.

"You can always make the clock more accurate by measuring longer," Vuletic says. "The question is, how long do you need to reach a certain precision. Many phenomena need to be measured on fast timescales."

He says if today's state-of-the-art atomic clocks can be adapted to measure quantumly entangled atoms, they would not only keep better time, but they could help decipher signals in the universe such as dark matter and gravitational waves, and start to answer some age-old questions.

Read more at Science Daily

Dec 16, 2020

Device mimics life's first steps in outer space

 A device developed by scientists at the CY Cergy Paris University and Paris Observatory promises insight into how the building blocks of life form in outer space.

In an article published in Review of Scientific Instruments, by AIP Publishing, the scientists detail how VENUS -- an acronym of the French phrase "Vers de Nouvelles Syntheses," which means "toward new syntheses" -- mimics how molecules come together in the freezing darkness of interstellar space.

"We try to simulate how complex organic molecules are formed in such a harsh environment," said Emanuele Congiu, one of the authors and an astrophysicist at the observatory. "Observatories can see a lot of molecules in space. What we do not understand yet, or fully, is how they formed in this harsh environment."

VENUS has a chamber designed to replicate the strong vacuum of space, while holding a frigid temperature that is set lower than minus 400 degrees Fahrenheit (10 kelvins). It uses up to five beams to deliver atoms or molecules onto a tiny sliver of ice without disturbing that environment.

That process, Congiu said, replicates how molecules form on the ice that sits atop tiny dust particles found inside interstellar clouds. VENUS is the first device to do the replication with more than three beams, which lets researchers simulate more complicated interactions.

Over the past 50 years, nearly 200 different molecular species have been discovered in the star-forming regions of space. Some of them, the so-called "prebiotic species," are believed by scientists to be involved in the processes that lead to the early forms of life.

A key use of the VENUS device will be working in concert with scientists who discover molecular reactions in space but need a fuller understanding of what they have observed. The paper specifically mentions NASA's James Webb Space Telescope, whose launch is scheduled for 2021. The largest and most powerful space telescope ever launched, it is expected to dramatically expand scientists' knowledge of the universe.

"What we can do in the lab in one day takes thousands of years in space," Congiu said. "Our work in the lab can complement the wealth of data that comes from the space observatories. Otherwise, astronomers would not be able to interpret all of their observations. Researchers who make observations can ask us to simulate a certain reaction, to see if what they think they see is real or not."

From Science Daily

A pair of lonely planet-like objects born like stars

 Star-forming processes sometimes create mysterious astronomical objects called brown dwarfs, which are smaller and colder than stars, and can have masses and temperatures down to those of exoplanets in the most extreme cases. Just like stars, brown dwarfs often wander alone through space, but can also be seen in binary systems, where two brown dwarfs orbit one another and travel together in the galaxy.

Researchers led by Clémence Fontanive from the Center for Space and Habitability (CSH) and the NCCR PlanetS discovered a curious starless binary system of brown dwarfs. The system CFHTWIR-Oph 98 (or Oph 98 for short) consists of the two very low-mass objects Oph 98 A and Oph 98 B. It is located 450 light years away from Earth in the stellar association Ophiuchus. The researchers were surprised by the fact that Oph 98 A and B are orbiting each other at a strikingly large distance, about 5 times the distance between Pluto and the Sun, which corresponds to 200 times the distance between the Earth and the Sun. The study has just been published in The Astrophysical Journal Letters.

Extremely low masses and a very large separation

The pair is a rare example of two objects similar in many aspects to extra-solar giant planets, orbiting around each other with no parent star. The more massive component, Oph 98 A, is a young brown dwarf with a mass of 15 times that of Jupiter, which is almost exactly on the boundary separating brown dwarfs from planets. Its companion, Oph 98 B, is only 8 times heavier than Jupiter.

Components of binary systems are tied by an invisible link called gravitational binding energy, and this bond gets stronger when objects are more massive or closer to one another. With extremely low masses and a very large separation, Oph 98 has the weakest binding energy of any binary system known to date.
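
To get a feel for how weak that bond is, here is a back-of-the-envelope estimate using the figures quoted above and the standard circular-orbit expression |E| = G * m1 * m2 / (2a); the Sun-Jupiter comparison is purely illustrative and not drawn from the study:

```python
G, M_JUP, AU = 6.674e-11, 1.898e27, 1.496e11  # SI units

def binding_energy(m1_mjup: float, m2_mjup: float, sep_au: float) -> float:
    # gravitational binding energy of a circular binary, in joules
    return G * (m1_mjup * M_JUP) * (m2_mjup * M_JUP) / (2 * sep_au * AU)

oph98 = binding_energy(15, 8, 200)          # the Oph 98 pair
sun_jupiter = binding_energy(1047, 1, 5.2)  # Sun (~1047 M_Jup) and Jupiter
print(f"Oph 98:      ~{oph98:.1e} J")
print(f"Sun-Jupiter: ~{sun_jupiter:.1e} J ({sun_jupiter / oph98:.0f}x stronger)")
```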

Discovery thanks to data from Hubble

Clémence Fontanive and her colleagues discovered the companion to Oph 98 A using images from the Hubble Space Telescope. Fontanive says: "Low-mass brown dwarfs are very cold and emit very little light, only through infrared thermal radiation. This heat glow is extremely faint and red, and brown dwarfs are hence only visible in infrared light." Furthermore, the stellar association in which the binary is located, Ophiuchus, is embedded in a dense, dusty cloud which scatters visible light. "Infrared observations are the only way to see through this dust," explains the lead researcher. "Detecting a system like Oph 98 also requires a camera with a very high resolution, as the angle separating Oph 98 A and B is a thousand times smaller than the size of the moon in the sky," she adds. The Hubble Space Telescope is among the few telescopes capable of observing objects as faint as these brown dwarfs, and able to resolve such tight angles.
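
That angular scale follows directly from the numbers quoted above. A quick small-angle check, using the article's round figures:

```python
LY_PER_PC = 3.2616  # light-years per parsec

sep_au = 200                     # projected separation, in astronomical units
dist_pc = 450 / LY_PER_PC        # 450 light-years expressed in parsecs
theta_arcsec = sep_au / dist_pc  # small-angle rule: arcsec = AU / pc
moon_arcsec = 0.5 * 3600         # the Moon spans roughly half a degree

print(f"separation on the sky: ~{theta_arcsec:.1f} arcsec")
print(f"Moon / binary ratio:   ~{moon_arcsec / theta_arcsec:.0f}x")
# ~1.4 arcsec -- indeed about a thousand times smaller than the Moon.
```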

Brown dwarfs are cold enough for water vapor to form in their atmospheres, creating prominent features in the infrared that are commonly used to identify brown dwarfs. However, these water signatures cannot be easily detected from the surface of the Earth. Located above the atmosphere in the vacuum of space, Hubble makes it possible to probe for water vapor in astronomical objects. Fontanive explains: "Both objects looked very red and showed clear signs of water molecules. This immediately confirmed that the faint source we saw next to Oph 98 A was very likely to also be a cold brown dwarf, rather than a random star that happened to be aligned with the brown dwarf in the sky."

The team also found images in which the binary was visible, collected 14 years ago with the Canada-France-Hawaii Telescope (CFHT) in Hawaii. "We observed the system again this summer from another Hawaiian observatory, the United Kingdom Infra-Red Telescope. Using these data, we were able to confirm that Oph 98 A and B are moving together across the sky over time, relative to other stars located behind them, which is evidence that they are bound to each other in a binary pair," explains Fontanive.

An atypical result of star formation

The Oph 98 binary system formed only 3 million years ago in the nearby Ophiuchus stellar nursery, making it a newborn on astronomical timescales. The age of the system is much shorter than the typical time needed to build planets. Brown dwarfs like Oph 98 A are formed by the same mechanisms as stars. Despite Oph 98 B being the right size for a planet, the host Oph 98 A is too small to have a sufficiently large reservoir of material to build a planet that big. "This tells us that Oph 98 B, like its host, must have formed through the same mechanisms that produce stars and shows that the processes that create binary stars operate in scaled-down versions all the way down to these planetary masses," comments Clémence Fontanive.

With the discovery of two planet-like worlds -- already uncommon products of star formation -- bound to each other in such an extreme configuration, "we are really witnessing an incredibly rare output of stellar formation processes," as Fontanive describes.

Bernese space exploration: With the world's elite since the first moon landing

When Buzz Aldrin, the second man on the moon, stepped out of the lunar module on July 21, 1969, his first task was to set up the Bernese Solar Wind Composition experiment (SWC), also known as the "solar wind sail," by planting it in the ground of the moon, even before the American flag. This experiment, planned by Prof. Dr. Johannes Geiss and his team from the Physics Institute of the University of Bern, who also analysed its results, was the first great highlight in the history of Bernese space exploration.

Read more at Science Daily

The DNA regions in our brain that help make us human

 With only 1% difference, the human and chimpanzee protein-coding genomes are remarkably similar. Understanding the biological features that make us human is part of a fascinating and intensely debated line of research. Researchers at the SIB Swiss Institute of Bioinformatics and the University of Lausanne have developed a new approach to pinpoint, for the first time, adaptive human-specific changes in the way genes are regulated in the brain. These results open new perspectives in the study of human evolution, developmental biology and neurosciences. The paper is published in Science Advances.

Gene expression, not gene sequence

To explain what sets humans apart from their ape relatives, researchers have long hypothesized that it is not so much the DNA sequence, but rather the regulation of the genes (i.e. when, where and how strongly a gene is expressed), that plays the key role. However, precisely pinpointing the regulatory elements which act as 'gene dimmers' and are positively selected is a challenging task that has thus far defeated researchers.

Marc Robinson-Rechavi, Group Leader at SIB and study co-author says: "To be able to answer such tantalizing questions, one has to be able to identify the parts in the genome that have been under so-called 'positive' selection. The answer is of great interest in addressing evolutionary questions, but also, ultimately, could help biomedical research as it offers a mechanistic view of how genes function."

A high proportion of the regulatory elements in the human brain have been positively selected

Researchers at SIB and the University of Lausanne have developed a new method which has enabled them to identify a large set of gene regulatory regions in the brain, selected throughout human evolution. Jialin Liu, Postdoctoral researcher and lead author of the study explains: "We show for the first time that the human brain has experienced a particularly high level of positive selection, as compared to the stomach or heart for instance. This is exciting, because we now have a way to identify genomic regions that might have contributed to the evolution of our cognitive abilities!"

To reach their conclusions, the two researchers combined machine learning models with experimental data on how strongly proteins involved in gene regulation bind to their regulatory sequences in different tissues, and then performed evolutionary comparisons between human, chimpanzee and gorilla. "We now know which are the positively selected regions controlling gene expression in the human brain. And the more we learn about the genes they are controlling, the more complete our understanding of cognition and evolution, and the more scope there will be to act on that understanding," concludes Marc Robinson-Rechavi.

Read more at Science Daily

The human helpers of SARS-CoV-2

 Like all viruses, the novel coronavirus is dependent on help from the human host cell. Proteins are the functional units of the cell and enable the virus to enter the host cell or help the virus to replicate. Scientists from Charité -- Universitätsmedizin Berlin and from the Berlin Institute of Health (BIH), along with colleagues from the United Kingdom, Germany and the United States, have examined the corresponding genes of the helper proteins in a large study. They discovered numerous variants that influence the amount or function of the proteins as well as their ability to support the virus. The gene variants reveal potential target structures for new drugs. The researchers have now published their results in the journal Nature Communications.

An infection with the novel coronavirus SARS-CoV-2, just like any other viral infection, follows a specific pattern: The viruses first bind to receptor proteins on the surface of the human host cells in the throat, nose or lungs before entering the cell, where they replicate with the help of the host cell machinery. The newly formed virus particles cause the infected cell to burst and infect other cells. As soon as the immune system notices what is happening, a defense mechanism is activated with the goal of destroying and removing both the viruses and virus-infected cells. Under normal circumstances, the infection is over within two weeks at the most. For all these processes, however, the virus is dependent on human or host proteins.

"In severe courses of COVID-19, this regulated process gets out of control and the immune system causes an excessive inflammatory response that attacks not only virus-infected cells but also healthy tissue," says Prof. Dr. Claudia Langenberg, BIH Professor of Computational Medicine and head of the study now being published. "Naturally occurring variations in the genes that make up the blueprint for these human proteins can alter their concentration or function and may thus be responsible for the different course of the disease." The team is well versed in the discovery of genetic variants that not only affect specific proteins but also common complex diseases. "As molecular epidemiologists, we study the diversity of genes -- that is, the building instructions for proteins -- of entire population groups in order to uncover susceptibilities to diseases whose cause lies in the interaction of many small deviations," explains the epidemiologist, who joined the BIH from the Medical Research Council Epidemiology Unit at the University of Cambridge in September. "We wanted to use these experiences and data sets for the COVID-19 epidemic and make them available to the scientific community."

"We examined 179 proteins known to be involved in SARS-CoV-2 infection for their naturally occurring variants," reports Dr. Maik Pietzner, the study's lead author and a scientist in Prof. Langenberg's lab. "We were able to draw on findings which were based on samples of the first COVID-19 patients at Charité. " This was possible due to close collaboration with the research group led by Professor Dr. Markus Ralser, Director of the Institute of Biochemistry at Charité, which had previously reported these findings. The team of Prof. Langenberg was able to use data from the MRC Fenland Cohort, a large population study that contains information from more than 10,000 individuals. They discovered 38 targets for existing drugs as well as evidence that certain proteins that interact with the virus influence the immune system. "Our findings also help to better understand risk factors for severe courses of COVID-19. We were able to show that blood coagulation proteins are influenced by the same genetic variant that increases the risk of contracting COVID-19 and that determines blood group 0," reports Dr. Pietzner.

Read more at Science Daily

Astronomers detect possible radio emission from exoplanet

By monitoring the cosmos with a radio telescope array, an international team of scientists has detected radio bursts emanating from the constellation Boötes -- that could be the first radio emission collected from a planet beyond our solar system.

The team, led by Cornell postdoctoral researcher Jake D. Turner, Philippe Zarka of the Observatoire de Paris -- Paris Sciences et Lettres University, and Jean-Mathias Griessmeier of the Université d'Orléans, will publish their findings in the forthcoming research section of Astronomy & Astrophysics on Dec. 16.

"We present one of the first hints of detecting an exoplanet in the radio realm," Turner said. "The signal is from the Tau Boötes system, which contains a binary star and an exoplanet. We make the case for an emission by the planet itself. From the strength and polarization of the radio signal and the planet's magnetic field, it is compatible with theoretical predictions."

Among the co-authors is Turner's postdoctoral advisor Ray Jayawardhana, the Harold Tanner Dean of the College of Arts and Sciences, and a professor of astronomy.

"If confirmed through follow-up observations," Jayawardhana said, "this radio detection opens up a new window on exoplanets, giving us a novel way to examine alien worlds that are tens of light-years away."

Using the Low Frequency Array (LOFAR), a radio telescope in the Netherlands, Turner and his colleagues uncovered emission bursts from a star-system hosting a so-called hot Jupiter, a gaseous giant planet that is very close to its own sun. The group also observed other potential exoplanetary radio-emission candidates in the 55 Cancri (in the constellation Cancer) and Upsilon Andromedae systems. Only the Tau Boötes exoplanet system -- about 51 light-years away -- exhibited a significant radio signature, a unique potential window on the planet's magnetic field.

Observing an exoplanet's magnetic field helps astronomers decipher a planet's interior and atmospheric properties, as well as the physics of star-planet interactions, said Turner, a member of Cornell's Carl Sagan Institute.

Earth's magnetic field protects it from solar wind dangers, keeping the planet habitable. "The magnetic field of Earth-like exoplanets may contribute to their possible habitability," Turner said, "by shielding their own atmospheres from solar wind and cosmic rays, and protecting the planet from atmospheric loss."

Two years ago, Turner and his colleagues examined the radio emission signature of Jupiter and scaled those emissions to mimic the possible signatures from a distant Jupiter-like exoplanet. Those results became the template for searching radio emission from exoplanets 40 to 100 light-years away.
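
The scaling step rests on the inverse-square law: received flux falls with the square of the distance to the source. A schematic of that geometry (the distances are the article's figures; no actual flux values from the study are used):

```python
AU_PER_LY = 63_241  # astronomical units per light-year

d_jupiter_au = 4.2                # typical Earth-Jupiter distance
d_tau_bootes_au = 51 * AU_PER_LY  # Tau Bootes, ~51 light-years away

dilution = (d_tau_bootes_au / d_jupiter_au) ** 2
print(f"flux dilution factor: ~{dilution:.1e}")
# ~6e11: an emitter at Tau Bootes must be intrinsically far brighter than
# Jupiter (as a hot Jupiter close to its star may be) to remain detectable.
```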

After poring over nearly 100 hours of radio observations, the researchers were able to find the expected hot Jupiter signature in Tau Boötes. "We learned from our own Jupiter what this kind of detection looks like. We went searching for it and we found it," Turner said.

The signature, though, is weak. "There remains some uncertainty that the detected radio signal is from the planet. The need for follow-up observations is critical," he said.

Turner and his team have already begun a campaign using multiple radio telescopes to follow up on the signal from Tau Boötes.

Read more at Science Daily

Dec 15, 2020

The farthest galaxy in the universe

 A team of astronomers used the Keck I telescope to measure the distance to an ancient galaxy. They deduced the target galaxy GN-z11 is not only the oldest galaxy but also the most distant. It's so distant it defines the very boundary of the observable universe itself. The team hopes this study can shed light on a period of cosmological history when the universe was only a few hundred million years old.

We've all asked ourselves the big questions at times: "How big is the universe?" or "How and when did galaxies form?" Astronomers take these questions very seriously, and use fantastic tools that push the boundaries of technology to try and answer them. Professor Nobunari Kashikawa from the Department of Astronomy at the University of Tokyo is driven by his curiosity about galaxies. In particular, he sought the most distant one we can observe in order to find out how and when it came to be.

"From previous studies, the galaxy GN-z11 seems to be the farthest detectable galaxy from us, at 13.4 billion light years, or 134 nonillion kilometers (that's 134 followed by 30 zeros)," said Kashikawa. "But measuring and verifying such a distance is not an easy task."

Kashikawa and his team measured what's known as the redshift of GN-z11; this refers to the way light stretches out and becomes redder the farther it travels. Certain chemical signatures, called emission lines, imprint distinct patterns in the light from distant objects. By measuring how stretched these telltale signatures are, astronomers can deduce how far the light must have traveled, thus giving away the distance from the target galaxy.
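
In quantitative terms, light emitted at rest wavelength lambda arrives stretched to lambda * (1 + z). A worked example (the rest wavelength and redshift below are representative values close to GN-z11's published redshift, not figures quoted in this article):

```python
def observed_wavelength(rest_nm: float, z: float) -> float:
    # light emitted at rest_nm is stretched by a factor of (1 + z)
    return rest_nm * (1 + z)

rest_uv_nm = 190.9  # a rest-frame ultraviolet carbon line, ~1909 Angstroms
z = 11.0
obs = observed_wavelength(rest_uv_nm, z)
print(f"observed at ~{obs:.0f} nm = {obs / 1000:.2f} micrometers")
# ~2.3 micrometers: stretched out of the ultraviolet entirely, into the
# near-infrared range accessible to ground-based spectrographs.
```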

"We looked at ultraviolet light specifically, as that is the area of the electromagnetic spectrum we expected to find the redshifted chemical signatures," said Kashikawa. "The Hubble Space Telescope detected the signature multiple times in the spectrum of GN-z11. However, even the Hubble cannot resolve ultraviolet emission lines to the degree we needed. So we turned to a more up-to-date ground-based spectrograph, an instrument to measure emission lines, called MOSFIRE, which is mounted to the Keck I telescope in Hawaii."

MOSFIRE captured the emission lines from GN-z11 in detail, allowing the team to make a much better estimate of the galaxy's distance than was possible from previous data. When working with distances at these scales, it is not sensible to use our familiar units of kilometers or even multiples of them; instead, astronomers use a value known as the redshift number denoted by z. Kashikawa and his team improved the accuracy of the galaxy's z value by a factor of 100. If subsequent observations can confirm this, then the astronomers can confidently say GN-z11 is the farthest galaxy ever detected in the universe.

From Science Daily

Researchers identify where giant jets from black holes discharge their energy

The supermassive black holes at the centers of galaxies are the most massive objects in the universe. They range from about 1 million to upwards of 10 billion times the mass of the Sun. Some of these black holes also blast out gigantic, super-heated jets of plasma at nearly the speed of light. The primary way that the jets discharge this powerful kinetic energy is by converting it into extremely high-energy gamma rays. However, UMBC physics Ph.D. candidate Adam Leah Harvey says, "How exactly this radiation is created is an open question."

The jet has to discharge its energy somewhere, and previous work doesn't agree on where. The prime candidates are two regions made of gas and light that encircle black holes, called the broad-line region and the molecular torus.

A black hole's jet has the potential to convert visible and infrared light in either region to high-energy gamma rays by giving away some of its energy. Harvey's new NASA-funded research sheds light on this controversy by offering strong evidence that the jets mostly release energy in the molecular torus, and not in the broad-line region. The study was published in Nature Communications and co-authored by UMBC physicists Markos Georganopoulos and Eileen Meyer.

Far out

The broad-line region is closer to the center of a black hole, at a distance of about 0.3 light-years. The molecular torus is much farther out -- more than 3 light-years. While all of these distances seem huge to a non-astronomer, the new work "tells us that we're getting energy dissipation far away from the black hole at the relevant scales," Harvey explains.

"The implications are extremely important for our understanding of jets launched by black holes," Harvey says. Which region primarily absorbs the jet's energy offers clues to how the jets initially form, pick up speed, and become column-shaped. For example, "It indicates that the jet is not accelerated enough at smaller scales to start to dissipate energy," Harvey says.

Other researchers have proposed contradictory ideas about the jets' structure and behavior. Because of the trusted methods Harvey used in their new work, however, they expect the results to be broadly accepted in the scientific community. "The results basically help to constrain those possibilities -- those different models -- of jet formation."

On solid footing

To come to their conclusions, Harvey applied a standard statistical technique called "bootstrapping" to data from 62 observations of black hole jets. "A lot of what came before this paper has been very model-dependent. Other papers have made a lot of very specific assumptions, whereas our method is extremely general," Harvey explains. "There isn't much to undermine the analysis. It's well-understood methods, and just using observational data. So the result should be correct."

A quantity called the seed factor was central to the analysis. The seed factor indicates where the light waves that the jet converts to gamma rays come from. If the conversion happens at the molecular torus, one seed factor is expected. If it happens at the broad-line region, the seed factor will be different.

Georganopoulos, associate professor of physics and one of Harvey's advisors, originally developed the seed factor concept, but "applying the idea of the seed factor had to wait for someone with a lot of perseverance, and this someone was Adam Leah," Georganopoulos says.

Harvey calculated the seed factors for all 62 observations. They found that the seed factors fell in a normal distribution aligned almost perfectly around the expected value for the molecular torus. That result strongly suggests that the energy from the jet is discharging into light waves in the molecular torus, and not in the broad-line region.
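
Schematically, that analysis can be re-created as follows (the numbers are synthetic stand-ins for the study's data, and the two candidate seed factors are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
TORUS, BROAD_LINE = 10.0, 12.0  # hypothetical expected seed factor values
seed_factors = rng.normal(TORUS, 0.5, size=62)  # stand-in for 62 observations

# Bootstrap: resample with replacement to estimate the mean's distribution
boot_means = [rng.choice(seed_factors, size=62, replace=True).mean()
              for _ in range(10_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap 95% interval for the mean seed factor: [{lo:.2f}, {hi:.2f}]")
# If the interval brackets the torus value and excludes the broad-line
# value, the data favor energy dissipation in the molecular torus.
```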

Tangents and searches

Harvey shares that the support of their mentors, Georganopoulos and Meyer, assistant professor of physics, was instrumental to the project's success. "I think that without them letting me go off on a lot of tangents and searches of how to do things, this would have never gotten to the level that it's at," Harvey says. "Because they allowed me to really dig into it, I was able to pull out a lot more from this project."

Harvey identifies as an "observational astronomer," but adds, "I'm really more of a data scientist and a statistician than I am a physicist." And the statistics has been the most exciting part of this work, they say.

Read more at Science Daily

To the brain, reading computer code is not the same as reading language

 In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

"Understanding computer code seems to be its own thing. It's not the same as language, and it's not the same as math and logic," says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

A major focus of Fedorenko's research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain's language network, which includes Broca's area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

"Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn't be any hardwired mechanisms that make us good programmers," Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages that the researchers focused on in this study are known for their readability -- Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.
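
For a sense of the task, here is a hypothetical stimulus of the kind described, invented for illustration rather than taken from the study's materials:

```python
# Participants read a short snippet like this and predicted its output.
words = ["brain", "code", "language"]
total = 0
for w in words:
    if len(w) > 4:
        total += 1
print(total)  # a proficient reader should predict this prints 2
```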

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

"It does pretty much anything that's cognitively challenging, that makes you think hard," Ivanova says.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.

Effects of experience

The researchers say that while they didn't identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

"It's possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system," Fedorenko says. "In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn't seem like you see any specialization yet."

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn't a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that's because learning to program may draw on both language and multiple demand systems, even if -- once learned -- programming doesn't rely on the language regions, the researchers say.

"There have been claims from both camps -- it has to be together with math, it has to be together with language," Ivanova says. "But it looks like computer science educators will have to develop their own approaches for teaching code most effectively."

Read more at Science Daily

Drug may boost vaccine responses in older adults

 A drug that boosts the removal of cellular debris in immune cells may increase the protective effects of vaccines in older adults, a study published today in eLife shows.

The results may lead to new approaches to protect older individuals from viruses such as the one causing the current COVID-19 pandemic and influenza.

"Older adults are at high risk of being severely affected by infectious diseases, but unfortunately most vaccines in this age group are less efficient than in younger adults," explains lead author Ghada Alsaleh, a postdoctoral researcher at the Kennedy Institute of Rheumatology, University of Oxford, UK.

Previously, Alsaleh and colleagues showed that, in older mice, immune cells may become less efficient at removing cellular debris, a process called autophagy, and that this leads to a poorer immune response in the animals. In the current study, they looked at samples from young and older people participating in clinical trials for vaccines against the respiratory syncytial virus and the hepatitis C virus to see if the same event happens in human immune cells called T cells. They found that autophagy increases in T cells from younger people after receiving vaccines, but this response is blunted in older people.

When they examined T cells from the older individuals in the laboratory, the team found that these cells have less of a natural compound called spermidine. Spermidine ramps up autophagy and boosts T-cell function. Supplementing these older immune cells with spermidine in the laboratory restored autophagy to the same levels seen in T cells from younger people. "Our work suggests that boosting autophagy during vaccination may help make vaccines more effective for older people," Alsaleh says.

A small clinical trial recently tested whether giving spermidine to older adults would improve their cognitive function. As the results were positive, and spermidine did not appear to have any harmful effects, this provides some evidence that it would be safe to test whether spermidine might also be helpful for boosting the immune response of older people to vaccines.

"Our findings will inform vaccine trials in which autophagy-boosting agents, such as spermidine, are given in a controlled environment to older participants," concludes senior author Anna Katharina Simon, Professor of Immunology at the University of Oxford. "It will be interesting to see whether these agents can enhance vaccination efficiency and help protect older people from viral infections."

From Science Daily

Type and abundance of mouth bacteria linked to lung cancer risk in non-smokers

 The type and abundance of bacteria found in the mouth may be linked to lung cancer risk in non-smokers, finds the first study of its kind, published online in the journal Thorax.

Fewer species and high numbers of particular types of bacteria seem to be linked to heightened risk, the findings indicate.

Around one in four cases of lung cancer occurs in non-smokers, and known risk factors, such as secondhand tobacco smoke, background radon exposure, air pollution, and family history of lung cancer, don't fully explain these figures, say the researchers.

The type and volume of bacteria (the microbiome) found in the mouth have been associated with a heightened risk of various cancers, including those of the gullet, head and neck, and pancreas.

And the researchers wanted to find out if this association might also hold for lung cancer, given that the mouth is the entry point for bacteria to the lungs.

They drew on participants in The Shanghai Women's Health Study and the Shanghai Men's Health Study, all of whom were lifelong non-smokers, and whose health was monitored every 2-3 years after entry to the study between 1996 and 2006.

At enrolment, participants rinsed out their mouths to provide a profile of the resident bacteria, and information was obtained on lifestyle, diet, medical history and other environmental and workplace factors that might influence their disease risk.

In all, 90 of the women and 24 of the men developed lung cancer within around 7 years, on average.

These cases were matched with 114 non-smokers of the same age and sex, who also provided a mouth rinse sample. This comparison group didn't have lung cancer but they had similar levels of education and family histories of lung cancer.

Comparison of both sets of rinse samples showed that the microbiome differed between the two groups. A wider range of bacterial species was associated with a lower risk of developing lung cancer. And a larger volume of particular types of species was also associated with lung cancer risk.

A larger volume of organisms from the Bacteroidetes and Spirochaetes phyla was associated with lower risk, while a larger volume of Firmicutes organisms was associated with heightened risk.

Specifically, within the Spirochaetes phylum, a greater abundance of the class Spirochaetia was associated with lower risk; and within the Firmicutes phylum, a larger volume of organisms from the Lactobacillales order of microbes was associated with a heightened risk.

The associations remained when the analysis was restricted to those participants who had not taken any antibiotics in the 7 days before sample collection and after excluding those diagnosed with lung cancer within 2 years of sample provision.

This is an observational study, and therefore can't establish cause. And the researchers acknowledge several limitations. "While our study provides evidence that variation in the oral microbiome plays a role in lung cancer risk, the interpretation of our study must be done while considering the caveat that our findings are from a single time point in a single geographical location," they write.

In a linked editorial, Dr David Christiani, of Harvard University, suggests that mouth bacteria may provoke chronic inflammation, boost cell proliferation, inhibit cell death, prompt DNA changes, and switch on cancer genes and their blood supply, which would help to explain the findings.

The study findings raise several questions, he says. "First, how stable is the human oral microbiome over time? Second, if the human oral microbiome varies over time, what determines that variability? Third, how does the ambient environment such as exposure to air pollutants, affect the oral (and lung) microbiome?"

Read more at Science Daily

Dec 14, 2020

Invasive harlequin ladybird causes severe decline of two-spotted ladybird, new study shows

 CABI scientists have led an 11-year study which shows how the invasive harlequin ladybird (Harmonia axyridis) caused the severe decline of the two-spotted ladybird (Adalia bipunctata) on broadleaved trees and shrubs in northern Switzerland.

Dr Marc Kenis, Head of Risk Analysis and Invasion Ecology at CABI's Swiss Centre in Delémont and lead author of the research, published in the journal Insects, said the two-spotted ladybird was the most abundant ladybird at the 40 sites surveyed before the harlequin ladybird took hold; the surveys ran from 2006 to 2017.

The scientists discovered that the harlequin ladybird -- which is a predator native to Central and East Asia and whose presence was confirmed in Switzerland in 2004 -- quickly came to dominate the broadleaved hedges, representing 60-80% of all specimens collected in this habitat.

However, while the harlequin ladybird was the second most abundant species in pine stands it was not abundant in meadows and spruce stands. Furthermore, the total number of ladybirds feeding on aphids did not decline during the study period -- suggesting that the arrival of the harlequin ladybird did not affect the predation pressure on aphids.

The harlequin ladybird is considered a human nuisance when it aggregates in buildings in autumn and can taint wine when harvested and crushed with grapes. Of most concern, however, is its impact on biodiversity.

Due to its predatory and competitive abilities, H. axyridis may affect many native species, including non-pest aphids and aphidophagous insects. In particular, native ladybirds may suffer from competition for resources and intra-guild predation (IGP) on larvae and eggs.

In North America, several studies showed that H. axyridis is displacing native ladybirds. Similar observations were made in Chile, where it is also invasive. In Europe, the first analyses, made a few years after the establishment of H. axyridis in the UK, Belgium and Switzerland, suggested that several native ladybird species had started declining.

Dr Kenis said, "Our long-term survey of ladybirds in north-western Switzerland showed that, on broadleaved trees and shrubs, H. axyridis has become by far the most abundant species just a few years after its arrival in Switzerland. Similar levels of dominance on broadleaved trees were also found recently in other European countries such as England, the Czech Republic and Italy.

"Although it is known that ladybird populations can vary greatly from year to year, the fact that A. bipunctata has almost disappeared from our records since 2010 strongly suggests that this decline in populations is not due to natural fluctuations in populations but more probably to the presence of H. axyridis."

The researchers note that Adalia bipunctata, a Holarctic species, also showed one of the strongest declines in eastern North America following the invasion of H. axyridis and three other exotic ladybird species.

However, they point out that in none of the European and American studies previously carried out was the decline as strong as in their surveys in north-western Switzerland. For example, in England, A. bipunctata abundance in 2016 had fallen to approximately 16% of its level in the first surveys in 2006.

Co-author Dr René Eschen said, "Our long-term monitoring of ladybird populations in north-western Switzerland clearly showed that H. axyridis quickly became the dominant species on broadleaved trees and shrubs just a few years after its arrival, but not yet on conifers and grasses.

"Only one native species, A. bipunctata, clearly declined following the invasion of H. axyridis, but this once dominant species almost disappeared from the region. The severe decline of A. bipunctata deserves further investigations."

The long-term field trial was set up in north-western Switzerland in the cantons Jura, Basel-Landschaft, Basel-Stadt and Aargau, based on 40 permanent sites established within a 40 km distance from the town of Delémont.

Read more at Science Daily

Unexpected insights into early dinosaur's brain, eating habits and agility

 A pioneering reconstruction of the brain belonging to one of the earliest dinosaurs to roam the Earth has shed new light on its possible diet and ability to move fast.

Research, led by the University of Bristol, used advanced imaging and 3-D modelling techniques to digitally rebuild the brain of Thecodontosaurus, better known as the Bristol dinosaur due to its origins in the UK city. The palaeontologists found Thecodontosaurus may have eaten meat, unlike its giant long-necked later relatives including Diplodocus and Brontosaurus, which only fed on plants.

Antonio Ballell, lead author of the study published today in Zoological Journal of the Linnean Society, said: "Our analysis of Thecodontosaurus' brain uncovered many fascinating features, some of which were quite surprising. Whereas its later relatives moved around ponderously on all fours, our findings suggest this species may have walked on two legs and been occasionally carnivorous."

Thecodontosaurus lived in the Late Triassic, some 205 million years ago, and was the size of a large dog. Its fossils were discovered in the 1800s, and many are carefully preserved at the University of Bristol, but scientists have only very recently been able to deploy imaging software to extract new information without destroying them. 3-D models were generated from CT scans by digitally extracting the bone from the rock, revealing anatomical details about its brain and inner ear previously unseen in the fossil.

"Even though the actual brain is long gone, the software allows us to recreate brain and inner ear shape via the dimensions of the cavities left behind. The braincase of Thecodontosaurus is beautifully preserved so we compared it to other dinosaurs, identifying common features and some that are specific to Thecodontosaurus," Antonio said. "Its brain cast even showed the detail of the floccular lobes, located at the back of the brain, which are important for balance. Their large size indicate it was bipedal. This structure is also associated with the control of balance and eye and neck movements, suggesting Thecodontosaurus was relatively agile and could keep a stable gaze while moving fast."

Although Thecodontosaurus is known for being relatively small and agile, its diet has been debated.

Antonio, a PhD student at the University of Bristol's School of Earth Sciences, said: "Our analysis showed parts of the brain associated with keeping the head stable and eyes and gaze steady during movement were well-developed. This could also mean Thecodontosaurus could occasionally catch prey, although its tooth morphology suggests plants were the main component of its diet. It's possible it adopted omnivorous habits."

The researchers were also able to reconstruct the inner ears, allowing them to estimate how well it could hear compared to other dinosaurs. Its hearing frequency was relatively high, pointing towards some sort of social complexity -- an ability to recognise varied squeaks and honks from different animals.

Professor Mike Benton, study co-author, said: "It's great to see how new technologies are allowing us to find out even more about how this little dinosaur lived more than 200 million years ago."

Read more at Science Daily

Chance played a major role in keeping Earth fit for life

 A study by the University of Southampton gives a new perspective on why our planet has managed to stay habitable for billions of years -- concluding it is almost certainly due, at least in part, to luck. The research suggests this may reduce the chances of finding life on so-called 'twin-Earths' in the Universe.

The research, published in the Nature journal Communications Earth & Environment, involved conducting the first ever simulation of climate evolution on thousands of randomly generated planets.

Geological data demonstrate that Earth's climate has remained continuously habitable for more than three billion years. However, it has been precariously balanced, with the potential to rapidly deteriorate to deep-frozen or intolerably hot conditions causing planet-wide sterility.

Professor Toby Tyrrell, a specialist in Earth System Science at the University of Southampton, explains: "A continuously stable and habitable climate on Earth is quite puzzling. Our neighbours, Mars and Venus, do not have habitable temperatures, even though Mars once did. Earth not only has a habitable temperature today, but has kept this at all times across three to four billion years -- an extraordinary span of geological time."

Many events can threaten the continuous stability of a planet -- asteroid impacts, solar flares and major geological events, such as eruptions of supervolcanoes. Indeed, an asteroid which hit the Earth 66 million years ago caused the extinction of more than 75 per cent of all species, killing off the dinosaurs along with many other species.

Previous computer modelling work on Earth habitability has involved modelling a single planet: Earth. But, inspired by discoveries of exoplanets (those outside of our solar system) that reveal that there are billions of Earth-like planets in our galaxy alone, a Southampton scientist took a novel approach to investigating a big question: what has led Earth to remain life-sustaining for so long?

To explore this, Professor Tyrrell tapped into the power of the University of Southampton's Iridis supercomputing facility to run simulations looking at how 100,000 randomly different planets responded to random climate-altering events spread out across three billion years, until they reached a point where they lost their habitability. Each planet was simulated 100 times, with different random events each time.

Having accrued a vast set of results, he then looked to see whether habitability persistence was restricted to just a few planets which were always capable of sustaining life for three billion years, or instead was spread around many different planets, each of which only sometimes stayed habitable for this period.

The results of the simulation were very clear. Most of those planets which remained life-sustaining throughout the three-billion-year period had only a probability, not a certainty, of staying habitable. Many were planets that usually failed in the simulations and only occasionally remained habitable. Out of a total population of 100,000 planets, about nine percent (8,700) were successful at least once; of those, nearly all (about 8,000) were successful fewer than 50 times out of 100, and most (about 4,500) were successful fewer than 10 times out of 100.
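
To make the rerun-and-tally design concrete, here is a minimal toy sketch in Python. It is not the study's climate model: the habitable band, feedback strengths, drift and shock sizes below are all invented for illustration, and only the overall structure -- random planets, repeated reruns, counting how often each stays habitable -- mirrors the approach described above.

```python
import random

HABITABLE = (-10.0, 60.0)   # toy habitable temperature band (degrees C)
N_PLANETS = 200             # scaled down from the study's 100,000 planets
N_RERUNS = 100              # the study reran each planet 100 times
N_STEPS = 200               # coarse steps standing in for 3 billion years

def stays_habitable(feedback, rng):
    """Random-walk climate with a planet-specific stabilising feedback
    and rare large perturbations (asteroid strikes, supervolcanoes)."""
    temp = rng.uniform(*HABITABLE)
    for _ in range(N_STEPS):
        temp += rng.gauss(0.0, 2.0)          # background climate drift
        if rng.random() < 0.02:              # occasional big shock
            temp += rng.gauss(0.0, 15.0)
        temp -= feedback * (temp - 25.0)     # pull back toward mid-range
        if not HABITABLE[0] < temp < HABITABLE[1]:
            return False
    return True

rng = random.Random(42)
wins_per_planet = []
for _ in range(N_PLANETS):
    feedback = rng.uniform(0.0, 0.2)         # each planet's random "design"
    wins = sum(stays_habitable(feedback, rng) for _ in range(N_RERUNS))
    wins_per_planet.append(wins)

ever = sum(w > 0 for w in wins_per_planet)
always = sum(w == N_RERUNS for w in wins_per_planet)
print(f"habitable at least once:  {ever}/{N_PLANETS} planets")
print(f"habitable in every rerun: {always}/{N_PLANETS} planets")
```

Run with toy parameters like these, most "winning" planets win only some of their reruns, which is the qualitative pattern the study reports.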

The study results suggest chance is a major factor in determining whether planets, such as Earth, can continue to nurture life over billions of years. Professor Tyrrell concludes: "We can now understand that Earth stayed suitable for life for so long due, at least in part, to luck. For instance, if a slightly larger asteroid had hit Earth, or had done so at a different time, then Earth may have lost its habitability altogether.

"To put it another way, if an intelligent observer had been present on the early Earth as life first evolved, and was able to calculate the chances of the planet staying habitable for the next several billion years, the calculation may well have revealed very poor odds."

Read more at Science Daily

One's trash, another's treasure: Fertilizer made from urine could enable space agriculture

 In extreme environments, even the most ordinary tasks can seem like insurmountable challenges. Because of such difficulties, humanity has, for the most part, settled on grounds that were favorable for harvesting crops, herding cattle, and building shelters. But as we seek to expand the limits of human exploration, both on Earth and in space, the people pioneering this effort will undoubtedly face conditions that, for all intents and purposes, are not conducive to human habitation.

One of the foremost challenges facing any intended long-term settlement, be it in the Antarctic or on Mars (perhaps in the near future), is achieving some degree of autonomy, to enable isolated colonies to survive even in the event of a catastrophic failure in provisioning. And the key to achieving this autonomy is ensuring food sufficiency and self-sustenance. Unsurprisingly, therefore, space agricultural technology is one of the research topics currently being pursued by the Research Center for Space Colony at Tokyo University of Science. The researchers there hope to spearhead the technological development for safe and sustainable space agriculture -- with the aim of sustaining humans for a long time in an extremely closed environment such as a space station.

To this end, an innovative study was conducted by a team of Japanese researchers led by Junior Associate Professor Norihiro Suzuki from Tokyo University of Science and published in the New Journal of Chemistry of the Royal Society of Chemistry. In this study, Dr. Suzuki and his team aimed to address the problem of food production in closed environments, such as those in a space station.

Realizing that farmers have used animal waste as fertilizer for thousands of years as a rich source of nitrogen, Dr. Suzuki and his team have been investigating the possibility of making a liquid fertilizer from urea, the main component of urine. This would simultaneously address the problem of human waste treatment and management in space! As Dr. Suzuki explains, "This process is of interest from the perspective of making a useful product, i.e., ammonia, from a waste product, i.e., urine, using common equipment at atmospheric pressure and room temperature."

The research team -- which also includes Akihiro Okazaki, Kai Takagi, and Izumi Serizawa from ORC Manufacturing Co. Ltd., Japan -- devised an "electrochemical" process to derive ammonium ions (commonly found in standard fertilizers) from an artificial urine sample. Their experimental setup was simple: on one side, there was a "reaction" cell, with a "boron-doped diamond" (BDD) electrode and a light-inducible catalyst or "photocatalyst" material made of titanium dioxide. On the other, there was a "counter" cell with a simple platinum electrode. As current is passed into the reaction cell, urea is oxidized, forming ammonium ions. Dr. Suzuki describes this breakthrough as follows, "I joined the 'Space Agriteam' involved in food production, and my research specialization is in physical chemistry; therefore, I came up with the idea of 'electrochemically' making a liquid fertilizer."
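
The summary names the end product, ammonium ions, but not the reaction stoichiometry. Purely for orientation, one balanced overall route from urea to ammonium can be written as

$$\mathrm{CO(NH_2)_2 + 3\,H_2O \;\rightarrow\; 2\,NH_4^+ + HCO_3^- + OH^-}$$

though this is the simple hydrolysis balance, offered only as an illustration; in the team's cell the urea is oxidized electrochemically at the BDD electrode, and some of the nitrogen also ends up in other ions, as described next.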

The research team then examined whether the cell would be more efficient in the presence of the photocatalyst, by comparing the reaction of the cell with and without it. They found that while the initial depletion of urea was more or less the same, the nitrogen-based ions produced varied both in time and distribution when the photocatalyst was introduced. Notably, the concentration of nitrite and nitrate ions was not as elevated in the presence of the photocatalyst. This suggests that the presence of the photocatalyst promoted ammonium ion formation.

Dr. Suzuki states, "We are planning to perform the experiment with actual urine samples, because urine contains not only primary elements (phosphorus, nitrogen, potassium) but also secondary elements (sulfur, calcium, magnesium) that are vital for plant nutrition!" The team is therefore optimistic that this method provides a solid basis for the manufacture of liquid fertilizer in enclosed spaces and, as Dr. Suzuki observes, "It will turn out to be useful for sustaining long-term stays in extremely closed spaces such as space stations."

Read more at Science Daily

The moon controls the release of methane in Arctic Ocean

 It may not be very well known, but the Arctic Ocean leaks enormous amounts of the potent greenhouse gas methane. These leaks have been ongoing for thousands of years but could be intensified by a future warmer ocean. The potential for this gas to escape the ocean, and contribute to the greenhouse gas budget in the atmosphere, is an important mystery that scientists are trying to solve.

The total amount of methane in the atmosphere has increased immensely over the past decades, and while some of the increase can be ascribed to human activity, other sources are not very well constrained.

A recent paper in Nature Communications even implies that the moon has a role to play.

Small pressure changes affect methane release

The moon controls one of the most formidable forces in nature -- the tides that shape our coastlines. Tides, in turn, significantly affect the intensity of methane emissions from the Arctic Ocean seafloor.

"We noticed that gas accumulations, which are in the sediments within a meter from the seafloor, are vulnerable to even slight pressure changes in the water column. Low tide means less of such hydrostatic pressure and higher intensity of methane release. High tide equals high pressure and lower intensity of the release" says co-author of the paper Andreia Plaza Faverola.

"It is the first time that this observation has been made in the Arctic Ocean. It means that slight pressure changes can release significant amounts of methane. This is a game-changer and the highest impact of the study." Says another co-author, Jochen Knies.

New methods reveal unknown release sites

Plaza Faverola points out that the observations were made by placing a tool called a piezometer in the sediments and leaving it there for four days.

It measured the pressure and temperature of the water inside the pores of the sediment. Hourly changes in the measured pressure and temperature revealed the presence of gas close to the seafloor that ascends and descends as the tides change. The measurements were made in an area of the Arctic Ocean where no methane release has previously been observed but where massive gas hydrate concentrations have been sampled.

"This tells us that gas release from the seafloor is more widespread than we can see using traditional sonar surveys. We saw no bubbles or columns of gas in the water. Gas burps that have a periodicity of several hours won't be identified unless there is a permanent monitoring tool in place, such as the piezometer." Says Plaza Faverola

These observations imply that present-day gas emissions in the Arctic may be underestimated. High tides, however, seem to temper the emissions, reducing their height and volume.

"What we found was unexpected and the implications are big. This is a deep-water site. Small changes in pressure can increase the gas emissions but the methane will still stay in the ocean due to the water depth. But what happens in shallower sites? This approach needs to be done in shallow Arctic waters as well, over a longer period. In shallow water, the possibility that methane will reach the atmosphere is greater." Says Knies.

May counteract the temperature effects

High sea level thus seems to influence gas emissions by potentially reducing their height and volume. The question remains whether sea-level rise due to global warming might partially counterbalance the effect of temperature on submarine methane emissions.

"Earth systems are interconnected in ways that we are still deciphering, and our study reveals one of such interconnections in the Arctic: The moon causes tidal forces, the tides generate pressure changes, and bottom currents that in turn shape the seafloor and impact submarine methane emissions. Fascinating!" says Andreia Plaza Faverola.

Read more at Science Daily

Dec 13, 2020

Exoplanet around distant star resembles reputed 'Planet Nine' in our solar system

 

Astronomers are still searching for a hypothetical "Planet Nine" in the distant reaches of our solar system, but an exoplanet 336 light years from Earth is looking more and more like the Planet Nine of its star system.

Planet Nine, potentially 10 times the mass of Earth and orbiting far beyond Neptune in a highly eccentric orbit about the sun, was proposed in 2012 to explain perturbations in the orbits of dwarf planets just beyond Neptune's orbit, so-called detached Kuiper Belt objects. It has yet to be found, if it exists.

A similarly weird extrasolar planet was discovered far from the star HD 106906 in 2013, the only such wide-separation planet known. While much heavier than the predicted mass of Planet Nine -- perhaps 11 times the mass of Jupiter, or 3,500 times the mass of Earth -- it, too, was sitting in a very unexpected location, far above the dust plane of the planetary system and tilted at an angle of about 21 degrees.

The big question, until now, has been whether the planet, called HD 106906 b, is in an orbit perpetually bound to the binary star -- which is a mere 15 million years old compared to the 4.5 billion-year age of our sun -- or whether it's on its way out of the planetary system, never to return.

In a paper appearing Dec. 10 in the Astronomical Journal, astronomers finally answer that question. By precisely tracking the planet's position over 14 years, they determined that it is likely bound to the star in a 15,000-year, highly eccentric orbit, making it a distant cousin of Planet Nine.

If it is in a highly eccentric orbit around the binary, "This raises the question of how did these planets get out there to such large separations," said Meiji Nguyen, a recent UC Berkeley graduate and first author of the paper. "Were they scattered from the inner solar system? Or, did they form out there?"

According to senior author Paul Kalas, University of California, Berkeley, adjunct professor of astronomy, the resemblance to the orbit of the proposed Planet Nine shows that such distant planets can really exist, and that they may form within the first tens of millions of years of a star's life. And based on the team's other recent discoveries about HD 106906, the planet seems to favor a scenario where passing stars also play a role.

"Something happens very early that starts kicking planets and comets outward, and then you have passing stars that stabilize their orbits," he said. "We are slowly accumulating the evidence needed to understand the diversity of extrasolar planets and how that relates to the puzzling aspects of our own solar system."

A young, dusty star with a weird planet

HD 106906 is a binary star system located in the direction of the constellation Crux. Astronomers have studied it extensively for the past 15 years because of its prominent disk of dust, which could be birthing planets. Our solar system may have looked like HD 106906 about 4.5 billion years ago as the planets formed in the swirling disk of debris left over from the formation of the sun.

Surprisingly, images of the star taken in 2013 by the Magellan Telescopes in Chile revealed a planet glowing from its own internal heat and sitting at an unusually large distance from the binary: 737 times farther from the binary than Earth is from the sun (737 astronomical units, or AU). That's 25 times farther from the star than Neptune is from the sun.

Kalas, who searches for planets and dust disks around young stars, co-led a team that used the Gemini Planet Imager on the Gemini South Telescope to obtain the first images of the star's debris disk. In 2015, these observations provided evidence that led theorists to propose that the planet formed close to the binary star and was kicked out because of gravitational interactions with the binary. The evidence: The stars' outer dust disk and inner comet belt are lopsided, suggesting that something -- the planet -- perturbed their symmetry.

"The idea is that every time the planet comes to its closest approach to the binary star, it stirs up the material in the disk," said team member Robert De Rosa of the European Southern Observatory in Santiago, Chile, who is a former UC Berkeley postdoctoral fellow. "So, every time the planet comes through, it truncates the disk and pushes it up on one side. This scenario has been tested with simulations of this system with the planet on a similar orbit -- this was before we knew what the orbit of the planet was."

The problem, as pointed out by those simulating such planet interactions, is that a planet would normally be kicked out of the system entirely, becoming a rogue planet. Some other interaction, perhaps with a passing star, would be necessary to stabilize the orbit of an eccentric planet like HD 106906 b.

A similar scenario has been proposed for the formation of Planet Nine: that its interaction with our giant planets early in our solar system's history kicked it out of the inner solar system, after which passing stars in our local cluster stabilized its orbit.

Kalas went looking for such a fly-by star for HD 106906 b, and last year he and De Rosa, then at Stanford University, reported finding several nearby stars that would have zipped by the planetary system 3 million years earlier, perhaps providing the nudge needed to stabilize the planet's orbit.

Now, with precise measurements of the planet's orbit between 2004 and 2018, Nguyen, De Rosa and Kalas present evidence that the planet is most likely in a stable, but very elliptical, orbit around its binary star.

"Though it's only been 14 years of observations, we were still able to, surprisingly, get a constraint on the orbit for the first time, confirming our suspicion that it was very misaligned and also that the planet is on an approximately 15,000-year orbit." Nguyen said. "The fact that our results are consistent with predictions is, I think, a strong piece of evidence that this planet is, indeed, bound. In the future, a radial velocity measurement is needed to confirm our findings."

The science team's orbital measurements came from comparing astrometric data from the European Space Agency's Gaia observatory, which accurately maps the positions of billions of stars, and images from the Hubble Space Telescope. Because Hubble must obscure the glare from the binary star to see the dimmer debris disk, astronomers were unable to determine the exact position of the star relative to HD 106906 b. Gaia data allowed the team to determine the binary's position more precisely, and thus chart the movement of the planet relative to the binary between 2004 and 2018, less than one-thousandth of its orbital period.

"We can harness the extremely precise astrometry from Gaia to infer where the primary star should be in our Hubble images, and then measuring the position of the companion is rather trivial," Nguyen said.

In addition to confirming the planet's 15,000-year orbit, the team found that the orbit is actually tilted much more severely relative to the plane of the disk: between 36 and 44 degrees. At its closest approach to the binary, its elliptical orbit would take it no closer than about 500 AU from the stars, implying that it has no effect on inner planets also suspected to be part of the system. That is also the case with Planet Nine, which has no observed effect on any of the sun's eight planets.

Read more at Science Daily

Obesity impairs immune cell function, accelerates tumor growth

 

Obesity has been linked to increased risk for over a dozen different types of cancer, as well as worse prognosis and survival. Over the years, scientists have identified obesity-related processes that drive tumor growth, such as metabolic changes and chronic inflammation, but a detailed understanding of the interplay between obesity and cancer has remained elusive.

Now, in a study in mice, Harvard Medical School researchers have uncovered a new piece of this puzzle, with surprising implications for cancer immunotherapy: Obesity allows cancer cells to outcompete tumor-killing immune cells in a battle for fuel.

Reporting in Cell on Dec. 9, the research team shows that a high-fat diet reduces the numbers and antitumor activity of CD8+ T cells, a critical type of immune cell, inside tumors. This occurs because cancer cells reprogram their metabolism in response to increased fat availability to better gobble up energy-rich fat molecules, depriving T cells of fuel and accelerating tumor growth.

"Putting the same tumor in obese and nonobese settings reveals that cancer cells rewire their metabolism in response to a high fat diet," said Marcia Haigis, professor of cell biology in the Blavatnik Institute at HMS and co-senior author of the study. "This finding suggests that a therapy that would potentially work in one setting might not be as effective in another, which needs to be better understood given the obesity epidemic in our society."

The team found that blocking this fat-related metabolic reprogramming significantly reduced tumor volume in mice on high-fat diets. Because CD8+ T cells are the main weapon used by immunotherapies that activate the immune system against cancer, the study results suggest new strategies for improving such therapies.

"Cancer immunotherapies are making an enormous impact on patients' lives, but they do not benefit everyone," said co-senior author Arlene Sharpe, the HMS George Fabyan Professor of Comparative Pathology and chair of the Department of Immunology in the Blavatnik Institute.

"We now know there is a metabolic tug-of-war between T cells and tumor cells that changes with obesity," Sharpe said. "Our study provides a roadmap to explore this interplay, which can help us to start thinking about cancer immunotherapies and combination therapies in new ways."

Haigis, Sharpe and colleagues investigated the effects of obesity on mouse models of different types of cancer, including colorectal, breast, melanoma and lung. Led by study co-first authors Alison Ringel and Jefte Drijvers, the team gave mice normal or high-fat diets, the latter leading to increased body weight and other obesity-related changes. They then looked at different cell types and molecules inside and around tumors, together called the tumor microenvironment.

Fatty paradox

The researchers found that tumors grew much more rapidly in animals on high-fat diets compared to those on normal diets. But this occurred only in cancer types that are immunogenic, which can contain high numbers of immune cells; are more easily recognized by the immune system; and are more likely to provoke an immune response.

Experiments revealed that diet-related differences in tumor growth depended specifically on the activity of CD8+ T cells, immune cells that can target and kill cancer cells. Diet did not affect tumor growth rate if CD8+ T cells were eliminated experimentally in mice.

Strikingly, high-fat diets reduced the presence of CD8+ T cells in the tumor microenvironment, but not elsewhere in the body. Those remaining in the tumor were less robust -- they divided more slowly and had markers of decreased activity. But when these cells were isolated and grown in a lab, they had normal activity, suggesting something in the tumor impaired these cells' function.

The team also encountered an apparent paradox. In obese animals, the tumor microenvironment was depleted of key free fatty acids, a major cellular fuel source, even though the rest of the body was enriched in fats, as expected in obesity.

These clues pushed the researchers to craft a comprehensive atlas of the metabolic profiles of different cell types in tumors under normal and high-fat diet conditions.

The analyses revealed that cancer cells adapted in response to changes in fat availability. Under a high-fat diet, cancer cells were able to reprogram their metabolism to increase fat uptake and utilization, while CD8+ T cells did not. This ultimately depleted the tumor microenvironment of certain fatty acids, leaving T cells starved for this essential fuel.

"The paradoxical depletion of fatty acids was one of the most surprising findings of this study. It really blew us away and it was the launch pad for our analyses," said Ringel, a postdoctoral fellow in the Haigis lab. "That obesity and whole-body metabolism can change how different cells in tumors utilize fuel was an exciting discovery, and our metabolic atlas now allows us to dissect and better understand these processes."

Hot and cold

Through several different approaches, including single-cell gene expression analyses, large-scale protein surveys and high-resolution imaging, the team identified numerous diet-related changes to metabolic pathways of both cancer and immune cells in the tumor microenvironment.

Of particular interest was PHD3, a protein that in normal cells has been shown to act as a brake on excessive fat metabolism. Cancer cells in an obese environment had significantly lower expression of PHD3 than in a normal environment. When the researchers forced tumor cells to overexpress PHD3, they found that this diminished a tumor's ability to take up fat in obese mice. It also restored the availability of key free fatty acids in the tumor microenvironment.

Increased PHD3 expression largely reversed the negative effects of a high-fat diet on immune cell function in tumors. Tumors with high PHD3 grew slower in obese mice compared to tumors with low PHD3. This was a direct result of increased CD8+ T cell activity. In obese mice lacking CD8+ T cells, tumor growth was unaffected by differences in PHD3 expression.

The team also analyzed human tumor databases and found that low PHD3 expression was associated with immunologically "cold" tumors, defined by lower numbers of immune cells. This association suggested that tumor fat metabolism plays a role in human disease and that obesity reduces antitumor immunity in multiple cancer types, the authors said.

"CD8+ T cells are the central focus of many promising precision cancer therapies, including vaccines and cell therapies such as CAR-T," Sharpe said. "These approaches need T cells to have sufficient energy to kill cancer cells, but at the same time we don't want tumors to have fuel to grow. We now have amazingly comprehensive data for studying this dynamic and determining mechanisms that prevent T cells from functioning as they should."

More broadly, the results serve as a foundation for efforts to better understand how obesity affects cancer and the impact of patient metabolism on therapeutic outcomes, the authors said. While it's too early to tell if PHD3 is the best therapeutic target, the findings open the door for new strategies to combat cancer through its metabolic vulnerabilities, they said.

"We're interested in identifying pathways that we could use as potential targets to prevent cancer growth and to increase immune antitumor function," Haigis said. "Our study provides a high-resolution metabolic atlas to mine for insights into obesity, tumor immunity and the crosstalk and competition between immune and tumor cells. There are likely many other cell types involved and many more pathways to be explored."

Read more at Science Daily