Apr 17, 2021

Uncovering the secrets of some of the world's first color photographs

It is often said that before air travel our skies were bluer. Yet how, in the 21st century, could we ever know what light and colors were like one hundred years ago? Recently, a group of researchers from EPFL's Audiovisual Communications Laboratory, in the School of Computer and Communication Sciences (IC), had a unique opportunity to try to find out.

The researchers were offered access to some of the original photographic plates and images of the scientist and inventor Gabriel Lippmann, who won the 1908 Nobel Prize in physics for his method of reproducing colors in photography -- treasures normally locked away in the vaults of a handful of museums.

In a paper just published in the Proceedings of the National Academy of Sciences (PNAS), the authors explain that most photographic techniques take just three measurements, for red, green and blue. Lippmann's historical approach, they discovered, typically captured 26 to 64 spectral samples of information in the visible region. His technique is based on the same interference principles that recently enabled gravitational waves to be detected and that underlie holography and much of modern interferometric imaging, yet it has been almost completely forgotten today.

"These are the earliest multi-spectral light measurements on record so we wondered whether it would be possible to accurately recreate the original light of these historical scenes," said Gilles Baechler, one of the paper's authors, "but the way the photographs were constructed was very particular so we were also really interested in whether we could create digital copies and understand how the technique worked."

The researchers found that the multi-spectral images reflected from a Lippmann plate contained distortions, although the reproduced colors looked accurate to the eye. When they examined the full spectrum reflected from a Lippmann plate, and compared it to the original, they measured a number of inconsistencies, many of which have never been documented, even in modern studies.

"We ended up modeling the full process from the multi-spectral image that you capture, all the way to recording it into the photograph. We were able to capture the light reflected back from it and measure how it differed from the original," explained Baechler. So, could the team replicate century old light?

"With the historic plates there are factors in the process that we just cannot know but because we understood how the light differed, we could create an algorithm to get back the original light that was captured. We were able to study invertibility, that is, given a spectrum produced by a Lippmann photograph we know it is possible to undo the distortions and reconstruct the original input spectrum. When we got our hands dirty and made our own plates using the historical process, we were able to verify that the modeling was correct," he continued.

While fully modeling a Nobel-prize-winning imaging technique is of significant interest in its own right, the researchers believe that revisiting Lippmann's photographic technique can inspire new technological developments this century.

Read more at Science Daily

Coronavirus does not infect the brain but still inflicts damage, study finds

SARS-CoV-2, the virus that causes COVID-19, likely does not directly infect the brain but can still inflict significant neurological damage, according to a new study from neuropathologists, neurologists, and neuroradiologists at Columbia University Vagelos College of Physicians and Surgeons.

"There's been considerable debate about whether this virus infects the brain, but we were unable to find any signs of virus inside brain cells of more than 40 COVID-19 patients," says James E. Goldman, MD, PhD, professor of pathology & cell biology (in psychiatry), who led the study with Peter D. Canoll, MD, PhD, professor of pathology & cell biology, and Kiran T. Thakur, MD, the Winifred Mercer Pitkin Assistant Professor of Neurology.

"At the same time, we observed many pathological changes in these brains, which could explain why severely ill patients experience confusion and delirium and other serious neurological effects -- and why those with mild cases may experience 'brain fog' for weeks and months."

The study, published in the journal Brain, is the largest and most detailed COVID-19 brain autopsy report published to date. It suggests that the neurological changes often seen in these patients may result from inflammation triggered by the virus in other parts of the body or in the brain's blood vessels.

No Virus in Brain Cells

The study examined the brains of 41 patients with COVID-19 who succumbed to the disease during their hospitalization. The patients ranged in age from 38 to 97; about half had been intubated and all had lung damage caused by the virus. Many of the patients were of Hispanic ethnicity. Lengths of hospital stay varied widely, with some patients dying soon after arrival at the emergency room while others remained in the hospital for months. All of the patients had extensive clinical and laboratory investigations, and some had brain MRI and CT scans.

To detect any virus in the neurons and glial cells of the brain, the researchers used multiple methods including RNA in situ hybridization, which can detect viral RNA within intact cells; antibodies that can detect viral proteins within cells; and RT-PCR, a sensitive technique for detecting viral RNA.

Despite their intensive search, the researchers found no evidence of the virus in the patients' brain cells. Though they did detect very low levels of viral RNA by RT-PCR, this was likely due to virus in blood vessels or leptomeninges covering the brain.

"We've looked at more brains than other studies, and we've used more techniques to search for the virus. The bottom line is that we find no evidence of viral RNA or protein in brain cells," Goldman says. "Though there are some papers that claim to have found virus in neurons or glia, we think that those result from contamination, and any virus in the brain is contained within the brain's blood vessels." "If there's any virus present in the brain tissue, it has to be in very small amounts and does not correlate with the distribution or abundance of neuropathological findings," Canoll says.

The tests were conducted on more than two dozen brain regions, including the olfactory bulb, which was searched because some reports have speculated that the coronavirus can travel from the nasal cavity into the brain via the olfactory nerve. "Even there, we didn't find any viral protein or RNA," Goldman says, "though we found viral RNA and protein in the patients' nasal mucosa and in the olfactory mucosa high in the nasal cavity." (The latter finding appears in an unpublished study, currently on BioRxiv, led by Jonathan Overdevest, MD, PhD, assistant professor of otolaryngology, and Stavros Lomvardas, PhD, professor of biochemistry & molecular biophysics and neuroscience.)

Hypoxic Damage and Signs of Neuronal Death

Despite the absence of virus in the brain, in every patient the researchers found significant brain pathology, which mostly fell into two categories.

"The first thing we noticed was a lot of areas with damage from a lack of oxygen," Goldman says. "They all had severe lung disease, so it's not surprising that there's hypoxic damage in the brain."

Some of these were large areas caused by strokes, but most were very small and only detectable with a microscope. Based on other features, the researchers believe these small areas of hypoxic damage were caused by blood clots, common in patients with severe COVID-19, that temporarily stopped the supply of oxygen to that area.

A more surprising finding, Goldman says, was the large number of activated microglia they found in the brains of most patients. Microglia are immune cells that reside in the brain and can be activated by pathogens.

"We found clusters of microglia attacking neurons, a process called 'neuronophagia,'" says Canoll. Since no virus was found in the brain, it's possible the microglia may have been activated by inflammatory cytokines, such as Interleukin-6, associated with SARS-CoV-2 infection.

"At the same time, hypoxia can induce the expression of 'eat me' signals on the surface of neurons, making hypoxic neurons more vulnerable to activated microglia," Canoll says, "so even without directly infecting brain cells, COVID-19 can cause damage to the brain."

The group found this pattern of pathology in one of their first autopsies, described by Osama Al-Dalahmah, MD, PhD, instructor in pathology & cell biology, in a case report published last March in Acta Neuropathologica Communications. Over the next few months, as the neuropathologists did many more COVID brain autopsies, they saw similar findings over and over again and realized that this is a prominent and common neuropathological finding in patients who die of COVID.

The activated microglia were found predominantly in the lower brain stem, which regulates heart and breathing rhythms, as well as levels of consciousness, and in the hippocampus, which is involved in memory and mood.

"We know the microglia activity will lead to loss of neurons, and that loss is permanent," Goldman says. "Is there enough loss of neurons in the hippocampus to cause memory problems? Or in other parts of the brain that help direct our attention? It's possible, but we really don't know at this point."

Persistent Neurological Problems in Survivors

Goldman says that more research is needed to understand the reasons why some post-COVID-19 patients continue to experience symptoms.

The researchers are now examining autopsies on patients who died several months after recovering from COVID-19 to learn more.

They are also examining the brains from patients who were critically ill with acute respiratory distress syndrome (ARDS) before the COVID-19 pandemic to see how much of COVID-19 brain pathology is a result of the severe lung disease.

Read more at Science Daily

Apr 16, 2021

The light-bending dance of binary black holes

 A pair of orbiting black holes millions of times the Sun's mass perform a hypnotic pas de deux in a new NASA visualization. The movie traces how the black holes distort and redirect light emanating from the maelstrom of hot gas -- called an accretion disk -- that surrounds each one.

Viewed from near the orbital plane, each accretion disk takes on a characteristic double-humped look. But as one passes in front of the other, the gravity of the foreground black hole transforms its partner into a rapidly changing sequence of arcs. These distortions play out as light from both disks navigates the tangled fabric of space and time near the black holes.

"We're seeing two supermassive black holes, a larger one with 200 million solar masses and a smaller companion weighing half as much," said Jeremy Schnittman, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, who created the visualization. "These are the kinds of black hole binary systems where we think both members could maintain accretion disks lasting millions of years."

The accretion disks have different colors, red and blue, to make it easier to track the light sources, but the choice also reflects reality. Hotter gas gives off light closer to the blue end of the spectrum, and material orbiting smaller black holes experiences stronger gravitational effects that produce higher temperatures. For these masses, both accretion disks would actually emit most of their light in the UV, with the blue disk reaching a slightly higher temperature.

Visualizations like this help scientists picture the fascinating consequences of extreme gravity's funhouse mirror. The new video doubles down on an earlier one Schnittman produced showing a solitary black hole from various angles.

Video: https://www.youtube.com/watch?v=rQcKIN9vj3U&t=1s

Seen nearly edgewise, the accretion disks look noticeably brighter on one side. Gravitational distortion alters the paths of light coming from different parts of the disks, producing the warped image. The rapid motion of gas near the black hole modifies the disk's luminosity through a phenomenon called Doppler boosting -- an effect of Einstein's relativity theory that brightens the side rotating toward the viewer and dims the side spinning away.
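The size of that brightening can be estimated from special relativity alone. The sketch below is a hedged illustration with assumed numbers (gas orbiting at 0.4c, viewed nearly along its direction of motion); the actual visualization also folds in gravitational redshift and lensing.

```python
# Minimal sketch of the Doppler-boosting factor described above, using special relativity
# only. The speed and viewing angle are assumed, illustrative values.
import numpy as np

def doppler_factor(beta, theta):
    """delta = 1 / (gamma * (1 - beta * cos(theta))), with theta the angle between the
    gas velocity and the line of sight and beta = v / c."""
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

beta = 0.4                              # orbital speed of the emitting gas, as a fraction of c
theta_approach = np.radians(20.0)       # gas on the side swinging toward the viewer
theta_recede = np.pi - theta_approach   # gas on the side swinging away

d_app = doppler_factor(beta, theta_approach)
d_rec = doppler_factor(beta, theta_recede)
# For a continuously emitting flow, observed intensity scales roughly as delta**3,
# so the approaching side appears brighter and the receding side dimmer.
print(f"approaching side ~{(d_app / d_rec) ** 3:.0f}x brighter than the receding side")
```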

The visualization also shows a more subtle phenomenon called relativistic aberration. The black holes appear smaller as they approach the viewer and larger when moving away.

These effects disappear when viewing the system from above, but new features emerge. Both black holes produce small images of their partners that circle around them each orbit. Looking closer, it's clear that these images are actually edge-on views. To produce them, light from the black holes must be redirected by 90 degrees, which means we're observing the black holes from two different perspectives -- face on and edge on -- at the same time.

"A striking aspect of this new visualization is the self-similar nature of the images produced by gravitational lensing," Schnittman explained. "Zooming into each black hole reveals multiple, increasingly distorted images of its partner."

Schnittman created the visualization by computing the path taken by light rays from the accretion disks as they made their way through the warped space-time around the black holes. On a modern desktop computer, the calculations needed to make the movie frames would have taken about a decade. So Schnittman teamed up with Goddard data scientist Brian P. Powell to use the Discover supercomputer at the NASA Center for Climate Simulation. Using just 2% of Discover's 129,000 processors, these computations took about a day.
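As a much-simplified, hypothetical illustration of what such a ray-tracing step looks like, the sketch below integrates a single photon past one non-spinning black hole using the standard Schwarzschild null-geodesic equation; the real visualization traces vast numbers of rays through the combined, time-varying space-time of two black holes, which is what demanded the supercomputer.

```python
# Minimal sketch (not NASA's ray tracer): one light ray bending around a single
# non-spinning (Schwarzschild) black hole, in geometric units G = c = 1.
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0            # black hole mass
b = 8.0 * M        # impact parameter of the incoming ray (assumed value)
u0 = 1.0 / 1000.0  # start far away, at r = 1000 M

def photon(phi, y):
    # y = [u, du/dphi] with u = 1/r; Schwarzschild null geodesic: u'' = 3 M u^2 - u
    u, up = y
    return [up, 3.0 * M * u ** 2 - u]

def captured(phi, y):                  # ray crosses the horizon, r = 2M
    return y[0] - 1.0 / (2.0 * M)
captured.terminal = True

def escaped(phi, y):                   # ray heads back out to large radius
    return y[0] - 0.5 * u0
escaped.terminal = True

up0 = np.sqrt(1.0 / b ** 2 - u0 ** 2 + 2.0 * M * u0 ** 3)   # from the null condition
sol = solve_ivp(photon, (0.0, 4.0 * np.pi), [u0, up0],
                events=[captured, escaped], max_step=0.01)

if sol.t_events[0].size:
    print("ray captured by the black hole")
else:
    r_min = 1.0 / sol.y[0].max()
    print(f"ray escapes; closest approach ~ {r_min:.1f} M, "
          f"bending through ~{np.degrees(sol.t[-1] - np.pi):.0f} extra degrees")
```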

Read more at Science Daily

Fast radio bursts shown to include lower frequency radio waves than previously detected

Since fast radio bursts (FRBs) were first discovered over a decade ago, scientists have puzzled over what could be generating these intense flashes of radio waves from outside of our galaxy. In a gradual process of elimination, the field of possible explanations has narrowed as new pieces of information are gathered about FRBs -- how long they last, the frequencies of the radio waves detected, and so on.

Now, a team led by McGill University researchers and members of Canada's CHIME Fast Radio Burst collaboration has established that FRBs include radio waves at frequencies lower than ever detected before, a discovery that redraws the boundaries for theoretical astrophysicists trying to put their finger on the source of FRBs.

"We detected fast radio bursts down to 110 MHz where before these bursts were only known to exist down to 300 MHz," explained Ziggy Pleunis, a postdoctoral researcher in McGill's Department of Physics and lead author of the research recently published in the Astrophysical Journal Letters. "This tells us that the region around the source of the bursts must be transparent to low-frequency emission, whereas some theories suggested that all low-frequency emission would be absorbed right away and could never be detected."

The study focussed on an FRB source first detected in 2018 by the CHIME radio telescope in British Columbia. Known as FRB 20180916B, the source has attracted particular attention because of its relative proximity to Earth and the fact that it emits FRBs at regular intervals.

The research team combined the capacities of CHIME with those of another radio telescope, LOFAR, or Low Frequency Array, in the Netherlands. The joint effort not only enabled the detection of the remarkably low FRB frequencies, but also revealed a consistent delay of around three days between the higher frequencies being picked up by CHIME and the lower ones reaching LOFAR.

Read more at Science Daily

Triangular-shaped spikes key to coronavirus transmission, finds new study

COVID-19 needs no introduction. Last year, the disease, which is caused by the virus SARS-CoV-2, reached every continent across the globe. By the end of March 2021, there had been an estimated 128 million cases recorded, with almost three million of these being fatal. As scientists race to develop vaccines and politicians coordinate their distribution, fundamental research on what makes this virus so successful is also being carried out.

Within the Mathematics, Mechanics, and Materials Unit at the Okinawa Institute of Science and Technology Graduate University (OIST), postdoctoral researcher Dr. Vikash Chaurasia and Professor Eliot Fried have been using energy minimization techniques to look at charged proteins on biological particles. They previously researched cholesterol molecules, but when the pandemic hit, they realized that the methods they had developed could be applied to the new virus. They collaborated with researchers Mona Kanso and Professor Jeffrey Giacomin, from Queen's University in Canada, to take a close look at SARS-CoV-2 and see how the shape of the virus' 'spikes' (officially called peplomers) aids its success at spreading so prolifically. Their study was recently published in Physics of Fluids.

"When one envisions a single coronavirus particle, it is common to think of a sphere with many spikes or smaller spheres distributed across its surface," said Dr. Chaurasia. "This is the way the virus was originally modeled. But this model is a rough sketch and over the last year, we've come to learn much more about what the virus looks like."

Instead, Dr. Chaurasia pointed out, the 'spikes' of the coronavirus particle are actually shaped like three small spheres stacked together to form a triangular shape. This is an important consideration because the shape of a viral particle can influence its ability to disperse.

To understand this, imagine a ball moving through space. The ball will follow a curve but, as it does this, it will also rotate. The rate at which the ball's orientation is randomly jostled is called its rotational diffusivity. A particle of SARS-CoV-2 moves in a similar way to this ball, although it is suspended in fluid (specifically, tiny droplets of saliva). The rotational diffusivity of the particle affects how well it can align with and attach itself to objects (such as a person's tissues or cells), and this has been key to its ability to spread from person to person so quickly. A higher rotational diffusivity means that the particle shakes and jitters as it follows a trajectory -- and thus may have difficulty attaching to objects or efficiently bouncing off an object to continue moving through the air. A lower rotational diffusivity has the opposite effect.

Another consideration was the charge of each spike. The researchers assumed that every spike carries the same charge. Like charges repel, so if a particle has only two equally charged spikes, they will sit at opposite poles (as far from each other as possible). As more equally charged spikes are added, they become evenly distributed across the surface of the sphere. This provided the researchers with a geometrical arrangement from which they could calculate the rotational diffusivity.
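Spreading equal charges evenly over a sphere is itself an energy-minimization problem. The sketch below is a hypothetical illustration, not the authors' code: it finds such an arrangement by minimizing the total Coulomb repulsion with projected gradient descent, using an assumed spike count.

```python
# Hypothetical sketch: distribute N equally charged "spikes" on a unit sphere by
# minimizing their mutual Coulomb repulsion (the classic Thomson problem mentioned
# later in the article), using plain projected gradient descent.
import numpy as np

def coulomb_energy_and_grad(x):
    # x: (N, 3) points on the unit sphere; energy = sum over pairs of 1 / distance
    diff = x[:, None, :] - x[None, :, :]                  # pairwise difference vectors
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                        # ignore self-interaction
    energy = 0.5 * np.sum(1.0 / dist)
    grad = -np.sum(diff / dist[..., None] ** 3, axis=1)   # gradient of sum of 1/r terms
    return energy, grad

rng = np.random.default_rng(1)
N = 74                                                    # assumed number of spikes for this illustration
x = rng.standard_normal((N, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)             # random starting points on the sphere

for step in range(2000):
    _, grad = coulomb_energy_and_grad(x)
    # project the gradient onto the sphere's tangent plane, take a small step, renormalize
    grad -= np.sum(grad * x, axis=1, keepdims=True) * x
    x -= 1e-3 * grad
    x /= np.linalg.norm(x, axis=1, keepdims=True)

energy, _ = coulomb_energy_and_grad(x)
print(f"final Coulomb energy for N={N}: {energy:.2f}")
```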

Previously, the researchers looked at a viral particle with 74 spikes. For this new study, they used the same particle but switched out the single-bead spikes for the three-bead triangles. When they did this, the rotational diffusivity of the particle was found to decrease by 39%. Moreover, this trend was found to continue with the addition of more spikes.

This was an important finding -- having a lower rotational diffusivity means that the virus particles can better align and attach themselves to objects and people. Thus, this study suggests that the triangular shaped spikes have contributed to the success of SARS-CoV-2.

"We know it's more complicated than this," explained Dr. Chaurasia. "The spikes might not be equally charged. Or they might be flexible and able to twist themselves. Also, the 'body' of the particle might not be a sphere. So, we plan to do more research in this area."

An additional interesting feature of this research is its connection to a question asked more than a century ago by physicist J. J. Thomson, who explored how a set number of charges will be distributed across a sphere.

"I find it fascinating that a problem considered more than 100 years ago has such relevance for the situation we're in today," said Professor Eliot Fried. "Although this question was first posed primarily from a standpoint of curiosity and intellectual interest, it has turned out to be applicable in unexpected ways. This shows why we mustn't lose site of the importance of fundamental research."

Read more at Science Daily

New study explains why you should look at your food before casting judgment

 The order in which your senses interact with food has a tremendous impact on how much you like it. That's the premise of a new study led by the University of South Florida (USF). The findings published in the Journal of Consumer Psychology show that food tastes better if you see it before smelling it.

Researchers came to this conclusion following four experiments involving cookies, fruit snacks and lemonade. In the first study, nearly 200 participants interacted with the food, with each item wrapped in either an opaque or a transparent package. The team administered each item in different orders: visual before scent, scent before visual, only visual, and only scent. Despite being the same product, participants rated the strawberry-flavored fruit snacks packaged in an envelope as tasting better when they could see the item before smelling it, compared to their counterparts who smelled the item before seeing it. The researchers obtained the same results when they tested taste perception of the cookies.

"This is because being able to see a food item before smelling it helps in processing the scent cue with greater ease, which in turn enhances the food taste perception," said Dipayan Biswas, Frank Harvey Endowed Professor of Marketing at USF. "Basically, scents play a very critical role in influencing taste perceptions; however, interestingly, people can process a scent better in their brains when the scent is preceded by a corresponding visual cue, such as color."

The research team, which includes collaborators from Columbia University and the University of Rhode Island, obtained the same results when it focused on beverages. Researchers poured the same yellow-colored lemonade into lidded clear plastic cups and lidded solid-colored plastic cups that were splashed with artificial lemon-scented oil. Again, participants preferred the drink that they could see before smelling, and they drank more of it. Researchers tested consumption by purposely leaving the drinks in front of participants as they undertook an unrelated task. Additionally, the researchers provided the same drinks with the addition of odorless purple food coloring, a color typically not associated with lemon flavor. In this case, the coloring had a negative effect on taste perception, as it contradicted expectations.

"We tested this to get a better understanding of how the human sensory processing system evaluates a sequence of visual and scent-related cues," Biswas said.

These findings are highly beneficial to supermarkets, and Biswas suggests they consider installing more glass cases to help customers see food items from a distance before smelling them. He also suggests placing strategic displays with photos or samples where they are visible before customers enter a business, helping strengthen taste perceptions of food items, which can increase sales and the overall impression of the business. Biswas emphasizes that the theory also applies to pantry food items, such as potato chips, which may attract more interest if they are sold in transparent packaging.

Read more at Science Daily

Brain regions responsible for intoxicating effects of alcohol

The slurred speech, poor coordination, and sedative effects of drinking too much alcohol may actually be caused by the breakdown of alcohol products produced in the brain, not in the liver as scientists currently think. That is the finding of a new study led by researchers from the University of Maryland School of Medicine (UMSOM) and the National Institute on Alcohol Abuse and Alcoholism. It was published recently in the journal Nature Metabolism and provides new insights into how alcohol may affect the brain and the potential for new treatments for alcohol misuse.

It is well known that the liver is the major organ that metabolizes alcohol, using the enzyme alcohol dehydrogenase to convert alcohol into a compound called acetaldehyde. Acetaldehyde, which has toxic effects, is quickly broken down into a more benign substance called acetate. This occurs through a different enzyme called acetaldehyde dehydrogenase 2 (ALDH2). Until now, alcohol and acetaldehyde, produced by the liver, have been considered important players in triggering the cognitive impairment associated with imbibing. Acetate, on the other hand, was considered relatively unimportant in producing effects like motor impairment, confusion, and slurred speech. Researchers also did not know which brain region or particular brain cells were most important for alcohol metabolism.

To learn more about the role played by the brain in alcohol metabolism, the researchers measured the distribution of the ALDH2 enzyme in the cerebellum, using magnetic resonance (MR) scanners in both mice and human tissue. They observed that ALDH2 was expressed in the cerebellum, in a type of glial cell called an astrocyte, in both human brain tissue and in living mice.

The researchers found that this enzyme controlled the conversion of acetaldehyde into acetate in the brain. They also found alcohol-induced cellular and behavioral effects in specific regions of the brain where this enzyme was expressed. Acetate was found to interact with the brain messenger chemical called GABA, which is known to decrease activity in the nervous system. This decreased activity can lead to drowsiness, impair coordination, and lower normal feelings of inhibition.

"We found ALDH2 was expressed in cells known as astrocytes in the cerebellum, a brain region that controls balance and motor coordination," said Qi Cao, PhD, Assistant Professor of Diagnostic Radiology and Nuclear Medicine at the University of Maryland School of Medicine. "We also found that when ALDH2 was removed from these cells, the mice were resistant to motor impairment inducted by alcohol consumption."

He and his team also found the enzyme ALDH2 in other brain regions responsible for emotional regulation and decision-making (both impaired by excess alcohol consumption), including the hippocampus, amygdala, and prefrontal cortex.

These findings suggest that certain brain regions are important for alcohol metabolism and that abnormalities in the enzyme production in these brain regions can lead to detrimental effects associated with alcohol misuse. They also suggest that acetate produced in the brain and in the liver differ in their ability to affect motor and cognitive function.

Read more at Science Daily

Apr 15, 2021

Satellite map of human pressure on land provides insight on sustainable development

 The coronavirus pandemic has led researchers to switch gears or temporarily abandon projects due to health protocols or not being able to travel. But for Patrick Keys and Elizabeth Barnes, husband and wife scientists at Colorado State University, this past year led to a productive research collaboration.

They teamed up with Neil Carter, assistant professor at the University of Michigan, on a paper published in Environmental Research Letters that outlines a satellite-based map of human pressure on lands around the world.

Keys, lead author and a research scientist in CSU's School of Global Environmental Sustainability, said the team used machine learning to produce the map, which reveals where abrupt changes in the landscape have taken place around the world. The map shows a near-present snapshot of effects from deforestation, mining, expanding road networks, urbanization and increasing agriculture.

"The map we've developed can help people understand important challenges in biodiversity conservation and sustainability in general," said Keys.

This type of a map could be used to monitor progress for the United Nations Sustainable Development Goal 15 (SDG15), "Life on Land," which aims to foster sustainable development while conserving biodiversity.

Eight algorithms to encompass data from around the world


Barnes, an associate professor in CSU's Department of Atmospheric Science, did the heavy lifting on the data side of the project.

While staggering parenting duties with Keys, she wrote code like never before, working with trillions of data points and training up to eight separate algorithms to cover different parts of the world. She then merged the algorithms to provide a seamless classification for the whole planet.

At first, the two researchers had to learn to speak each other's professional language.

"Pat initially had an idea for this research, and I said, 'Machine learning doesn't work that way,'" said Barnes.

She then sketched out the components with him: The input is something we want to be able to see from space, like a satellite image; and the output is some measure of what humans are doing on Earth. The middle part of the equation was machine learning.

Keys said what Barnes designed is a convolutional neural network, which is commonly used for interpreting images. It's similar to how Facebook works when the site suggests tagging friends in a photo.

"It's like our eyes and our brains," he said.

In developing the algorithm, they used existing data that classified human impacts on the planet, factors like roads and buildings, and grazing lands for livestock and deforestation. Then, the convolutional neural network learned how to accurately interpret satellite imagery, based on this existing data.
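As a rough, hypothetical illustration of that setup (not the authors' model), the sketch below shows a small convolutional network in PyTorch that regresses a human-pressure score from a multi-band satellite patch. The patch size, band count, architecture and stand-in data are all assumptions, and the actual study trained up to eight regional networks and merged them.

```python
# Hypothetical sketch: a small CNN mapping a multi-band satellite image patch to a
# human-pressure score, trained against an existing human-pressure map (the "output"
# Barnes describes). All details are illustrative assumptions.
import torch
import torch.nn as nn

class PressureCNN(nn.Module):
    def __init__(self, in_bands: int = 4):       # e.g. red, green, blue, near-infrared
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)              # single human-pressure score per patch

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PressureCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

patches = torch.randn(8, 4, 64, 64)   # stand-in for 64x64-pixel satellite patches
targets = torch.rand(8, 1)            # stand-in for known pressure values at those patches

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(patches), targets)
    loss.backward()
    optimizer.step()
print("training loss:", loss.item())
```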

From an analysis of one country, to the world

The researchers started with Indonesia, a country that has experienced rapid change over the last 20 years. By the end of the summer, after they were confident about what they identified in Indonesia using machine learning, Keys suggested that they look at the entire globe.

"I remember telling him it's not possible," said Barnes. "He knows whenever I say that, I will go back and try and make it work. A week later, we had the whole globe figured out."

Barnes said using machine learning is not fool-proof, and it requires some follow-up to ensure that data are accurate.

"Machine learning will always provide an answer, whether it's garbage or not," she explained. "Our job as scientists is to determine if it is useful."

Keys spent many nights on Google Earth reviewing over 2,000 places on the globe in the year 2000 and then compared those sites with 2019. He noted changes and confirmed the data with Barnes.

The research team also did a deeper dive into three countries -- Guyana, Morocco and Gambia -- to better understand what they found.

In the future, when new satellite data is available, Keys said the team can quickly generate a new map.

"We can plug that data into this now-trained neural network and generate a new map," he said. "If we do that every year, we'll have this sequential data that shows how human pressure on the landscape is changing."

Keys said the research project helped lift his spirits over the last year.

Read more at Science Daily

Reliably measuring oxygen deficiency in rivers or lakes

When wastewater from villages and cities flows into rivers and lakes, large quantities of fats, proteins, sugars and other carbon-containing, organic substances wind up in nature together with the fecal matter. These organic substances are broken down by bacteria that consume oxygen. The larger the volume of wastewater, the better the bacteria thrive. This, however, means the oxygen content of the water continues to decrease until finally the fish, mussels or worms literally run out of air. This has created low-oxygen death zones in many rivers and lakes around the world.

No gold standard for measurements until now

In order to measure how heavily the waters are polluted with organic matter from feces, government bodies and environmental researchers regularly take water samples. One widely used measurement method uses a chemical reaction to determine the content of organic substances. As an international team of scientists now shows, this established method provides values from which the actual degree of the water pollution can hardly be derived. Prof. Helmuth Thomas, Director of Hereon's Institute of Carbon Cycles, is also a contributor to the study, which has now been published in the scientific journal Science Advances. "In the paper, we are therefore also introducing a new method for making the measurements much more reliable in the future," he says.

Using the conventional measurement method, water samples are mixed with the chemicals permanganate or dichromate. These are especially reactive and break down all organic substances in a short time. The quantity of permanganate or dichromate consumed can then be used to determine how much organic substance was contained in the water sample. Experts refer to this measurement as "chemical oxygen demand," or COD. The problem with COD measurements is that they do not differentiate between the organic substances that wind up in the water with the sewage and those that arise naturally -- such as lignin and humic acids -- which are released when wood decays. This means that the water pollution can hardly be distinguished from the natural content of organic substances. "For the Han River in South Korea, for example, we have shown that the pollution with organic substances from wastewater has decreased over the past twenty-five years. The COD measurements, however, still show high values as they were before," says Helmuth Thomas, "because here the natural substances make up a large portion of the organic matter in the water."
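For reference, the arithmetic behind a dichromate-based COD value is simple once the titration is done. The sketch below uses the textbook back-titration formula with invented numbers; it is a generic illustration, not the protocol used in the study.

```python
# Minimal sketch of how a dichromate-based COD value is typically computed from a
# back-titration; all numbers below are made up for illustration.
def chemical_oxygen_demand(blank_titrant_ml, sample_titrant_ml, titrant_molarity, sample_ml):
    """COD in mg O2 per liter.

    The dichromate consumed while oxidizing the organics equals the difference between
    the titrant needed for a blank and for the sample; the factor 8000 converts
    milliequivalents of oxygen (8 g per equivalent) into mg per liter of sample.
    """
    return (blank_titrant_ml - sample_titrant_ml) * titrant_molarity * 8000.0 / sample_ml

print(chemical_oxygen_demand(blank_titrant_ml=10.0, sample_titrant_ml=7.5,
                             titrant_molarity=0.10, sample_ml=20.0), "mg O2/L")  # -> 100.0
```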

Complicated biological analysis

But how can the actual pollution be measured more reliably? A biological measurement method has been established for decades, but it is much more complex than the COD method and is therefore used less often by government bodies and research institutions. In this case, a water sample is taken from the river or lake and the oxygen content of the water is measured as an initial value. Another "parallel sample" is immediately sealed airtight. This water sample then rests for five days. During this time, the bacteria break down the organic substance, gradually consuming the oxygen in the water. After five days, the container is opened and the oxygen is measured again. If the water contains a great deal of organic matter, then the bacteria were particularly active and the oxygen consumption was correspondingly high. Experts refer to this measurement as the "biological oxygen demand" (BOD). "The BOD measurement is far more precise than the COD because the bacteria preferentially break down the small organic molecules from the wastewater but leave the natural ones, such as lignin, untouched," says Thomas. Nevertheless, the BOD measurement has its disadvantages, too. On the one hand, the BOD measurement takes five days, while the COD value is available after a few minutes. On the other, while filling, storing and measuring the water samples, meticulous care must be taken to ensure that no oxygen from the ambient air winds up in the sample and falsifies the measurement value. "Only a few people with a great deal of laboratory experience have fully mastered the BOD measurement," says Thomas. "Therefore, government bodies and researchers even today still prefer the COD despite its greater uncertainties."
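The BOD calculation itself is equally simple; the difficulty lies in the careful five-day handling Thomas describes. A minimal illustration with assumed numbers:

```python
# Minimal sketch of the five-day biological oxygen demand (BOD5) calculation described
# above: dissolved oxygen is measured before and after five days in a sealed bottle, and
# the drop (scaled by any dilution) is the oxygen the bacteria consumed.
def bod5(do_initial_mg_l, do_final_mg_l, dilution_factor=1.0):
    """BOD5 in mg O2 per liter; dilution_factor = bottle volume / sample volume."""
    return (do_initial_mg_l - do_final_mg_l) * dilution_factor

# Example: a sample diluted 1:10 that starts at 9.0 mg/L and ends at 6.5 mg/L after 5 days
print(bod5(9.0, 6.5, dilution_factor=10.0), "mg O2/L")   # -> 25.0 mg O2/L
```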

Read more at Science Daily

Mindfulness can make you selfish

 Mindfulness is big business. Downloads of mindfulness apps generate billions of dollars annually in the U.S., and their popularity continues to rise. In addition to what individual practitioners might have on their phones, schools and prisons along with 1 in 5 employers currently offer some form of mindfulness training.

Mindfulness and meditation are associated with reducing stress and anxiety, while increasing emotional well-being. Plenty of scholarship supports these benefits. But how does mindfulness affect the range of human behaviors -- so-called prosocial behaviors -- that can potentially help or benefit other people? What happens when the research looks outwardly at social effects of mindfulness rather than inwardly at its personal effects?

It's within the area of prosocial behaviors that a new paper by University at Buffalo researchers demonstrates the surprising downsides of mindfulness, while offering easy ways to minimize those consequences -- both of which have practical implications for mindfulness training.

"Mindfulness can make you selfish," says Michael Poulin, PhD, an associate professor of psychology in the UB College of Arts and Sciences and the paper's lead author. "It's a qualified fact, but it's also accurate.

"Mindfulness increased prosocial actions for people who tend to view themselves as more interdependent. However, for people who tend to view themselves as more independent, mindfulness actually decreased prosocial behavior."

The results sound contradictory given the pop culture toehold of mindfulness as an unequivocal positive mental state. But the message here isn't one that dismantles the effectiveness of mindfulness.

"That would be an oversimplification," says Poulin, an expert in stress, coping and prosocial engagement. "Research suggests that mindfulness works, but this study shows that it's a tool, not a prescription, which requires more than a plug-and-play approach if practitioners are to avoid its potential pitfalls."

The findings will appear in a forthcoming issue of the journal Psychological Science.

Poulin says independent versus interdependent mindsets represent an overarching theme in social psychology. Some people think of themselves in singular or independent terms: "I do this." While others think of themselves in plural or interdependent terms: "We do this."

There are also cultural differences layered on top of these perspectives. People in Western nations most often think of themselves as independent, whereas people in East Asian countries more often think of themselves as interdependent. Mindfulness practices originated in East Asian countries, and Poulin speculates that mindfulness may be more clearly prosocial in those contexts. Practicing mindfulness in Western countries removes that context.

"Despite these individual and cultural differences, there is also variability within each person, and any individual at different points in time can think of themselves either way, in singular or plural terms," says Poulin.

The researchers, who included Shira Gabriel, PhD, a UB associate professor of psychology; C. Dale Morrison and Esha Naidu, both UB graduate students; and Lauren M. Ministero, PhD, a UB graduate student at the time of the research who is now a senior behavioral scientist at the MITRE Corporation, used a two-experiment series for their study.

First, they measured 366 participants' characteristic levels of independence versus interdependence before providing either mindfulness instruction or, for the control group, a mind-wandering exercise. Before leaving, participants were told about volunteer opportunities stuffing envelopes for a charitable organization.

In this experiment, mindfulness led to decreased prosocial behavior among those who tended to be independent.

In the next experiment, instead of having a trait simply measured, 325 participants were encouraged to lean one way or the other by engaging in a brief but effective exercise that tends to make people think of themselves in independent or interdependent terms.

The mindfulness training and control procedures were the same as the first experiment, but in this case, participants afterwards were asked if they would sign up to chat online with potential donors to help raise money for a charitable organization.

Mindfulness made those primed for independence 33% less likely to volunteer, but it led to a 40% increase in the likelihood of volunteering for the same organization among those primed for interdependence. The results suggest that pairing mindfulness exercises with instructions that encourage people to think of themselves in terms of their relationships and communities may allow them to see both positive personal and social outcomes.

Read more at Science Daily

Telescopes unite in unprecedented observations of famous black hole

 In April 2019, scientists released the first image of a black hole in galaxy M87 using the Event Horizon Telescope (EHT). However, that remarkable achievement was just the beginning of the science story to be told.

Data from 19 observatories released today promise to give unparalleled insight into this black hole and the system it powers, and to improve tests of Einstein's General Theory of Relativity.

"We knew that the first direct image of a black hole would be groundbreaking," says Kazuhiro Hada of the National Astronomical Observatory of Japan, a co-author of a new study published in The Astrophysical Journal Letters that describes the large set of data. "But to get the most out of this remarkable image, we need to know everything we can about the black hole's behavior at that time by observing over the entire electromagnetic spectrum."

The immense gravitational pull of a supermassive black hole can power jets of particles that travel at almost the speed of light across vast distances. M87's jets produce light spanning the entire electromagnetic spectrum, from radio waves to visible light to gamma rays. This pattern is different for each black hole. Identifying this pattern gives crucial insight into a black hole's properties -- for example, its spin and energy output -- but is a challenge because the pattern changes with time.

Scientists compensated for this variability by coordinating observations with many of the world's most powerful telescopes on the ground and in space, collecting light from across the spectrum. These 2017 observations were the largest simultaneous observing campaign ever undertaken on a supermassive black hole with jets.

Three observatories managed by the Center for Astrophysics | Harvard & Smithsonian participated in the landmark campaign: the Submillimeter Array (SMA) in Hilo, Hawaii; the space-based Chandra X-ray Observatory; and the Very Energetic Radiation Imaging Telescope Array System (VERITAS) in southern Arizona.

Beginning with the EHT's now iconic image of M87, a new video takes viewers on a journey through the data from each telescope. Each consecutive frame shows data across many factors of ten in scale, both of wavelengths of light and physical size.

The sequence begins with the April 2019 image of the black hole. It then moves through images from other radio telescope arrays from around the globe (SMA), moving outward in the field of view during each step. Next, the view changes to telescopes that detect visible light, ultraviolet light, and X-rays (Chandra). The screen splits to show how these images, which cover the same amount of the sky at the same time, compare to one another. The sequence finishes by showing what gamma-ray telescopes on the ground (VERITAS), and Fermi in space, detect from this black hole and its jet.

Each telescope delivers different information about the behavior and impact of the 6.5-billion-solar-mass black hole at the center of M87, which is located about 55 million light-years from Earth.

"There are multiple groups eager to see if their models are a match for these rich observations, and we're excited to see the whole community use this public data set to help us better understand the deep links between black holes and their jets," says co-author Daryl Haggard of McGill University in Montreal, Canada.

The data were collected by a team of 760 scientists and engineers from nearly 200 institutions, spanning 32 countries or regions, and using observatories funded by agencies and institutions around the globe. The observations were concentrated from the end of March to the middle of April 2017.

"This incredible set of observations includes many of the world's best telescopes," says co-author Juan Carlos Algaba of the University of Malaya in Kuala Lumpur, Malaysia. "This is a wonderful example of astronomers around the world working together in the pursuit of science."

The first results show that the intensity of the light produced by material around M87's supermassive black hole was the lowest that had ever been observed. This produced ideal conditions for viewing the 'shadow' of the black hole, as well as being able to isolate the light from regions close to the event horizon from those tens of thousands of light-years away from the black hole.

The combination of data from these telescopes, and current (and future) EHT observations, will allow scientists to conduct important lines of investigation into some of astrophysics' most significant and challenging fields of study. For example, scientists plan to use these data to improve tests of Einstein's Theory of General Relativity. Currently, uncertainties about the material rotating around the black hole and being blasted away in jets, in particular the properties that determine the emitted light, represent a major hurdle for these General Relativity tests.

A related question that is addressed by today's study concerns the origin of energetic particles called "cosmic rays," which continually bombard the Earth from outer space. Their energies can be a million times higher than what can be produced in the most powerful accelerator on Earth, the Large Hadron Collider. The huge jets launched from black holes, like the ones shown in today's images, are thought to be the most likely source of the highest energy cosmic rays, but there are many questions about the details, including the precise locations where the particles get accelerated. Because cosmic rays produce light via their collisions, the highest-energy gamma rays can pinpoint this location, and the new study indicates that these gamma-rays are likely not produced near the event horizon -- at least not in 2017. A key to settling this debate will be comparison to the observations from 2018, and the new data being collected this week.

"Understanding the particle acceleration is really central to our understanding of both the EHT image as well as the jets, in all their 'colors'," says co-author Sera Markoff from the University of Amsterdam. "These jets manage to transport energy released by the black hole out to scales larger than the host galaxy, like a huge power cord. Our results will help us calculate the amount of power carried, and the effect the black hole's jets have on its environment."

The release of this new treasure trove of data coincides with the EHT's 2021 observing run, which leverages a worldwide array of radio dishes, the first since 2018. Last year's campaign was canceled because of the COVID-19 pandemic, and the previous year was suspended because of unforeseen technical problems. This very week, for six nights, EHT astronomers are targeting several supermassive black holes: the one in M87 again, the one in our Galaxy called Sagittarius A*, and several more distant black holes. Compared to 2017, the array has been improved by adding three more radio telescopes: the Greenland Telescope, the Kitt Peak 12-meter Telescope in Arizona, and the NOrthern Extended Millimeter Array (NOEMA) in France.

"With the release of these data, combined with the resumption of observing and an improved EHT, we know many exciting new results are on the horizon," says co-author Mislav Balokovi? of Yale University.

Read more at Science Daily

Apr 14, 2021

New approach to centuries-old 'three-body problem'

 The "three-body problem," the term coined for predicting the motion of three gravitating bodies in space, is essential for understanding a variety of astrophysical processes as well as a large class of mechanical problems, and has occupied some of the world's best physicists, astronomers and mathematicians for over three centuries. Their attempts have led to the discovery of several important fields of science; yet its solution remained a mystery.

At the end of the 17th century, Sir Isaac Newton succeeded in explaining the motion of the planets around the sun through a law of universal gravitation. He also sought to explain the motion of the moon. Since both the earth and the sun determine the motion of the moon, Newton became interested in the problem of predicting the motion of three bodies moving in space under the influence of their mutual gravitational attraction, a problem that later became known as "the three-body problem."

However, unlike the two-body problem, Newton was unable to obtain a general mathematical solution for it. Indeed, the three-body problem proved easy to define, yet difficult to solve.

New research, led by Professor Barak Kol at Hebrew University of Jerusalem's Racah Institute of Physics, adds a step to this scientific journey that began with Newton, touching on the limits of scientific prediction and the role of chaos in it.

The theoretical study presents a novel and exact reduction of the problem, enabled by a re-examination of the basic concepts that underlie previous theories. It allows for a precise prediction of the probability for each of the three bodies to escape the system.

Following Newton and two centuries of fruitful research in the field, including by Euler, Lagrange and Jacobi, the mathematician Poincaré discovered by the late 19th century that the problem exhibits extreme sensitivity to the bodies' initial positions and velocities. This sensitivity, which later became known as chaos, has far-reaching implications -- it indicates that there is no deterministic closed-form solution to the three-body problem.

In the 20th century, the development of computers made it possible to re-examine the problem with the help of computerized simulations of the bodies' motion. The simulations showed that under some general assumptions, a three-body system experiences periods of chaotic, or random, motion alternating with periods of regular motion, until finally the system disintegrates into a pair of bodies orbiting their common center of mass and a third one moving away, or escaping, from them.
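A toy numerical experiment makes this sensitivity concrete. The sketch below is a hypothetical illustration, not the study's code: it integrates two copies of the same three-body system whose starting positions differ by one part in a million and reports how far apart the trajectories end up.

```python
# Hypothetical sketch of the sensitivity Poincaré identified: integrate two nearly
# identical three-body systems and watch them diverge. Units: G = 1, all masses = 1;
# the bodies start on Lagrange's rotating equilateral triangle, an exact but unstable
# configuration.
import numpy as np
from scipy.integrate import solve_ivp

def three_body(t, state):
    # state = [x1, y1, ..., x3, y3, vx1, vy1, ..., vx3, vy3] for three planar bodies
    pos = state[:6].reshape(3, 2)
    vel = state[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += r / np.linalg.norm(r) ** 3   # G = m = 1
    return np.concatenate([vel.ravel(), acc.ravel()])

# Equilateral triangle of circumradius 1, rotating at the rate that balances gravity
angles = np.radians([90.0, 210.0, 330.0])
pos0 = np.column_stack([np.cos(angles), np.sin(angles)])
omega = 3.0 ** -0.25                                   # omega^2 = G*m / (sqrt(3) * R^3)
vel0 = omega * np.column_stack([-pos0[:, 1], pos0[:, 0]])

state_a = np.concatenate([pos0.ravel(), vel0.ravel()])
state_b = state_a.copy()
state_b[0] += 1e-6                                     # nudge one coordinate very slightly

t_eval = np.linspace(0.0, 60.0, 601)
sol_a = solve_ivp(three_body, (0.0, 60.0), state_a, t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_b = solve_ivp(three_body, (0.0, 60.0), state_b, t_eval=t_eval, rtol=1e-9, atol=1e-9)

gap = np.linalg.norm(sol_a.y[:6] - sol_b.y[:6], axis=0)
print(f"separation between the two runs: {gap[0]:.1e} at t=0, {gap[-1]:.1e} at t=60")
```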

The chaotic nature implies that not only is a closed-form solution impossible, but also computer simulations cannot provide specific and reliable long-term predictions. However, the availability of large sets of simulations led in 1976 to the idea of seeking a statistical prediction of the system, and in particular, predicting the escape probability of each of the three bodies. In this sense, the original goal, to find a deterministic solution, was found to be wrong, and it was recognized that the right goal is to find a statistical solution.

Determining the statistical solution has proven to be no easy task due to three features of this problem: the system presents chaotic motion that alternates with regular motion; it is unbounded; and it is susceptible to disintegration. A year ago, Racah's Dr. Nicholas Stone and his colleagues used a new method of calculation and, for the first time, achieved a closed mathematical expression for the statistical solution. However, this method, like all its predecessor statistical approaches, rests on certain assumptions. Inspired by these results, Kol initiated a re-examination of these assumptions.

The infinite unbounded range of the gravitational force suggests the appearance of infinite probabilities through the so-called infinite phase-space volume. To avoid this pathology, and for other reasons, all previous attempts postulated a somewhat arbitrary "strong interaction region," and accounted only for configurations within it in the calculation of probabilities.

The new study, recently published in the scientific journal Celestial Mechanics and Dynamical Astronomy, focuses on the outgoing flux of phase-volume, rather than the phase-volume itself. Since the flux is finite even when the volume is infinite, this flux-based approach avoids the artificial problem of infinite probabilities, without ever introducing the artificial strong interaction region.

The flux-based theory predicts the escape probabilities of each body, under a certain assumption. The predictions differ from those of all previous frameworks, and Prof. Kol emphasizes that "tests by millions of computer simulations show strong agreement between theory and simulation." The simulations were carried out in collaboration with Viraj Manwadkar from the University of Chicago, Alessandro Trani from the Okinawa Institute in Japan, and Nathan Leigh from the University of Concepción in Chile. This agreement proves that understanding the system requires a paradigm shift and that the new conceptual basis describes the system well. It turns out, then, that even for the foundations of such an old problem, innovation is possible.

Read more at Science Daily

Unlocking richer intracellular recordings

 Behind every heartbeat and brain signal is a massive orchestra of electrical activity. While current electrophysiology observation techniques have been mostly limited to extracellular recordings, a forward-thinking group of researchers from Carnegie Mellon University and Istituto Italiano di Tecnologia has identified a flexible, low-cost, and biocompatible platform for enabling richer intracellular recordings.

The group's unique "across the ocean" partnership started two years ago at the Bioelectronics Winter School (BioEl) with libations and a bar napkin sketch. It has evolved into research published today in Science Advances, detailing a novel microelectrode platform that leverages three-dimensional fuzzy graphene (3DFG) to enable richer intracellular recordings of cardiac action potentials with high signal to noise ratio. This advancement could revolutionize ongoing research related to neurodegenerative and cardiac diseases, as well as the development of new therapeutic strategies.

A key leader in this work, Tzahi Cohen-Karni, associate professor of biomedical engineering and materials science and engineering, has studied the properties, effects, and potential applications of graphene throughout his entire career. Now, he is taking a collaborative step in a different direction, using a vertically-grown orientation of the extraordinary carbon-based material (3DFG) to access the intracellular compartment of the cell and record intracellular electrical activity.

Due to its unique electrical properties, graphene stands out as a promising candidate for carbon-based biosensing devices. Recent studies have shown the successful deployment of graphene biosensors for monitoring the electrical activity of cardiomyocytes, or heart cells, outside of the cells, or in other words, extracellular recordings of action potentials. Intracellular recordings, on the other hand, have remained limited due to ineffective tools...until now.

"Our aim is to record the whole orchestra -- to see all the ionic currents that cross the cell membrane -- not just the subset of the orchestra shown by extracellular recordings," explains Cohen-Karni. "Adding the dynamic dimension of intracellular recordings is fundamentally important for drug screening and toxicity assay, but this is just one important aspect of our work."

"The rest is the technology advancement," Cohen-Karni continues. "3DFG is cheap, flexible and an all-carbon platform; no metals involved. We can generate wafer-sized electrodes of this material to enable multi-site intracellular recordings in a matter of seconds, which is a significant enhancement from an existing tool, like a patch clamp, which requires hours of time and expertise."

So, how does it work? Leveraging a technique developed by Michele Dipalo and Francesco De Angelis, researchers at Istituto Italiano di Tecnologia, an ultra-fast laser is used to access the cell membrane. By shining short pulses of laser light onto the 3DFG electrode, an area of the cell membrane becomes transiently porous, allowing the electrical activity within the cell to be recorded. The cardiomyocytes are then cultured to further investigate interactions between the cells.

Interestingly, 3DFG is black and absorbs most of the light, resulting in unique optical properties. Combined with its foam-like structure and enormous exposed surface area, 3DFG has many desirable traits that are needed to make small biosensors.

"We have developed a smarter electrode; an electrode that allows us better access," emphasizes Cohen-Karni. "The biggest advantage from my end is that we can have access to this signal richness, to be able to look into processes of intracellular importance. Having a tool like this will revolutionize the way we can investigate effects of therapeutics on terminal organs, such as the heart."

Read more at Science Daily

Why some of us are hungry all the time

 New research shows that people who experience big dips in blood sugar levels, several hours after eating, end up feeling hungrier and consuming hundreds more calories during the day than others.

In a study published today in Nature Metabolism from PREDICT, the largest ongoing nutritional research program in the world looking at responses to food in real-life settings, a research team from King's College London and health science company ZOE (including scientists from Harvard Medical School, Harvard T.H. Chan School of Public Health, Massachusetts General Hospital, the University of Nottingham, Leeds University, and Lund University in Sweden) found why some people struggle to lose weight, even on calorie-controlled diets, and highlighted the importance of understanding personal metabolism when it comes to diet and health.

The research team collected detailed data about blood sugar responses and other markers of health from 1,070 people after eating standardized breakfasts and freely chosen meals over a two-week period, adding up to more than 8,000 breakfasts and 70,000 meals in total. The standard breakfasts were based on muffins containing the same number of calories but varying in composition in terms of carbohydrates, protein, fat and fibre. Participants also carried out an oral glucose tolerance test, drinking a standard glucose solution after an overnight fast, to measure how well their body processes sugar.

Participants wore stick-on continuous glucose monitors (CGMs) to measure their blood sugar levels over the entire duration of the study, as well as a wearable device to monitor activity and sleep. They also recorded levels of hunger and alertness using a phone app, along with exactly when and what they ate over the day.

Previous studies looking at blood sugar after eating have focused on the way that levels rise and fall in the first two hours after a meal, known as a blood sugar peak. However, after analyzing the data, the PREDICT team noticed that some people experienced significant 'sugar dips' 2-4 hours after this initial peak, where their blood sugar levels fell rapidly below baseline before coming back up.

Big dippers had a 9% increase in hunger, and waited around half an hour less, on average, before their next meal than little dippers, even though they ate exactly the same meals.

Big dippers also ate 75 more calories in the 3-4 hours after breakfast and around 312 calories more over the whole day than little dippers. This kind of pattern could potentially turn into 20 pounds of weight gain over a year.
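For a sense of scale, here is a minimal back-of-envelope sketch, assuming the commonly quoted (and admittedly crude) figure of roughly 3,500 kcal per pound of body fat; dynamic weight models that account for rising energy expenditure predict smaller gains, closer to the figure above.

    # Back-of-envelope only: converts a daily calorie surplus into a yearly weight change
    # using the crude 3,500 kcal-per-pound rule of thumb (real physiology is more forgiving).
    daily_surplus_kcal = 312      # extra daily intake reported for "big dippers"
    kcal_per_pound = 3500         # rule-of-thumb energy content of a pound of body fat
    pounds_per_year = daily_surplus_kcal * 365 / kcal_per_pound
    print(f"~{pounds_per_year:.0f} lb/year as a crude upper bound")  # prints ~33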

Dr Sarah Berry from King's College London said, "It has long been suspected that blood sugar levels play an important role in controlling hunger, but the results from previous studies have been inconclusive. We've now shown that sugar dips are a better predictor of hunger and subsequent calorie intake than the initial blood sugar peak response after eating, changing how we think about the relationship between blood sugar levels and the food we eat."

Professor Ana Valdes from the School of Medicine at the University of Nottingham, who led the study team, said: "Many people struggle to lose weight and keep it off, and just a few hundred extra calories every day can add up to several pounds of weight gain over a year. Our discovery that the size of sugar dips after eating has such a big impact on hunger and appetite has great potential for helping people understand and control their weight and long-term health."

Comparing what happens when participants eat the same test meals revealed large variations in blood sugar responses between people. The researchers also found no correlation between age, bodyweight or BMI and being a big or little dipper, although males had slightly larger dips than females on average.

There was also some variability in the size of the dips experienced by each person in response to eating the same meals on different days, suggesting that whether you're a dipper or not depends on individual differences in metabolism, as well as the day-to-day effects of meal choices and activity levels.

Choosing foods that work together with your unique biology could help people feel fuller for longer and eat less overall.

Lead author on the study, Patrick Wyatt from ZOE, notes, "This study shows how wearable technology can provide valuable insights to help people understand their unique biology and take control of their nutrition and health. By demonstrating the importance of sugar dips, our study paves the way for data-driven, personalized guidance for those seeking to manage their hunger and calorie intake in a way that works with rather than against their body."

Read more at Science Daily

Gene therapy shows promise in treating rare eye disease in mice

 A gene therapy protects eye cells in mice with a rare disorder that causes vision loss, especially when used in combination with other gene therapies, shows a study published today in eLife.

The findings suggest that this therapy, whether used alone or in combination with other gene therapies that boost eye health, may offer a new approach to preserving vision in people with retinitis pigmentosa or other conditions that cause vision loss.

Retinitis pigmentosa is a slowly progressive disease, which begins with the loss of night vision due to genetic lesions that affect rod photoreceptors -- cells in the eyes that sense light when it is low. These photoreceptors die because of their intrinsic genetic defects. This then impacts cone photoreceptors, the eye cells that detect light during the day, which leads to the eventual loss of daylight vision. One theory about why cones die concerns the loss of nutrient supply, especially glucose.

Scientists have developed a few targeted gene therapies to help individuals with certain mutations that affect the photoreceptors, but no treatments are currently available that would be effective for a broad set of families with the disease. "A gene therapy that would preserve photoreceptors in people with retinitis pigmentosa regardless of their specific genetic mutation would help many more patients," says lead author Yunlu Xue, Postdoctoral Fellow at senior author Constance Cepko's lab, Harvard Medical School, Boston, US.

To find a widely effective gene therapy for the disease, Xue and colleagues screened 20 potential therapies in mouse models with the same genetic deficits as humans with retinitis pigmentosa. The team chose the therapies based on the effects they have on sugar metabolism.

Their experiments showed that using a virus carrier to deliver a gene called Txnip was the most effective approach in treating the condition across three different mouse models. A version of Txnip called C247S worked especially well, as it helped the cone photoreceptors switch to using alternative energy sources and improved mitochondria health in the cells.

The team then showed that giving the mice gene therapies that reduced oxidative stress and inflammation, along with Txnip gene therapy, provided additional protection for the cells. Further studies are now needed to confirm whether this approach would help preserve vision in people with retinitis pigmentosa.

Read more at Science Daily

Apr 13, 2021

Study warns of 'oxygen false positives' in search for signs of life on other planets

 In the search for life on other planets, the presence of oxygen in a planet's atmosphere is one potential sign of biological activity that might be detected by future telescopes. A new study, however, describes several scenarios in which a lifeless rocky planet around a sun-like star could evolve to have oxygen in its atmosphere.

The new findings, published April 13 in AGU Advances, highlight the need for next-generation telescopes that are capable of characterizing planetary environments and searching for multiple lines of evidence for life in addition to detecting oxygen.

"This is useful because it shows there are ways to get oxygen in the atmosphere without life, but there are other observations you can make to help distinguish these false positives from the real deal," said first author Joshua Krissansen-Totton, a Sagan Fellow in the Department of Astronomy and Astrophysics at UC Santa Cruz. "For each scenario, we try to say what your telescope would need to be able to do to distinguish this from biological oxygen."

In the coming decades, perhaps by the late 2030s, astronomers hope to have a telescope capable of taking images and spectra of potentially Earth-like planets around sun-like stars. Coauthor Jonathan Fortney, professor of astronomy and astrophysics and director of UCSC's Other Worlds Laboratory, said the idea would be to target planets similar enough to Earth that life might have emerged on them and characterize their atmospheres.

"There has a been a lot of discussion about whether detection of oxygen is 'enough' of a sign of life," he said. "This work really argues for needing to know the context of your detection. What other molecules are found in addition to oxygen, or not found, and what does that tell you about the planet's evolution?"

This means astronomers will want a telescope that is sensitive to a broad range of wavelengths in order to detect different types of molecules in a planet's atmosphere.

The researchers based their findings on a detailed, end-to-end computational model of the evolution of rocky planets, starting from their molten origins and extending through billions of years of cooling and geochemical cycling. By varying the initial inventory of volatile elements in their model planets, the researchers obtained a surprisingly wide range of outcomes.

Oxygen can start to build up in a planet's atmosphere when high-energy ultraviolet light splits water molecules in the upper atmosphere into hydrogen and oxygen. The lightweight hydrogen preferentially escapes into space, leaving the oxygen behind. Other processes can remove oxygen from the atmosphere. Carbon monoxide and hydrogen released by outgassing from molten rock, for example, will react with oxygen, and weathering of rock also mops up oxygen. These are just a few of the processes the researchers incorporated into their model of the geochemical evolution of a rocky planet.
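The paragraph above is essentially a budget of oxygen sources and sinks. As a rough illustration only (this is not the authors' model, and the rates below are hypothetical placeholders), a toy box model of that bookkeeping might look like this:

    # Toy box model of abiotic O2 bookkeeping (illustrative only; not the authors' model).
    # Rates are hypothetical placeholders in arbitrary units per year.
    def oxygen_history(years, escape_source, reducing_sink):
        """Integrate a single atmospheric O2 reservoir with one source and one sink."""
        o2, dt, history = 0.0, 1e6, []            # start with no O2; 1-million-year steps
        for _ in range(int(years / dt)):
            net = escape_source - reducing_sink   # O2 left by H escape minus outgassing/weathering uptake
            o2 = max(0.0, o2 + net * dt)
            history.append(o2)
        return history

    # Earth-like case: sinks keep pace with the source, so abiotic O2 never accumulates.
    earthlike = oxygen_history(4e9, escape_source=1.0, reducing_sink=1.2)
    # Waterworld-like case: deep oceans suppress weathering and outgassing, so O2 builds up.
    waterworld = oxygen_history(4e9, escape_source=1.0, reducing_sink=0.1)
    print(earthlike[-1], waterworld[-1])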

"If you run the model for Earth, with what we think was the initial inventory of volatiles, you reliably get the same outcome every time -- without life you don't get oxygen in the atmosphere," Krissansen-Totton said. "But we also found multiple scenarios where you can get oxygen without life."

For example, a planet that is otherwise like Earth but starts off with more water will end up with very deep oceans, putting immense pressure on the crust. This effectively shuts down geological activity, including all of the processes such as melting or weathering of rocks that would remove oxygen from the atmosphere.

In the opposite case, where the planet starts off with a relatively small amount of water, the magma surface of the initially molten planet can freeze quickly while the water remains in the atmosphere. This "steam atmosphere" puts enough water in the upper atmosphere to allow accumulation of oxygen as the water breaks up and hydrogen escapes.

"The typical sequence is that the magma surface solidifies simultaneously with water condensing out into oceans on the surface," Krissansen-Totton said. "On Earth, once water condensed on the surface, escape rates were low. But if you retain a steam atmosphere after the molten surface has solidified, there's a window of about a million years when oxygen can build up because there are high water concentrations in the upper atmosphere and no molten surface to consume the oxygen produced by hydrogen escape."

A third scenario that can lead to oxygen in the atmosphere involves a planet that is otherwise like Earth but starts off with a higher ratio of carbon dioxide to water. This leads to a runaway greenhouse effect, making it too hot for water to ever condense out of the atmosphere onto the surface of the planet.

"In this Venus-like scenario, all the volatiles start off in the atmosphere and few are left behind in the mantle to be outgassed and mop up oxygen," Krissansen-Totton said.

He noted that previous studies have focused on atmospheric processes, whereas the model used in this study explores the geochemical and thermal evolution of the planet's mantle and crust, as well as the interactions between the crust and atmosphere.

"It's not computationally intensive, but there are a lot of moving parts and interconnected processes," he said.

Read more at Science Daily

Study shows how the brain retrieves facts and may help people with memory problems

 A shared set of systems in the brain may play an important role in controlling the retrieval of facts and personal memories utilised in everyday life, new research shows.

Scientists from the University of York say their findings may have relevance to memory disorders, including dementia, where problems remembering relevant information can impact on the daily life of patients.

Researchers say the findings may also have important implications for the development of a new generation of artificial intelligence systems, which use long-term memory in solving computational problems.

The brain's long-term memory stores are categorised into two: factual memory and memory of personal experiences.

Together, these two long-term memory stores help us understand and respond to the world around us.

Decades of clinical and experimental research has shown that these two memory stores are represented across two separate brain regions.

But the new study suggests that a shared set of brain regions plays an important role in controlling the successful retrieval of weak memories.

Using functional magnetic resonance imaging, the researchers showed that these regions increased their activity when participants were asked to retrieve both factual and personal memories.

Lead researcher Dr Deniz Vatansever, formerly of the University of York and now working for the Institute of Science and Technology for Brain-inspired Intelligence, Fudan University said: "The new research suggests that despite their functional differences, successfully retrieving weak information from these two memory systems might be dependent upon a shared brain mechanism.

"Our memories allow us to make sense and flexibly interact with the world around us. Although in most cases, our strongly encoded memories might be sufficient for the task at hand, remembering to pack a beach towel for an upcoming seaside holiday, this strong memory may be irrelevant in other instances, such as when packing for a business trip. As such, we need to tightly control the retrieval of relevant memories to solve different tasks under different circumstances. Our results indicate that this control process might be shared across both factual and personal memory types."

Senior author Prof. Elizabeth Jefferies from the Department of Psychology, University of York, said: "In order to generate appropriate thoughts and behaviours, we have to draw on our memory stores in a highly flexible way. This new study highlights control processes within the brain that allow us to focus on unusual aspects of the meanings of words and to retrieve weakly encoded personal experiences. This control over memory allows us to be creative and to adapt as our goals or circumstances change."

Read more at Science Daily

People may trust computers more than humans

 Despite increasing concern over the intrusion of algorithms in daily life, people may be more willing to trust a computer program than their fellow humans, especially if a task becomes too challenging, according to new research from data scientists at the University of Georgia.

From choosing the next song on your playlist to choosing the right size pants, people are relying more on the advice of algorithms to help make everyday decisions and streamline their lives.

"Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day," said Eric Bogert, a Ph.D. student in the Terry College of Business Department of Management Information Systems. "It seems like there's a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people."

Bogert worked with management information systems professor Rick Watson and assistant professor Aaron Schecter on the paper, "Humans rely more on algorithms than social influence as a task becomes more difficult," which was published April 13 in Nature's Scientific Reports journal.

Their study, which involved 1,500 individuals evaluating photographs, is part of a larger body of work analyzing how and when people work with algorithms to process information and make decisions.

For this study, the team asked volunteers to count the number of people in a photograph of a crowd and supplied suggestions that were generated by a group of other people and suggestions generated by an algorithm.

As the number of people in the photograph expanded, counting became more difficult and people were more likely to follow the suggestion generated by an algorithm rather than count themselves or follow the "wisdom of the crowd," Schecter said.

Schecter explained that the choice of counting as the trial task was an important one because the number of people in the photo makes the task objectively harder as it increases. It also is the type of task that laypeople expect computers to be good at.

"This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects," Schecter said. "One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there -- like income and credit score -- so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren't considered."

Facial recognition and hiring algorithms have come under scrutiny in recent years as well because their use has revealed cultural biases in the way they were built, which can cause inaccuracies when matching faces to identities or screening for qualified job candidates, Schecter said.

Those biases may not be present in a simple task like counting, but their presence in other trusted algorithms is a reason why it's important to understand how people rely on algorithms when making decisions, he added.

This study was part of Schecter's larger research program into human-machine collaboration, which is funded by a $300,000 grant from the U.S. Army Research Office.

"The eventual goal is to look at groups of humans and machines making decisions and find how we can get them to trust each other and how that changes their behavior," Schecter said. "Because there's very little research in that setting, we're starting with the fundamentals."

Read more at Science Daily

Simple genetic modification aims to stop mosquitoes spreading malaria

Altering mosquitoes' gut genes to make them spread antimalarial genes to the next generation of their species shows promise as an approach to curb malaria, suggests a preliminary study published today in eLife.

The study is the latest in a series of steps toward using CRISPR-Cas9 gene-editing technology to make changes in mosquito genes that could reduce their ability to spread malaria. If further studies support this approach, it could provide a new way to reduce illnesses and deaths caused by malaria.

Growing mosquito resistance to pesticides, as well as malaria parasite resistance to antimalarial drugs, has created an urgent need for new ways to fight the disease. Gene drives are being tested as a new approach. They work by creating genetically modified mosquitoes that, when released into the environment, would spread genes that either reduce mosquito populations or make the insects less likely to spread the malaria parasite. But scientists must prove that this approach is safe and effective before releasing genetically modified mosquitoes into the wild.

"Gene drives are promising tools for malaria control," says first author Astrid Hoermann, Research Associate at Imperial College London, UK. "But we wanted a clear pathway for safely testing such tools in countries where the disease most commonly occurs."

In the study, Hoermann and colleagues genetically modified the malaria-transmitting mosquito Anopheles gambiae. They used CRISPR-Cas9 technology to insert a gene that encodes an antimalarial protein amidst genes that are turned on after the mosquito eats a blood meal. The team did this in a manner that allowed the whole section of DNA to also function as a gene drive that could be passed on to most of the mosquitoes' offspring. They initially inserted the gene, together with a fluorescent marker to help them track it, at three different spots in the DNA, and later removed the marker, leaving only a minor genetic modification behind.

Next, the team bred the mosquitoes to see if they were able to successfully reproduce and remain healthy. They also tested how well the malaria parasite developed in the mosquitoes' guts. Their experiments provide preliminary evidence that this approach to genetic modifications could create successful gene drives.

Read more at Science Daily

Apr 12, 2021

Life on Venus? First we need to know more about molecules in the atmosphere

The search for life on other planets has received a major boost after scientists revealed the spectral signatures of almost 1,000 atmospheric molecules that may be involved in the production or consumption of phosphine, in a study led by UNSW Sydney.

Scientists have long conjectured that phosphine -- a chemical compound made of one phosphorus atom surrounded by three hydrogen atoms (PH3) -- may indicate evidence of life if found in the atmospheres of small rocky planets like our own, where it is produced by the biological activity of bacteria.

So when an international team of scientists last year claimed to have detected phosphine in the atmosphere of Venus, it raised the tantalising prospect of the first evidence of life on another planet -- albeit the primitive, single-celled variety.

But not everyone was convinced, with some scientists questioning whether the phosphine in Venus's atmosphere was really produced by biological activity, or whether phosphine was detected at all.

Now an international team, led by UNSW Sydney scientists, has made a key contribution to this and any future searches for life on other planets by demonstrating how an initial detection of a potential biosignature must be followed by searches for related molecules.

In a paper published today in the journal Frontiers in Astronomy and Space Sciences, they described how the team used computer algorithms to produce a database of approximate infrared spectral barcodes for 958 molecular species containing phosphorus.

LOOK AND LEARN

As UNSW School of Chemistry's Dr Laura McKemmish explains, when scientists look for evidence of life on other planets, they don't need to go into space; they can simply point a telescope at the planet in question.

"To identify life on a planet, we need spectral data," she says.

"With the right spectral data, light from a planet can tell you what molecules are in the planet's atmosphere."

Phosphorus is an essential element for life, yet up until now, she says, astronomers could only look for one polyatomic phosphorus-containing molecule, phosphine.

"Phosphine is a very promising biosignature because it is only produced in tiny concentrations by natural processes. However, if we can't trace how it is produced or consumed, we can't answer the question of whether it is unusual chemistry or little green men who are producing phosphine on a planet," says Dr McKemmish.

To provide insight, Dr McKemmish brought together a large interdisciplinary team to understand how phosphorus behaves chemically, biologically and geologically and ask how this can be investigated remotely through atmospheric molecules alone.

"What was great about this study is that it brought together scientists from disparate fields -- chemistry, biology, geology -- to address these fundamental questions around the search for life elsewhere that one field alone could not answer," says astrobiologist and co-author on the study, Associate Professor Brendan Burns.

Dr McKemmish continues: "At the start, we looked for which phosphorus-bearing molecules -- what we called P-molecules -- are most important in atmospheres but it turns out very little is known. So we decided to look at a large number of P-molecules that could be found in the gas-phase which would otherwise go undetected by telescopes sensitive to infrared light."

Barcode data for new molecular species are normally produced for one molecule at a time, Dr McKemmish says, a process that often takes years. But the team involved in this research used what she calls "high-throughput computational quantum chemistry" to predict the spectra of 958 molecules within only a couple of weeks.

"Though this new dataset doesn't yet have the accuracy to enable new detections, it can help prevent misassignments by highlighting the potential for multiple molecular species having similar spectral barcodes -- for example, at low resolution with some telescopes, water and alcohol could be indistinguishable.

"The data can also be used to rank how easy a molecule is to detect. For example, counter-intuitively, alien astronomers looking at Earth would find it much easier to detect 0.04% CO2 in our atmosphere than the 20% O2. This is because CO2 absorbs light much more strongly than O2 -- this is actually what causes the greenhouse effect on Earth."

LIFE ON EXOPLANETS

Regardless of the outcomes from the debate about the existence of phosphine in Venus's atmosphere and the potential signs of life on the planet, this recent addition to the knowledge of what can be detected using telescopes will be important in the detection of potential signs of life on exoplanets -- planets in other solar systems.

"The only way we're going to be able to look at exoplanets and see whether there's life there is to use spectral data collected by telescopes -- that is our one and only tool," says Dr McKemmish.

"Our paper provides a novel scientific approach to following up the detection of potential biosignatures and has relevance to the study of astrochemistry within and outside the Solar System," says Dr McKemmish. "Further studies will rapidly improve the accuracy of the data and expand the range of molecules considered, paving the way for its use in future detections and identifications of molecules."

Fellow co-author and CSIRO astronomer Dr Chenoa Tremblay says the team's contribution will be beneficial as more powerful telescopes come online in the near future.

"This information has come at a critical time in astronomy," she says.

"A new infrared telescope called the James Web Space Telescope is due to launch later this year and it will be far more sensitive and cover more wavelengths than its predecessors like the Herschel Space Observatory. We will need this information at a very rapid rate to identify new molecules in the data."

She says although the team's work was focused on the vibrational motions of molecules detected with telescopes sensitive to infrared light, they are currently working to extend the technique to the radio wavelengths as well.

Read more at Science Daily

Search for sterile neutrinos: It's all about a bend in the curve

 There are many questions surrounding the elementary particle neutrino, in particular regarding its mass. Physicists are also interested in whether besides the "classic" neutrinos there are variants such as the so-called sterile neutrinos. The KATRIN experiment has now succeeded in strongly narrowing the search for these elusive particles. The publication appeared recently in the journal Physical Review Letters.

Strictly speaking, the neutrino is not a single particle but rather comprises several species: the electron neutrino, the muon neutrino, and the tau neutrino. These particles are constantly transforming into each other in a process referred to as neutrino oscillation. It is assumed that neutrinos have mass; this is to be determined in the KATRIN experiment, which started in 2019 at the Karlsruhe Institute of Technology (KIT). According to the results to date, the neutrino has a mass of less than 1 electron volt.

KATRIN could also be used to track down related species that have so far only been hypothetical: sterile neutrinos. The heavier variant (with a mass in the kiloelectronvolt range) is considered a candidate for dark matter and will be searched for once a new detector is installed in KATRIN. Besides this, there could also be a lighter type of sterile neutrino.

New exclusion criteria for the light sterile neutrino

Quite a few experiments are looking for light sterile neutrinos (with a mass in the electronvolt range). Such a particle could also reveal itself in the KATRIN experiment. The mass and the mixing ratio of active (normal) and sterile neutrinos play an essential role in the search for the light sterile neutrino.

Susanne Mertens and her team at the Max Planck Institute for Physics (MPP) succeeded in defining new exclusion limits with the help of KATRIN. "With our evaluations, we were able to significantly reduce the search area for this neutrino," says Mertens.

With the new analysis of the KATRIN data, developed by the group of Susanne Mertens and Thierry Lasserre at MPP, the existence of sterile neutrinos with a mass between about 3 and 30 electronvolts and a mixing ratio greater than 10% can now be ruled out. This result complements previously achieved exclusion limits.

Search by measuring the neutrino mass

But how can KATRIN find sterile neutrinos? With the same method the experiment uses to determine the mass of the active neutrino: the neutrino mass can be inferred from radioactive beta decay. KATRIN uses tritium, a heavy isotope of hydrogen, for this purpose. When a neutron in the tritium nucleus converts into a proton, an electron and a neutrino (strictly, an electron antineutrino) are emitted. The decay energy of 18.6 kiloelectronvolts is divided between them.

"We know that the neutrino is extremely light and receives only a tiny fraction of the decay energy," says Mertens. "The maximum energy of the electron is reduced by the mass of the neutrino." The mass of the neutrino therefore results from the difference between the decay energy and the maximum energy of the electron.

Read more at Science Daily

Major risk of injury for recreational runners

 Almost half of all recreational runners incur injuries, mostly relating to knees, calves or Achilles tendons, and the level of risk is equally high whatever your age, gender or running experience. These are the findings of a thesis in sport science.

Doctoral student Jonatan Jungmalm recruited a little over 200 recreational runners from the list of entrants for the Göteborgsvarvet Half Marathon and monitored them over a period of one year. To take part in the study, they had to have been running for at least a year, have run an average of at least 15 km per week over the past year and have been injury free for at least six months. The participants were men and women in the age range 18-55.

Calculation shows injury for half of runners

Over the year of the study, the recreational runners filled in a training diary, entering information about how far they ran each day and whether they felt any pain. Those who suffered sudden injury or felt pain for a prolonged period were examined by a sports doctor.

"A third of the participants were injured over the course of the study. But if you also take account of the participants who dropped out of the study, it is reasonable to assume that almost half of all recreational runners injure themselves in a year," states Jonatan Jungmalm.

Jonatan used a particular statistical method to calculate the proportion of injured runners, taking into account the rate of dropout that is common in studies based on voluntary participation.
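The summary does not spell the method out, but dropout is commonly handled with a time-to-event estimator (Kaplan-Meier style), in which runners who drop out are censored rather than counted as uninjured. A minimal sketch under that assumption, with made-up example data:

    # Kaplan-Meier-style estimate of one-year injury incidence with censored dropouts.
    # The example data are made up for illustration, not taken from the thesis.
    def injury_incidence(events):
        """events: list of (week, injured) pairs; injured=False means the runner dropped out."""
        events = sorted(events)                         # process runners in time order
        at_risk = len(events)
        injury_free = 1.0                               # running probability of staying injury-free
        for _, injured in events:
            if injured:
                injury_free *= (at_risk - 1) / at_risk  # an injury shrinks the risk set
            at_risk -= 1                                # dropouts leave the risk set without an event
        return 1.0 - injury_free                        # cumulative injury incidence

    sample = [(5, True), (10, False), (20, True), (30, False), (40, True), (52, False)]
    print(f"Estimated one-year injury incidence: {injury_incidence(sample):.0%}")  # higher than the raw 3/6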

Injuries to knee, calf and Achilles tendon

Of those hit by injury, half had problems with their knees, calves or Achilles tendons.

"Few of the injuries were long-lasting. But all the injuries prevented the runners from exercising as usual," says Jonatan.

No difference was found in terms of gender, age, running experience or weight between those who injured themselves and those who did not.

"However, those who had previously been injured were more likely to be affected again."

Multiple physical tests

All the participants were put through a series of physical tests before the study, ranging from strength tests and mobility tests to tests of running style.

"Those who had relatively weak outer thighs faced a higher risk of injury. Those with late pronation in their running gait were also at higher risk. However, having a weak torso or limited muscle flexibility was of no great significance."

Read more at Science Daily

Prehistoric Pacific Coast diets had salmon limits

 Humans cannot live on protein alone -- even for the ancient indigenous people of the Pacific Northwest whose diet was once thought to be almost all salmon.

In a new paper led by Washington State University anthropologist Shannon Tushingham, researchers document the many dietary solutions ancient Pacific Coast people in North America likely employed to avoid "salmon starvation," a toxic and potentially fatal condition brought on by eating too much lean protein.

"Salmon was a critical resource for thousands of years throughout the Pacific Rim, but there were a lot of foods that were important," said Tushingham the lead author of the paper published online on April 8 in the American Journal of Physical Anthropology. "Native people were not just eating salmon. There's a bigger picture."

Some archeologists have contended for years that prehistoric Northwest people had an "extreme salmon specialization," a theory primarily based on the amount of salmon bone found at archeological sites.

Tushingham and her co-authors argue that such a protein-intensive diet would be unsustainable. They point to nutritional studies and a global database of hunter-gatherer diets that indicate people have a dietary limit on lean protein of around 35% of their calories. While it can vary by individual, exceeding that ceiling can be physically debilitating within a few days and fatal within weeks. Early explorers in the U.S. West subsisting on lean wild game discovered this problem the hard way and called it "rabbit starvation" or "caribou sickness."
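To put that ceiling in concrete terms, a small illustrative calculation (the 2,500 kcal daily intake is a hypothetical figure, not from the study; protein's standard energy value is about 4 kcal per gram):

    # Illustrative arithmetic only: what a ~35% lean-protein ceiling means in grams per day.
    daily_kcal = 2500                  # hypothetical daily energy intake
    protein_ceiling_fraction = 0.35    # approximate ceiling cited by the authors
    kcal_per_gram_protein = 4          # standard energy value of protein
    max_protein_g = daily_kcal * protein_ceiling_fraction / kcal_per_gram_protein
    print(f"~{max_protein_g:.0f} g of protein per day before the ceiling is exceeded")  # ~219 g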

This toxic situation can apply to any lean meat, including salmon, Tushingham said. To avoid "salmon starvation," early Pacific Coast people had to find ways to get other nutrients, especially for children and nursing mothers who have even lower dietary thresholds for lean protein.

"There were ingenious nutritional and cultural solutions to the circumstances in the Northwest," said Tushingham. "Yes, salmon was important, but it wasn't that simple. It wasn't just a matter of going fishing and getting everything they needed. They also had to think about balancing their diet and making sure everybody could make it through the winter."

The researchers point to evidence in California that people offset stored salmon protein with acorns; in Oregon and Washington, they ate root crops like camas as well as more fat-heavy fish such as eulachon. Further north, where plants are more limited, communities often ate marine mammals with high fat content such as seals and walrus. In the far northern interior, where there are few plants and the salmon runs can go thousands of miles inland, this was particularly challenging. Lean dried salmon was an important food source, and people circumvented salmon starvation by trading for oil with coastal peoples or obtaining fat by processing bone marrow from caribou and elk.

The authors focus on the limits of salmon, which used to be considered a "prime mover" of Pacific Northwest populations, but their analysis also has implications for the study of historical human nutrition. If their argument is correct, it is unlikely that any human society was fully driven by pursuit of protein alone as their diets had to be more complex.

Read more at Science Daily

Apr 11, 2021

Curiosity rover explores stratigraphy of Gale crater

Gale Crater's central sedimentary mound (Aeolis Mons or, informally, Mount Sharp) is a 5.5-km-tall remnant of the infilling and erosion of this ancient impact crater. Given its thickness and age, Mount Sharp preserves one of the best records of early Martian climatic, hydrological, and sedimentary history.

In this paper, published today in Geology, William Rapin and colleagues present the first description of key facies in the sulfate-bearing unit, recently observed in the distance by the rover, and propose a model for changes in depositional environments.

The basal part of this sedimentary sequence is ahead of the Curiosity rover traverse and was recently analyzed with unprecedented resolution by the rover cameras. The telescopic imager of the ChemCam instrument was used here in particular, and its images show sedimentary structures that reveal evolution of environments on Mars during the Hesperian age (3.7-2.9 billion years ago).

Analysis of the structures shows that on top of the ancient lake deposits currently explored by the rover (Murray formation), vast aeolian deposits were formed by a dune field during a prolonged dry climatic episode. Yet, higher up, the stratigraphy reveals the resumption of wetter climatic conditions.

The climate of Mars therefore appears to have fluctuated several times between dry and wet conditions during the Hesperian age, a period in which Mars' environment is thought to have changed globally due to the gradual loss of its atmosphere to space.

From Science Daily

Thinking with your stomach? The brain may have evolved to regulate digestion

Many life forms use light as an important biological signal, including animals with visual and non-visual systems. But now, researchers from Japan have found that neuronal cells may have initially evolved to regulate digestion according to light information.

In a study published this month in BMC Biology, researchers from the University of Tsukuba have revealed that sea urchins use light to regulate the opening and closing of the pylorus, which is an important component of the digestive tract.

Light-dependent systems often rely on the activity of proteins in the Opsin family, and these are found across the animal kingdom, including in organisms with visual and non-visual systems. Understanding the function of Opsins in animals from different taxonomic groups may provide important clues regarding how visual/non-visual systems evolved in different creatures to use light as an external signal. The function of Opsins in the Ambulacraria group of animals, which includes sea urchins, had not been characterized, something the researchers aimed to address.

"The functions of eyes and visual systems have been well-characterized," says senior author of the study Professor Shunsuke Yaguchi. "However, the way in which light dependent systems were acquired and diversified throughout evolution is unclear especially in deuterostomes because of the lack of data regarding the signaling pathway in the Ambulacraria group."

To address this, the researchers tested whether light exposure caused changes in digestive tract activity in sea urchins. They then conducted micro-surgical and genetic knockdown experiments to test whether Opsin cells in the sea urchin digestive system mediated the effect of light.

"The results provided new information about the role of Opsins in sea urchins," explains Professor Yaguchi. "Specifically, we found that stimulation of sea urchin larvae via light caused changes in digestive system function, even in the absence of food stimuli."

Furthermore, the researchers identified brain serotonergic neurons near the Opsin-expressing cells that were essential for mediating the light-stimulated release of nitric oxide, which acts as a neurotransmitter.

"Our results have important implications for understanding the process of evolution, specifically, that of light-dependent systems controlled via neurotransmitters," says Professor Yaguchi.

Read more at Science Daily