May 14, 2022

New study indicates limited water circulation late in the history of Mars

A research team led by Lund University in Sweden has investigated a meteorite from Mars using neutron and X-ray tomography. The technology, which will probably be used when NASA examines samples from the Red Planet in 2030, showed that the meteorite had limited exposure to water, thus making life at that specific time and place unlikely.

In a cloud of dust, NASA's spacecraft Perseverance parachuted onto the surface of Mars in February 2021. For several years, the vehicle will rove around and take samples to try to answer the question David Bowie posed in Life on Mars in 1971. It isn't until around 2030 that NASA actually intends to send the samples back to Earth, but material from Mars is already being studied -- in the form of meteorites. In a new study published in Science Advances, an international research team has studied an approximately 1.3 billion-year-old meteorite using advanced scanning.

"Since water is central to the question of whether life ever existed on Mars, we wanted to investigate how much of the meteorite reacted with water when it was still part of the Mars bedrock," explains Josefin Martell, geology doctoral student at Lund University.

To answer the question of whether there was any major hydrothermal system, which is generally a favorable environment for life to occur, the researchers used neutron and X-ray tomography. X-ray tomography is a common method of examining an object without damaging it. Neutron tomography was used because neutrons are very sensitive to hydrogen.

This means that if a mineral contains hydrogen, it is possible to study it in three dimensions and see where in the meteorite the hydrogen is located. Hydrogen (H) is always of interest when scientists study material from Mars, because water (H2O) is a prerequisite for life as we know it. The results show that a fairly small part of the sample seems to have reacted with water, and that it therefore probably wasn't a large hydrothermal system that gave rise to the alteration.
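
To see why neutrons single out hydrogen, it helps to remember that transmission imaging follows the Beer-Lambert law, I = I0 exp(-μx), and hydrogen-bearing minerals have a much larger neutron attenuation coefficient μ than dry ones. The short Python sketch below illustrates the contrast; both coefficients are illustrative placeholders, not values measured for this meteorite.

```python
import numpy as np

# Beer-Lambert attenuation: I = I0 * exp(-mu * x). Hydrogen scatters
# neutrons strongly, so hydrated minerals attenuate the beam far more.
# Both coefficients are illustrative placeholders, not measured values
# for any Martian mineral.

def transmission(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of the neutron beam that passes through the sample."""
    return float(np.exp(-mu_per_cm * thickness_cm))

thickness = 1.0       # cm of sample
mu_dry = 0.3          # hypothetical anhydrous mineral, 1/cm
mu_hydrated = 3.0     # hypothetical hydrogen-bearing mineral, 1/cm

print(f"dry mineral:      {transmission(mu_dry, thickness):.1%} transmitted")
print(f"hydrated mineral: {transmission(mu_hydrated, thickness):.1%} transmitted")
```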

"A more probable explanation is that the reaction took place after small accumulations of underground ice melted during a meteorite impact about 630 million years ago. Of course, that doesn't mean that life couldn't have existed in other places on Mars, or that there couldn't have been life at other times," says Josefin Martell.

The researchers hope that the results of their study will be helpful when NASA brings back the first samples from Mars around 2030, and there are many reasons to believe that the current technology with neutron and X-ray tomography will be useful when this happens.

Read more at Science Daily

How sleep helps to process emotions

Researchers at the Department of Neurology of the University of Bern and University Hospital Bern have identified how the brain triages emotions during dream sleep to consolidate the storage of positive emotions while dampening the consolidation of negative ones. The work underscores the importance of sleep for mental health and opens new avenues for therapeutic strategies.

Rapid eye movement (REM or paradoxical) sleep is a unique and mysterious sleep state during which most dreaming occurs, often with intense emotional content. How and why these emotions are reactivated is unclear. The prefrontal cortex integrates many of these emotions during wakefulness but appears paradoxically quiescent during REM sleep. "Our goal was to understand the underlying mechanism and the functions of such a surprising phenomenon," says Prof. Antoine Adamantidis from the Department of Biomedical Research (DBMR) at the University of Bern and the Department of Neurology at the Inselspital, University Hospital of Bern.

Processing emotions, particularly distinguishing between danger and safety, is critical for the survival of animals. In humans, excessively negative emotions, such as fear reactions and states of anxiety, lead to pathological states like Post-Traumatic Stress Disorders (PTSD). In Europe, roughly 15% of the population is affected by persistent anxiety and severe mental illness. The research group headed by Antoine Adamantidis is now providing insights into how the brain helps to reinforce positive emotions and weaken strongly negative or traumatic emotions during REM sleep. This study was published in the journal Science.

A dual mechanism

The researchers first conditioned mice to recognize auditory stimuli associated with safety and others associated with danger (aversive stimuli). The activity of neurons in the brain of mice was then recorded during sleep-wake cycles. In this way, the researchers were able to map different areas of a cell and determine how emotional memories are transformed during REM sleep.

Neurons are composed of a cell body (soma) that integrates information coming from the dendrites (inputs) and sends signals to other neurons via its axon (output). The results obtained showed that cell somas are kept silent while their dendrites are activated. "This means a decoupling of the two cellular compartments, in other words soma wide asleep and dendrites wide awake," explains Adamantidis. This decoupling is important because the strong activity of the dendrites allows the encoding of both danger and safety emotions, while the inhibition of the soma completely blocks the output of the circuit during REM sleep. In other words, the brain favours the discrimination of safety versus danger in the dendrites but blocks the over-reaction to emotion, in particular danger.

A survival advantage

According to the researchers, the coexistence of both mechanisms is beneficial to the stability and survival of the organisms: "This bi-directional mechanism is essential to optimize the discrimination between dangerous and safe signals," says Mattia Aime from the DBMR, first author of the study. If this discrimination is missing in humans and excessive fear reactions are generated, this can lead to anxiety disorders. The findings are particularly relevant to pathological conditions such as post-traumatic stress disorders, in which trauma is over-consolidated in the prefrontal cortex, day after day during sleep.

Read more at Science Daily

May 13, 2022

Astronomers reveal first image of the black hole at the heart of our galaxy

Today, at simultaneous press conferences around the world, including at the European Southern Observatory (ESO) headquarters in Germany, astronomers have unveiled the first image of the supermassive black hole at the centre of our own Milky Way galaxy. This result provides overwhelming evidence that the object is indeed a black hole and yields valuable clues about the workings of such giants, which are thought to reside at the centre of most galaxies. The image was produced by a global research team called the Event Horizon Telescope (EHT) Collaboration, using observations from a worldwide network of radio telescopes.

The image is a long-anticipated look at the massive object that sits at the very centre of our galaxy. Scientists had previously seen stars orbiting around something invisible, compact, and very massive at the centre of the Milky Way. This strongly suggested that this object -- known as Sagittarius A* (Sgr A*, pronounced "sadge-ay-star") -- is a black hole, and today's image provides the first direct visual evidence of it.

Although we cannot see the black hole itself, because it is completely dark, glowing gas around it reveals a telltale signature: a dark central region (called a shadow) surrounded by a bright ring-like structure. The new view captures light bent by the powerful gravity of the black hole, which is four million times more massive than our Sun.

"We were stunned by how well the size of the ring agreed with predictions from Einstein's Theory of General Relativity," said EHT Project Scientist Geoffrey Bower from the Institute of Astronomy and Astrophysics, Academia Sinica, Taipei. "These unprecedented observations have greatly improved our understanding of what happens at the very centre of our galaxy, and offer new insights on how these giant black holes interact with their surroundings." The EHT team's results are being published today in a special issue of The Astrophysical Journal Letters.

Because the black hole is about 27 000 light-years away from Earth, it appears to us to have about the same size in the sky as a doughnut on the Moon. To image it, the team created the powerful EHT, which linked together eight existing radio observatories across the planet to form a single "Earth-sized" virtual telescope. The EHT observed Sgr A* on multiple nights in 2017, collecting data for many hours in a row, similar to using a long exposure time on a camera.
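
Both size comparisons can be checked with a few lines of arithmetic. The sketch below uses rounded textbook constants, a mass of 4 million Suns, and the standard general-relativistic result that the shadow diameter is about 5.2 Schwarzschild radii; the 10 cm doughnut is an assumption.

```python
import math

# Angular sizes in microarcseconds: the Sgr A* ring versus a doughnut on
# the Moon. The ~5.2 factor is the general-relativistic shadow diameter,
# 2*sqrt(27)*GM/c^2, expressed in Schwarzschild radii.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                   # speed of light, m/s
M_sun = 1.989e30              # solar mass, kg
ly = 9.461e15                 # light-year, m
uas_per_rad = math.degrees(1) * 3600e6   # microarcseconds per radian

M = 4.0e6 * M_sun             # ~4 million solar masses
d = 27_000 * ly               # distance to the Galactic centre

r_s = 2 * G * M / c**2        # Schwarzschild radius, ~1.2e10 m
ring = 5.2 * r_s / d * uas_per_rad         # ~50 microarcseconds

doughnut = 0.10 / 384_400e3 * uas_per_rad  # 10 cm pastry at the Moon's distance

print(f"Sgr A* ring:      ~{ring:.0f} microarcseconds")
print(f"doughnut on Moon: ~{doughnut:.0f} microarcseconds")
```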

In addition to other facilities, the EHT network of radio observatories includes the Atacama Large Millimeter/submillimeter Array (ALMA) and the Atacama Pathfinder EXperiment (APEX) in the Atacama Desert in Chile, co-owned and co-operated by ESO on behalf of its member states in Europe. Europe also contributes to the EHT observations with other radio observatories -- the IRAM 30-meter telescope in Spain and, since 2018, the NOrthern Extended Millimeter Array (NOEMA) in France -- as well as a supercomputer to combine EHT data hosted by the Max Planck Institute for Radio Astronomy in Germany. Moreover, Europe contributed with funding to the EHT consortium project through grants by the European Research Council and by the Max Planck Society in Germany.

"It is very exciting for ESO to have been playing such an important role in unravelling the mysteries of black holes, and of Sgr A* in particular, over so many years," commented ESO Director General Xavier Barcons. "ESO not only contributed to the EHT observations through the ALMA and APEX facilities but also enabled, with its other observatories in Chile, some of the previous breakthrough observations of the Galactic centre."

The EHT achievement follows the collaboration's 2019 release of the first image of a black hole, called M87*, at the centre of the more distant Messier 87 galaxy.

The two black holes look remarkably similar, even though our galaxy's black hole is more than a thousand times smaller and less massive than M87*. "We have two completely different types of galaxies and two very different black hole masses, but close to the edge of these black holes they look amazingly similar," says Sera Markoff, Co-Chair of the EHT Science Council and a professor of theoretical astrophysics at the University of Amsterdam, the Netherlands. "This tells us that General Relativity governs these objects up close, and any differences we see further away must be due to differences in the material that surrounds the black holes."

This achievement was considerably more difficult than for M87*, even though Sgr A* is much closer to us. EHT scientist Chi-kwan ('CK') Chan, from Steward Observatory and Department of Astronomy and the Data Science Institute of the University of Arizona, USA, explains: "The gas in the vicinity of the black holes moves at the same speed -- nearly as fast as light -- around both Sgr A* and M87*. But where gas takes days to weeks to orbit the larger M87*, in the much smaller Sgr A* it completes an orbit in mere minutes. This means the brightness and pattern of the gas around Sgr A* were changing rapidly as the EHT Collaboration was observing it -- a bit like trying to take a clear picture of a puppy quickly chasing its tail."

The researchers had to develop sophisticated new tools that accounted for the gas movement around Sgr A*. While M87* was an easier, steadier target, with nearly all images looking the same, that was not the case for Sgr A*. The image of the Sgr A* black hole is an average of the different images the team extracted, finally revealing the giant lurking at the centre of our galaxy for the first time.

The effort was made possible through the ingenuity of more than 300 researchers from 80 institutes around the world that together make up the EHT Collaboration. In addition to developing complex tools to overcome the challenges of imaging Sgr A*, the team worked rigorously for five years, using supercomputers to combine and analyse their data, all while compiling an unprecedented library of simulated black holes to compare with the observations.

Scientists are particularly excited to finally have images of two black holes of very different sizes, which offers the opportunity to understand how they compare and contrast. They have also begun to use the new data to test theories and models of how gas behaves around supermassive black holes. This process is not yet fully understood but is thought to play a key role in shaping the formation and evolution of galaxies.

"Now we can study the differences between these two supermassive black holes to gain valuable new clues about how this important process works," said EHT scientist Keiichi Asada from the Institute of Astronomy and Astrophysics, Academia Sinica, Taipei. "We have images for two black holes -- one at the large end and one at the small end of supermassive black holes in the Universe -- so we can go a lot further in testing how gravity behaves in these extreme environments than ever before."

Read more at Science Daily

Family size may influence cognitive functioning in later life

A new study by Columbia University Mailman School of Public Health, the Robert Butler Columbia Aging Center, and Université Paris-Dauphine -- PSL found that having three or more children, versus two, has a negative effect on late-life cognition. The results further indicated that this effect was strongest in Northern Europe, where higher fertility decreases financial resources without improving social resources. This is the first study to examine the causal effect of high fertility on late-life cognition.

Until now fertility has not received much attention as a potential predictor of late-life cognition compared with other factors, such as education or occupation. The findings are published in the journal Demography.

"Understanding the factors that contribute to optimal late-life cognition is essential for ensuring successful aging at the individual and societal levels -- particularly in Europe, where family sizes have shrunk and populations are aging rapidly," said Vegard Skirbekk, PhD, professor of population and Family health at Columbia Mailman School. "For individuals, late life cognitive health is essential for maintaining independence and being socially active and productive in late life. For societies, ensuring the cognitive health of the older population is essential for extending work lives and reducing health care costs and care needs," said Eric Bonsang, PhD, professor of economics at the Université Paris-Dauphine -- PSL.

The researchers analyzed data from the Survey of Health, Aging and Retirement in Europe (SHARE) to examine the extent to which having three or more children versus two children causally affects late-life cognition. SHARE surveys representative samples of the older populations in 20 European countries and Israel, including Austria, Belgium, Croatia, the Czech Republic, Denmark, Estonia, France, Germany, Greece, Hungary, Italy, Luxembourg, the Netherlands, Poland, Portugal, Slovenia, Spain, Sweden, and Switzerland. Participants were aged 65 or older and had at least two biological children.

Based on advanced econometric methods able to disentangle causality from simple associations, the evidence suggests that having three or more children, versus two, is related to worse late-life cognition. They also found that this effect is similar for both men and women.
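
The "advanced econometric methods" referred to here are typically instrumental-variable designs, in which an accident of nature shifts family size independently of everything else that affects cognition. As a minimal sketch, the toy two-stage least squares below uses a twin birth at the second delivery as a hypothetical instrument on synthetic data; whether this matches the paper's exact specification is an assumption.

```python
import numpy as np

# Two-stage least squares (2SLS) on synthetic data. A twin second birth
# serves as a hypothetical instrument for having three or more children;
# "ability" is an unobserved confounder that biases the naive regression.
rng = np.random.default_rng(0)
n = 10_000

twin = rng.binomial(1, 0.02, n).astype(float)   # instrument
ability = rng.normal(0, 1, n)                   # unobserved confounder
three_plus = ((rng.normal(0, 1, n) - 0.3 * ability + 3 * twin) > 1).astype(float)
cognition = -0.5 * three_plus + 0.8 * ability + rng.normal(0, 1, n)

def ols(X, y):
    """Least-squares coefficients for y ~ X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
naive = ols(np.column_stack([ones, three_plus]), cognition)[1]

# Stage 1: predict treatment from the instrument; Stage 2: regress the
# outcome on the predicted treatment.
Z = np.column_stack([ones, twin])
fitted = Z @ ols(Z, three_plus)
iv = ols(np.column_stack([ones, fitted]), cognition)[1]

print(f"naive OLS:  {naive:+.2f}   (true causal effect is -0.50)")
print(f"2SLS (IV):  {iv:+.2f}")
```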

Fertility may affect late-life cognition via several pathways. First, having an additional child often incurs considerable financial costs, reduces family income and increases the likelihood of falling below the poverty line, thus decreasing the standard of living for all family members and possibly causing financial worries and uncertainties, which could contribute to cognitive deterioration.

Second, having an additional child is causally related to women's lower labor market participation, fewer hours worked, and lower earnings. In turn, labor force participation -- compared with retirement -- positively affects cognitive functioning among men and women.

Third, having children decreases the risk of social isolation among older individuals -- a key risk factor for cognitive impairment and dementia -- and often raises the level of social interaction and support, which can be protective against cognitive decline at older ages.

Finally, having children can be stressful, affect health risk behaviors, and adversely affect adult cognitive development. Parents with more children may experience more stress, have less time to relax and invest in cognitively stimulating leisure activities, and suffer sleep deprivation.

"The negative effect of having three or more children on cognitive functioning is not negligible, it is equivalent to 6.2 years of aging," noted Bonsang. It suggests that the decrease in the proportion of Europeans having three or more children may have positive implications for the cognitive health of the older population.

Read more at Science Daily

A brain circuit in the thalamus helps us hold information in mind

As people age, their working memory often declines, making it more difficult to perform everyday tasks. One key brain region linked to this type of memory is the anterior thalamus, which is primarily involved in spatial memory -- memory of our surroundings and how to navigate them.

In a study of mice, MIT researchers have identified a circuit in the anterior thalamus that is necessary for remembering how to navigate a maze. The researchers also found that this circuit is weakened in older mice, but enhancing its activity greatly improves their ability to run the maze correctly.

This region could offer a promising target for treatments that could help reverse memory loss in older people, without affecting other parts of the brain, the researchers say.

"By understanding how the thalamus controls cortical output, hopefully we could find more specific and druggable targets in this area, instead of generally modulating the prefrontal cortex, which has many different functions," says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT.

Feng is the senior author of the study, which appears today in the Proceedings of the National Academy of Sciences. Dheeraj Roy, an NIH K99 awardee and a McGovern Fellow at the Broad Institute, and Ying Zhang, a J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, are the lead authors of the paper.

Spatial memory

The thalamus, a small structure located near the center of the brain, contributes to working memory and many other executive functions, such as planning and attention. Feng's lab has recently been investigating a region of the thalamus known as the anterior thalamus, which has important roles in memory and spatial navigation.

Previous studies in mice have shown that damage to the anterior thalamus leads to impairments in spatial working memory. In humans, studies have revealed age-related decline in anterior thalamus activity, which is correlated with lower performance on spatial memory tasks.

The anterior thalamus is divided into three sections: ventral, dorsal, and medial. In a study published last year, Feng, Roy and Zhang studied the role of the anterodorsal (AD) thalamus and anteroventral (AV) thalamus in memory formation. They found that the AD thalamus is involved in creating mental maps of physical spaces, while the AV thalamus helps the brain to distinguish these memories from other memories of similar spaces.

In their new study, the researchers wanted to look more deeply at the AV thalamus, exploring its role in a spatial working memory task. To do that, they trained mice to run a simple T-shaped maze. At the beginning of each trial, the mice ran until they reached the T. One arm was blocked off, forcing them to run down the other arm. Then, the mice were placed in the maze again, with both arms open. The mice were rewarded if they chose the opposite arm from the first run. This meant that in order to make the correct decision, they had to remember which way they had turned on the previous run.

As the mice performed the task, the researchers used optogenetics to inhibit activity of either AV or AD neurons during three different parts of the task: the sample phase, which occurs during the first run; the delay phase, while they are waiting for the second run to begin; and the choice phase, when the mice decide which way to turn during the second run.

The researchers found that inhibiting AV neurons during the sample or choice phases had no effect on the mice's performance, but when they suppressed AV activity during the delay phase, which lasted 10 seconds or longer, the mice performed much worse on the task.
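
A toy simulation captures the logic of this result: as long as the memory trace survives the delay, the non-match rule produces a correct choice, and losing the trace during the delay pushes performance toward the 50 percent chance level. The failure probabilities below are illustrative, not fitted to the study's data.

```python
import random

# Delayed non-matching T-maze: remember the forced turn through the delay,
# then choose the opposite arm. "loss_prob" stands in for how often the
# memory trace is lost during the delay (high when AV neurons are silenced).
def run_trials(n_trials: int, loss_prob: float) -> float:
    correct = 0
    for _ in range(n_trials):
        forced = random.choice(["left", "right"])
        remembered = forced
        if random.random() < loss_prob:                  # trace lost in delay
            remembered = random.choice(["left", "right"])
        choice = "right" if remembered == "left" else "left"  # non-match rule
        correct += (choice != forced)
    return correct / n_trials

print(f"control (10% trace loss):      {run_trials(10_000, 0.1):.0%} correct")
print(f"AV silenced (80% trace loss):  {run_trials(10_000, 0.8):.0%} correct")
```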

This suggests that the AV neurons are most important for keeping information in mind while it is needed for a task. In contrast, inhibiting the AD neurons disrupted performance during the sample phase but had little effect during the delay phase. This finding was consistent with the research team's earlier study showing that AD neurons are involved in forming memories of a physical space.

"The anterior thalamus in general is a spatial learning region, but the ventral neurons seem to be needed in this maintenance period, during this short delay," Roy says. "Now we have two subdivisions within the anterior thalamus: one that seems to help with contextual learning and the other that actually helps with holding this information."

Age-related decline

The researchers then tested the effects of age on this circuit. They found that older mice (14 months) performed worse on the T-maze task and their AV neurons were less excitable. However, when the researchers artificially stimulated those neurons, the mice's performance on the task dramatically improved.

Another way to enhance performance in this memory task is to stimulate the prefrontal cortex, which also undergoes age-related decline. However, activating the prefrontal cortex also increases measures of anxiety in the mice, the researchers found.

"If we directly activate neurons in medial prefrontal cortex, it will also elicit anxiety-related behavior, but this will not happen during AV activation," Zhang says. "That is an advantage of activating AV compared to prefrontal cortex."

If a noninvasive or minimally invasive technology could be used to stimulate those neurons in the human brain, it could offer a way to help prevent age-related memory decline, the researchers say. They are now planning to perform single-cell RNA sequencing of neurons of the anterior thalamus to find genetic signatures that could be used to identify cells that would make the best targets.

Read more at Science Daily

Jellyfish's stinging cells hold clues to biodiversity

The cnidocytes -- or stinging cells -- that are characteristic of sea anemones, hydrae, corals and jellyfish, and make us careful of our feet while wading in the ocean, are also an excellent model for understanding the emergence of new cell types, according to new Cornell research.

In new research published in the Proceedings of the National Academy of Sciences on May 2, Leslie Babonis, assistant professor of ecology and evolutionary biology in the College of Arts and Sciences, showed that these stinging cells evolved by repurposing a neuron inherited from a pre-cnidarian ancestor.

"These surprising results demonstrate how new genes acquire new functions to drive the evolution of biodiversity," Babonis said. "They suggest that co-option of ancestral cell types was an important source for new cell functions during the early evolution of animals."

Understanding how specialized cell types, such as stinging cells, come to be is one of the key challenges in evolutionary biology, Babonis said. For nearly a century, it's been known that cnidocytes developed from a pool of stem cells that also gives rise to neurons (brain cells), but up to now, no one knew how those stem cells decide to make either a neuron or a cnidocyte. Understanding this process in living cnidarians can reveal clues about how cnidocytes evolved in the first place, Babonis said.

Cnidocytes ("cnidos is Greek for "stinging nettle"), common to species in the diverse phylum Cnidaria, can launch a toxic barb or blob or enable cnidarians to stun prey or deter invaders. Cnidarians are the only animals that have cnidocytes, but lots of animals have neurons, Babonis said. So she and her colleagues at the University of Florida's Whitney Lab for Marine Bioscience studied cnidarians -- specifically sea anemones -- to understand how a neuron could be reprogrammed to make a new cell.

"One of the unique features of cnidocytes is that they all have an explosive organelle (a little pocket inside the cell) that contains the harpoon that shoots out to sting you," Babonis said. "These harpoons are made of a protein that is also found only in cnidarians, so cnidocytes seem to be one of the clearest examples of how the origin of a new gene (that encodes a unique protein) could drive the evolution of a new cell type."

Using functional genomics in the starlet sea anemone, Nematostella vectensis, the researchers showed that cnidocytes develop by turning off the expression of a neuropeptide, RFamide, in a subset of developing neurons and repurposing those cells as cnidocytes. Moreover, the researchers showed that a single cnidarian-specific regulatory gene is responsible both for turning off the neural function of those cells and turning on the cnidocyte-specific traits.

Neurons and cnidocytes are similar in form, Babonis said; both are secretory cells capable of ejecting something out of the cell. Neurons secrete neuropeptides -- proteins that rapidly communicate information to other cells. Cnidocytes secrete poison-laced harpoons.

"There is a single gene that acts like a light switch -- when it's on, you get a cnidocyte, when it's off you get a neuron," Babonis said. "It's a pretty simple logic for controlling cell identity."

This is the first study to show that this logic is in place in a cnidarian, Babonis said, so this feature likely regulated how cells became different from each other in the earliest multicellular animals.

Read more at Science Daily

May 12, 2022

Explosion on a white dwarf observed

When stars like our Sun use up all their fuel, they shrink to form white dwarfs. Sometimes such dead stars flare back to life in a super hot explosion and produce a fireball of X-ray radiation. A research team led by FAU has now been able to observe such an explosion of X-ray light for the very first time.

"It was to some extent a fortunate coincidence, really," explains Ole König from the Astronomical Institute at FAU in the Dr. Karl Remeis observatory in Bamberg, who has published an article about this observation in the journal Nature, together with Prof. Dr. Jörn Wilms and a research team from the Max Planck Institute for Extraterrestrial Physics, the University of Tübingen, the Universitat Politécnica de Catalunya in Barcelona und the Leibniz Institute for Astrophysics Potsdam. "These X-ray flashes last only a few hours and are almost impossible to predict, but the observational instrument must be pointed directly at the explosion at exactly the right time," explains the astrophysicist.

The instrument in this case is the eROSITA X-ray telescope, which is currently located one and a half million kilometers from Earth and has been surveying the sky for soft X-rays since 2019. On July 7, 2020 it measured strong X-ray radiation in an area of the sky that had been completely inconspicuous four hours previously. When the X-ray telescope surveyed the same position in the sky four hours later, the radiation had disappeared. It follows that the X-ray flash that had previously completely overexposed the center of the detector must have lasted less than eight hours.

X-ray explosions such as this were predicted by theoretical research more than 30 years ago, but have never been observed directly until now. These fireballs of X-rays occur on the surface of stars that were originally comparable in size to the Sun before using up most of their fuel made of hydrogen and later helium deep inside their cores. These stellar corpses shrink until "white dwarfs" remain, which are similar to Earth in size but contain a mass that can be similar to that of our Sun. "One way to picture these proportions is to think of the Sun being the same size as an apple, which means Earth would be the same size as a pin head orbiting around the apple at a distance of 10 meters," explains Jörn Wilms.

Stellar corpses resemble gemstones

On the other hand, if you were to shrink an apple to the size of a pin head, this tiny particle would retain the comparatively large weight of the apple. "A teaspoon of matter from the inside of a white dwarf easily has the same mass as a large truck," Jörn Wilms continues. Since these burnt-out stars are mainly made up of oxygen and carbon, we can compare them to gigantic diamonds the size of Earth floating around in space. These gem-like objects are so hot that they glow white. However, the radiation is so weak that it is difficult to detect from Earth.
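
The teaspoon claim is easy to verify: pack one solar mass into an Earth-sized sphere and weigh five milliliters of the result. A quick check with rounded constants:

```python
import math

# Rough check of the "teaspoon weighs as much as a truck" claim: one solar
# mass in an Earth-sized sphere gives a density near 2e9 kg/m^3.
M_sun = 1.989e30        # kg
R_earth = 6.371e6       # m

volume = 4 / 3 * math.pi * R_earth**3   # m^3
density = M_sun / volume                # kg/m^3
teaspoon = density * 5e-6               # a 5 ml teaspoon, in kg

print(f"density:  {density:.2e} kg/m^3")
print(f"teaspoon: {teaspoon / 1000:.1f} tonnes")   # ~9 tonnes, truck-like
```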

Unless, that is, the white dwarf is accompanied by a star that is still burning, and the white dwarf's enormous gravitational pull draws hydrogen from the shell of its companion. "In time, this hydrogen can collect to form a layer only a few meters thick on the surface of the white dwarf," explains FAU astrophysicist Jörn Wilms. In this layer, the huge gravitational pull generates enormous pressure, so great that it causes the star to reignite. In a chain reaction, this soon leads to a huge explosion during which the layer of hydrogen is blown off. The X-ray radiation of an explosion like this is what hit the detectors of eROSITA on July 7, 2020, producing an overexposed image.

"Using the model calculations we originally drew up while supporting the development of the X-ray instrument, we were able to analyze the overexposed image in more detail during a complex process to gain a behind the scenes view of an explosion of a white dwarf, or nova," explains Jörn Wilms. According to the results, the white dwarf has around the mass of our Sun and is therefore relatively large. The explosion generated a fireball with a temperature of around 327,000 degrees, making it around sixty times hotter than the Sun.

Read more at Science Daily

Traveling to the centre of planet Uranus: Materials synthesis research and study in terapascal range

Jules Verne could not even dream of this: A research team from the University of Bayreuth, together with international partners, has pushed the boundaries of high-pressure and high-temperature research into cosmic dimensions. For the first time, they have succeeded in generating and simultaneously analyzing materials under compression pressures of more than one terapascal (1,000 gigapascals). Such extremely high pressures prevail, for example, at the center of the planet Uranus; they are more than three times higher than the pressure at the center of the Earth. In Nature, the researchers present the method they have developed for the synthesis and structural analysis of novel materials.

Theoretical models predict very unusual structures and properties of materials under extreme pressure-temperature conditions. But so far, these predictions could not be verified in experiments at compression pressures of more than 200 gigapascals. On the one hand, exposing material samples to such extreme pressures involves complex technical requirements; on the other, sophisticated methods for simultaneous structural analysis were lacking. The experiments published in Nature therefore open up completely new dimensions for high-pressure crystallography: materials can now be created and studied in the laboratory that exist -- if at all -- only under extremely high pressures in the vastness of the universe.

"The method we have developed enables us for the first time to synthesize new material structures in the terapascal range and to analyze them in situ -- that is: while the experiment is still running. In this way, we learn about previously unknown states, properties and structures of crystals and can significantly deepen our understanding of matter in general. Valuable insights can be gained for the exploration of terrestrial planets and the synthesis of functional materials used in innovative technologies," explains Prof. Dr. Leonid Dubrovinsky of the Bavarian Geoinstitute (BGI) at the University of Bayreuth, the first author of the publication.

In their new study, the researchers show how they have generated and visualized in situ novel rhenium compounds using the newly developed method. The compounds in question are a novel rhenium nitride (Re₇N₃) and a rhenium-nitrogen alloy. These materials were synthesized under extreme pressures in a two-stage diamond anvil cell heated by laser beams. Synchrotron single-crystal X-ray diffraction enabled full chemical and structural characterization. "Two and a half years ago, we were very surprised in Bayreuth when we were able to produce a superhard metallic conductor based on rhenium and nitrogen that could withstand even extremely high pressures. If we apply high-pressure crystallography in the terapascal range in the future, we may make further surprising discoveries in this direction. The doors are now wide open for creative materials research that generates and visualizes unexpected structures under extreme pressures," says co-author Prof. Dr. Natalia Dubrovinskaia from the Laboratory of Crystallography at the University of Bayreuth.

Read more at Science Daily

The genetic origins of the world's first farmers clarified

The genetic origins of the first agriculturalists in the Neolithic period long seemed to lie in the Near East. A new study published in the journal Cell shows that the first farmers actually represented a mixture of Ice Age hunter-gatherer groups, spread from the Near East all the way to south-eastern Europe. Researchers from the University of Bern and the SIB Swiss Institute of Bioinformatics as well as from the Johannes Gutenberg University Mainz and the University of Fribourg were involved in the study. The method they developed could help reveal other human evolution patterns with unmatched resolution.

The first signs of agriculture and a sedentary lifestyle are found in the so-called 'Fertile Crescent', a region in the Near East where people began to settle down and domesticate animals and plants about 11,000 years ago. The question of the origin of agriculture and sedentism has occupied researchers for over 100 years: did farming spread from the Near East through cultural diffusion or through migration? Genetic analyses of prehistoric skeletons so far supported the idea that Europe's first farmers were descended from hunter-gatherer populations in Anatolia. While that may well be the case, this new study shows that the Neolithic genetic origins cannot clearly be attributed to a single region. Unexpected and complex population dynamics occurred at the end of the Ice Age and led to the ancestral genetic makeup of the populations who invented agriculture and a sedentary lifestyle, i.e. the first Neolithic farmers.

First farmers emerged from a mixing process starting 14,000 years ago

Previous analyses had suggested that the first Neolithic people were genetically different from other human groups from that time. Little was known about their origins. Nina Marchi, one of the study's first authors from the Institute of Ecology and Evolution at the University of Bern and SIB says: "We now find that the first farmers of Anatolia and Europe emerged from a population admixed between hunter-gatherers from Europe and the Near East." According to the authors, the mixing process started around 14,000 years ago, which was followed by a period of extreme genetic differentiation lasting several thousand years.
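
As a deliberately simplified illustration of what "admixed" means genetically, one can model a target population's allele frequencies as a weighted mixture of two source populations and solve for the weight by least squares. The actual study fits far richer demogenomic models to whole genomes; the sketch below uses synthetic frequencies and hypothetical source populations.

```python
import numpy as np

# Toy admixture estimate: farmers ~ w * source_A + (1 - w) * source_B.
# A stand-in for the study's demogenomic modeling; all data are synthetic.
rng = np.random.default_rng(1)
n_snps = 5_000

freq_eur_hg = rng.uniform(0.05, 0.95, n_snps)   # "European hunter-gatherers"
freq_ne_hg = rng.uniform(0.05, 0.95, n_snps)    # "Near Eastern hunter-gatherers"

w_true = 0.3
farmers = w_true * freq_eur_hg + (1 - w_true) * freq_ne_hg
farmers += rng.normal(0, 0.02, n_snps)          # drift / sampling noise

# Solve (farmers - freq_ne) ~ w * (freq_eur - freq_ne) for w.
x = freq_eur_hg - freq_ne_hg
y = farmers - freq_ne_hg
w_hat = (x @ y) / (x @ x)

print(f"estimated European hunter-gatherer ancestry: {w_hat:.2f} (true {w_true})")
```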

A novel approach to model population history from prehistoric skeletons

This research was made possible by combining two techniques: the production of high-quality ancient genomes from prehistoric skeletons, coupled with demographic modeling on the resulting data. The research team coined the term "demogenomic modeling" for this purpose. "It is necessary to have genome data of the best possible quality so that the latest statistical genomic methods can reconstruct the subtle demographic processes of the last 30 thousand years at high resolution," says Laurent Excoffier, one of the senior authors of the study. Laurent Excoffier is a professor at the Institute of Ecology and Evolution at the University of Bern and group leader at SIB. He initiated the project together with Joachim Burger of the Johannes Gutenberg University in Mainz and Daniel Wegmann of the University of Fribourg. Nina Marchi adds: "Simply comparing the similarity of different ancient genomes is not enough to understand how they evolved. We had to reconstruct the actual histories of the populations studied as accurately as possible. This is only possible with complex population genetic statistics."

Interdisciplinarity key to solving such ancient puzzles

Joachim Burger of the University of Mainz and second senior author emphasizes the necessity of interdisciplinarity: "It took close to ten years to gather and analyze the skeletons suitable for such a study. This was only possible by collaborating with numerous archaeologists and anthropologists, who helped us to anchor our models historically." The historical contextualisation was coordinated by Maxime Brami, who works with Burger at Johannes Gutenberg University. The young prehistorian was surprised by some of the study's findings: "Europe's first farmers seem to be descended from hunter-gatherer populations that lived all the way from the Near East to the Balkans. This was not foreseeable archaeologically."

Towards a general model of human population evolution

Genetic data from fossils (skeletons) are badly damaged and must be processed accordingly using bioinformatics, as Daniel Wegmann from the University of Fribourg and group leader at SIB explains: "The high-resolution reconstruction of the prehistory of the Europeans was only possible thanks to methods that we specifically developed to analyse ancient fossil genomes." Joachim Burger adds: "With these approaches, we have not only elucidated the origins of the world's first Neolithic populations, but we have established a general model of the evolution of human populations in Southwest Asia and Europe."

Read more at Science Daily

Video games can help boost children's intelligence

Researchers at Karolinska Institutet in Sweden have studied how the screen habits of US children correlate with how their cognitive abilities develop over time. They found that children who spent an above-average amount of time playing video games increased their intelligence more than the average, while watching TV or using social media had neither a positive nor a negative effect. The results are published in the journal Scientific Reports.

Children are spending more and more time in front of screens. How this affects their health, and whether it has a positive or negative impact on their cognitive abilities, are hotly debated questions. For the present study, researchers at Karolinska Institutet and Vrije Universiteit Amsterdam specifically studied the link between screen habits and intelligence over time.

Over 9,000 boys and girls in the USA participated in the study. At the age of nine or ten, the children performed a battery of psychological tests to gauge their general cognitive abilities (intelligence). The children and their parents were also asked about how much time the children spent watching TV and videos, playing video games and engaging with social media.

Followed up after two years

Just over 5,000 of the children were followed up after two years, at which point they were asked to repeat the psychological tests. This enabled the researchers to study how the children's performance on the tests varied from the one testing session to the other, and to control for individual differences in the first test. They also controlled for genetic differences that could affect intelligence and differences that could be related to the parents' educational background and income.
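
The design described here amounts to a change-score regression: predict the follow-up score from the baseline score, background covariates, and screen time, so the screen-time coefficient captures change in performance. Below is a minimal sketch on synthetic data; the planted 2.5-point effect mirrors the headline result, and the covariates are stand-ins for the study's richer genetic and socioeconomic controls.

```python
import numpy as np
import statsmodels.api as sm

# Change-score regression on synthetic data: follow-up IQ on baseline IQ,
# a socioeconomic proxy, and daily gaming hours. All values are invented.
rng = np.random.default_rng(42)
n = 5_000

baseline_iq = rng.normal(100, 15, n)
ses = rng.normal(0, 1, n)                        # parental education/income proxy
gaming_hours = np.clip(rng.normal(1.0, 0.5, n), 0, None)

followup_iq = (0.9 * baseline_iq + 10 + 2.0 * ses
               + 2.5 * (gaming_hours - 1.0)      # planted "gaming effect"
               + rng.normal(0, 5, n))

X = sm.add_constant(np.column_stack([baseline_iq, ses, gaming_hours]))
model = sm.OLS(followup_iq, X).fit()

# Coefficients: [const, baseline, ses, gaming]; gaming recovers ~2.5.
print(model.params)
```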

On average, the children spent 2.5 hours a day watching TV, half an hour on social media and 1 hour playing video games. The results showed that those who played more games than the average increased their intelligence between the two measurements by approximately 2.5 IQ points more than the average. No significant effect was observed, positive or negative, of TV-watching or social media.

"We didn't examine the effects of screen behaviour on physical activity, sleep, wellbeing or school performance, so we can't say anything about that," says Torkel Klingberg, professor of cognitive neuroscience at the Department of Neuroscience, Karolinska Institutet. "But our results support the claim that screen time generally doesn't impair children's cognitive abilities, and that playing video games can actually help boost intelligence. This is consistent with several experimental studies of video-game playing."

Intelligence is not constant

The results are also in line with recent research showing that intelligence is not a constant, but a quality that is influenced by environmental factors.

"We'll now be studying the effects of other environmental factors and how the cognitive effects relate to childhood brain development," says Torkel Klingberg.

One limitation of the study is that it only covered US children and did not differentiate between different types of video games, which makes the results difficult to transfer to children in other countries with other gaming habits. There was also a risk of reporting error since screen time and habits were self-rated.

Read more at Science Daily

From cavefish to humans: Evolution of metabolism in cavefish may provide insight into treatments for a host of diseases such as diabetes, heart disease, and stroke

New research from the Stowers Institute for Medical Research examines how cavefish -- descendants of surface-dwelling river fish swept into underground cave systems by flooding over 100,000 years ago -- developed unique metabolic adaptations to survive in nutrient-scarce environments. The study, published online in Nature Genetics on May 12, 2022, and led by Jaya Krishnan, PhD, a senior research associate in the lab of Nicolas Rohner, PhD, created a genome-wide map of liver tissue for two independent colonies of cavefish, along with river fish, to understand how cavefish metabolism evolved and how this may be applicable to humans.

Historically, humans have been able to adapt during periods of feast or famine. Today, however, feast has replaced famine in many regions around the globe leading to a rise in a host of diseases related to metabolism such as diabetes, heart disease and stroke. Collectively called metabolic syndrome, these conditions are associated with genetic mutations in regions of DNA that regulate how our genes work to keep us healthy; on an evolutionary timescale, the constant "feast state" is in its infancy, which for humans, means disease rather than adaptation.

This study marks the first time the non-coding regions of liver DNA that regulate gene activity and expression have been genetically mapped in these fish. The new data are now a valuable resource for the scientific community studying starvation resistance and metabolism.

"It's a very good foundation for us or anyone to now ask relevant questions in relation to metabolism, diet, and adaptation," said Krishnan.

Metabolism, or the way in which we utilize and store energy, is an integral part of health in all species. Cavefish are ideal for studying metabolism; during periodic flooding of caves, these fish intake and store all the nutrition they need to survive until the next nutrient inundation, which may not be for another year. "They can shed light on metabolic disorders such as diabetes and obesity," said Krishnan, because, despite elevated fat and blood glucose levels, these fish remain vibrant and healthy.

"The fact that these fish are apparently healthy, despite having these extreme traits is, by definition, a good place to ask how they deal with that," said Rohner.

What is truly remarkable is that the two independently derived cavefish colonies examined in this study evolved strikingly similar metabolic adaptations to survive in dark, nutrient-scarce environments. This raises the question, what can we learn from animals who have had the time to evolve? And even further, if multiple cavefish populations evolved in a very similar manner completely independently from each other, are there universal adaptation mechanisms that could potentially be triggered in other species like humans?

"We know only a handful of genes that could be therapeutic targets," said Krishnan. "This means we need to adopt novel ways to identify such potential genes so that we can investigate them, and cavefish are a very powerful system for us to do that."

Read more at Science Daily

May 11, 2022

Researchers reveal the origin story for carbon-12, a building block for life

With the help of the world's most powerful supercomputer and new artificial intelligence techniques, an international team of researchers has theorized how the extreme conditions in stars produce carbon-12, which they describe as "a critical gateway to the birth of life."

The researchers' fundamental question: "How does the cosmos produce carbon-12?" said James Vary, a professor of physics and astronomy at Iowa State University and a longtime member of the research collaboration.

"It turns out it's not easy to produce carbon-12," Vary said.

It takes the extreme heat and pressures inside stars or in stellar collisions and explosions to create emergent, unstable, excited-state carbon nuclei with three loosely linked clumps, each with two protons and two neutrons. A fraction of those unstable carbon nuclei can shoot off a little extra energy in the form of gamma rays and become stable carbon-12, the stuff of life.

A paper recently published by the online journal Nature Communications describes the researchers' supercomputer simulations and resulting theory for the nuclear structure of carbon that favors its formation in the cosmos. The corresponding author is Takaharu Otsuka of the University of Tokyo, the RIKEN Nishina Center for Accelerator-Based Science and the Advanced Science Research Center of the Japan Atomic Energy Agency.

The paper describes how alpha particles -- helium-4 atoms, with two protons and two neutrons -- can cluster to form much heavier atoms, including an unstable, excited carbon-12 state known as the Hoyle state (predicted by theoretical astrophysicist Fred Hoyle in 1953 as a precursor to life as we know it).
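
The energy bookkeeping behind the Hoyle state can be done with standard atomic masses: three helium-4 nuclei outweigh a carbon-12 nucleus by about 7.3 MeV, which is why an excited state at 7.65 MeV sits so close to the combined mass of three alpha particles. A quick check:

```python
# Mass-energy bookkeeping for the triple-alpha route to carbon-12, using
# standard atomic masses. The fused system starts ~7.3 MeV above the
# carbon-12 ground state, near the Hoyle state at 7.65 MeV.
u_to_MeV = 931.494     # MeV per atomic mass unit
m_he4 = 4.002602       # atomic mass of helium-4, in u
m_c12 = 12.0           # carbon-12 defines the atomic mass unit

excess = (3 * m_he4 - m_c12) * u_to_MeV
print(f"3 alphas vs carbon-12 ground state: {excess:.2f} MeV")   # ~7.27 MeV
```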

The researchers write that this alpha-particle clustering "is a very beautiful and fascinating idea and is indeed plausible because the (alpha) particle is particularly stable with a large binding energy."

To test the theory, the researchers ran supercomputer simulations, including calculations on the Fugaku supercomputer at the RIKEN Center for Computational Science in Kobe, Japan. Fugaku is listed as the most powerful supercomputer in the world and is three times more powerful than No. 2, according to the latest TOP500 supercomputer rankings.

Vary said the researchers also did their work ab initio, or from first principles, meaning their calculations were based on known science and didn't include additional assumptions or parameters.

They also developed techniques in statistical learning, a branch of computational artificial intelligence, to reveal alpha clustering in the Hoyle state and the eventual production of stable carbon-12.

Vary said the team has worked for more than a decade to develop its software, refine its supercomputer codes, run its calculations and work out smaller problems while building up to the current work.

"There's a lot of subtlety -- a lot of beautiful interactions going on in there," Vary said.

All the calculations, physical quantities and theoretical subtlety match what experimental data there is in this corner of nuclear physics, the researchers wrote.

So they think they have some basic answers about the origins of carbon-12. Vary said that should lead to more studies looking for "fine-grain detail" about the process and how it works.

Was carbon production, for example, mostly the result of internal processes in stars? Vary asked. Or was it supernova star explosions? Or collisions of super-dense neutron stars?

Read more at Science Daily

DNA provides unique look at moa and climate change

Ancient moa DNA has provided insights into how species react to climate change, a University of Otago study has found.

By analysing ancient DNA of the extinct eastern moa, researchers from the Department of Zoology found the giant birds altered their distribution as the climate warmed and cooled.

Lead author Dr Alex Verry says the species was spread across the eastern and southern South Island during the warmer Holocene period, but was restricted to the southern South Island during the height of the last Ice Age about 25,000 years ago.

This is in comparison to the heavy-footed moa, which retreated to both southern and northern regions of the South Island, while the upland moa inhabited four different areas.

"The eastern moa's response had consequences for its population size and genetic diversity -- the last Ice Age lead to a pronounced genetic bottleneck which meant it ended up with lower genetic diversity than other moa living in the same areas," Dr Verry says.

The study, published in Biology Letters, is the first time high throughput DNA sequencing, which simultaneously sequences millions of pieces of DNA, has been used to investigate moa at the population level.

The findings highlight how past climate change impacted species in different ways and that a 'one size fits all' model is not practical.

"It makes us wonder what is going to happen to species as they attempt to adapt to climate change today and into the future? Will they also attempt to move to new areas in order to survive?

"For some species this will not be possible, some species will run out of space, such as alpine species which will have to move upward but can only go so far until there is no more 'up'," he says.

Co-author Dr Nic Rawlence, Director of Otago's Palaeogenetics Laboratory, says the research is a rare example of the impacts of past climate change on extinct megafauna from New Zealand.

It also demonstrates how fossil remains and museum collections can be used to answer new questions about the past.

"This is really bringing the power of palaeogenomics to New Zealand research questions, whereas previously most research and interest has focused on Eurasian or American species. We are really starting to build capacity for this research in New Zealand," he says.

Read more at Science Daily

For outdoor workers, extreme heat poses extreme danger

Working outdoors during periods of extreme heat can cause discomfort, heat stress, or heat illnesses -- all growing concerns for people who live and work in Southwestern cities like Las Vegas, where summer temperatures creep higher each year. But, did you know that female outdoor workers are experiencing disproportionate impacts? Or, that more experienced outdoor workers are at higher risk than those with fewer years on the job?

In a new study in the International Journal of Environmental Science and Technology, scientists from DRI, Nevada State College, and the Guinn Center for Policy Priorities explore the growing threat that extreme heat poses to workforce health in three of the hottest cities in North America -- Las Vegas, Los Angeles, and Phoenix. Their study results hold important findings for outdoor workers, their employers, and policymakers across the Southwestern U.S.

To assess the relationship between extreme heat and nonfatal workplace heat-related illness, the study compared data on occupational injuries and illnesses for the years 2011-2018 with heat index data from Las Vegas, Los Angeles, and Phoenix. Heat index data combines temperature and humidity as a measure of how people feel the heat.
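
For readers who want the arithmetic, one widely used approximation is the U.S. National Weather Service's Rothfusz regression, which is reasonable when conditions feel hotter than about 80 degrees Fahrenheit; the agencies behind the study's data may apply additional adjustments.

```python
def heat_index_f(temp_f: float, rh_percent: float) -> float:
    """NWS Rothfusz regression for the heat index, in degrees Fahrenheit.

    A standard approximation, reasonable above ~80 F; it omits the NWS's
    low-humidity and high-humidity corrections.
    """
    T, R = temp_f, rh_percent
    return (-42.379 + 2.04901523 * T + 10.14333127 * R
            - 0.22475541 * T * R - 6.83783e-3 * T * T
            - 5.481717e-2 * R * R + 1.22874e-3 * T * T * R
            + 8.5282e-4 * T * R * R - 1.99e-6 * T * T * R * R)

# The same 105 F afternoon feels very different in dry versus humid air.
print(f"105 F at 10% RH -> feels like {heat_index_f(105, 10):.0f} F")
print(f"105 F at 40% RH -> feels like {heat_index_f(105, 40):.0f} F")
```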

"We expected to see a correlation between high temperatures and people getting sick -- and we found that there was a very clear trend in most cases," said lead author Erick Bandala, Ph.D., assistant research professor of environmental science at DRI. "Surprisingly, this type of analysis hadn't been done in the past, and there are some really interesting social implications to what we learned."

First, the research team analyzed changes in heat index data for the three cities. They found a significant increase in heat index at two of the three locations (Phoenix and Las Vegas) during the study period, with average heat index values for June-August climbing from "extreme caution" in 2012 into the "danger" range by 2018. Over the same period, data from the Bureau of Labor Statistics showed that the number of nonfatal heat-related workplace injuries and illnesses in each of the three states increased steadily, climbing from below the national average in 2011 to above the national average in 2018.

"Our data indicate that the increases in heat are happening alongside increases in the number of nonfatal occupational injuries across these three states," Bandala said. "Every year we are seeing increased heat waves and higher temperatures, and all of the people who work outside in the streets or in gardens or agriculture are exposed to this."

Next, the study team looked deeper into the data to learn about the number of male and female workers being affected by heat-related workplace injuries. At the beginning of the study in 2011, 26 to 50 percent of the people affected across the three states were female. By 2018, 42 to 86 percent of the people affected were female.

Study authors believe that the reason for this increase may be due to more women entering the outdoor workforce, or it could be related to the vulnerability of women to certain heat-related effects, like hyponatremia -- a condition that develops when too much plain water is consumed under high heat conditions and sodium levels in blood get too low.

"As the number of female workers exposed to extreme temperatures increases, there is an increasing need to consider the effect of gender and use different approaches to recommend prevention measures as hormonal factors and cycles that can be exacerbated during exposure to extreme heat," said study coauthor Kebret Kebede, M.D., associate professor of biology at Nevada State College.

The authors examined other variables, such as the length of an employee's service with an employer. They found that the number of heat-related injuries/illnesses tended to increase with length of service, and that those with more than five years of service were at greater risk than those with less than one year. This may be because employees with more years of service have a reduced perception of risk, or it could reflect a cumulative effect of years of chronic heat exposure on the well-being of outdoor workers.

In severe cases, heat-related illness or injury can cause extensive damage to all tissues and organs, disrupting the central nervous system, blood-clotting mechanisms, and liver and kidney functions. In these cases, lengthy recoveries are required. The authors found concerning evidence that heat-related injuries are keeping many outdoor workers away from work for more than 30 days.

"These lengthy recovery times are a significant problem for workers and their families, many of whom are living day-to-day," Bandala said. "When we have these extreme heat conditions coming every year and a lot of people working outside, we need to know what are the consequences of these problems, and we need the people to know about the risk so that they take proper precautions."

The study also explored connections between heat-related injuries/illnesses and the number of hours worked, the time of day that the event occurred, and the ethnicities and age groups that were most impacted.

Study authors hope that their results will be useful to policymakers to protect outdoor workers. They also hope that the information will be useful to outdoor workers who need to stay safe during times of extreme heat, and employers who rely on a healthy workforce to keep their businesses operating.

"This study underscores the importance of and the need for the work the Nevada Occupational Safety and Health Administration (OSHA) is doing to adopt a regulation to address heat illness," stated Nancy Brune, Ph.D., study co-author and senior fellow at the Guinn Center.

Read more at Science Daily

Key protein identified for brain stem cell longevity

A receptor first identified as necessary for insulin action, and also located on the neural stem cells found deep in the brains of mice, is pivotal for brain stem cell longevity, according to a Rutgers study -- a finding that has important implications for brain health and future therapies for brain disorders.

The study, appearing in the journal Stem Cell Reports, pinpoints a specific protein known as the insulin receptor (INSR), which is abundant on the neural stem cells that reside in the brain's subventricular zone. During development, neural stem cells give rise to the entire nervous system, and they persist into adulthood. Over the lifespan these neural stem cells produce new neurons and non-neuronal cells that maintain the infrastructure and functioning of the brain.

Separately, the scientists made another finding when examining brain tumors: INSR plays a crucial role in sustaining a population of specialized brain cancer cells known as glioblastoma (GBM) stem cells. When they inactivated the INSR in the GBM stem cells, they inhibited the growth of those primitive tumor-forming cells.

"It's important to understand the molecular mechanisms that are critical for the growth and sustenance of the brain's stem cells under normal and abnormal growth states," said study author Steven Levison, a professor of neuroscience in the Department of Pharmacology, Physiology and Neuroscience and director of the Laboratory for Regenerative Neurobiology at Rutgers New Jersey Medical School. "Comprehending the signals that regulate these primitive cells could one day lead to new therapeutics for brain disorders."

Many neurodegenerative disorders, such as multiple sclerosis, Parkinson's disease and Alzheimer's disease, are connected with the destruction of brain cells, said co-author Teresa Wood, a Distinguished Professor and Rena Warshow Endowed Chair in Multiple Sclerosis in the Department of Pharmacology, Physiology and Neuroscience at Rutgers New Jersey Medical School.

"If we could influence how brain stem cells function then we can use this knowledge to replace diseased or dead brain cells with living ones, which would advance the treatment of neurological diseases and brain injuries," said Wood, who also teaches and conducts research at the Cancer Institute of New Jersey.

Cell receptors such as INSR are protein molecules that reside on the surfaces of cells. Substances, either natural or human-made, that open the "lock" of a receptor can spur a cell to divide, differentiate or die. By identifying which receptors perform these functions on specific cell types, and by understanding their structures and functions, scientists can design substances that act as keys to receptors, to turn them "on" or "off."

Previous studies by this research team had shown that a certain "key," the signaling protein known as the insulin-like growth factor-II (IGF-II), was necessary to maintain the neural stem cells in the two places of the adult brain that harbor these primitive cells. In the current experiment, scientists were looking to identify the receptor. To do so, they used genetic tools that allowed them to both delete the INSR and introduce a fluorescent protein so they could track the neural stem cells and the cells they generate. They found that the numbers of neural stem cells in the subventricular zone in the brains of mice lacking the INSR collapsed.

Adult neurogenesis -- the idea that new cells are produced in the adult brain -- has been a burgeoning field of scientific inquiry since the late 1990s, when researchers confirmed what had only been a theory in lab studies of human, primate and bird brains. Neural stem cells in the adult are stem cells that can self-renew and produce new neurons and the supporting cells of the brain, oligodendrocytes and astrocytes.

"Given the widespread interest in stem cells as well as interest in whether alterations to adult stem cells might contribute to cancer, our research findings should be of interest," Levison said.

Read more at Science Daily

May 10, 2022

New method to synchronize devices on Earth makes use of cosmic rays

Various technologies, networks and institutions benefit from or require accurate time keeping to synchronize their activities. Current ways of synchronizing time have some drawbacks that a new proposed method seeks to address. The cosmic time synchronizer works by synchronizing devices around cosmic ray events detected by those devices. This could bring accurate timing abilities to remote sensing stations, or even underwater, places that other methods cannot serve. Early tests show promise, but the real challenge may lie in the adoption of this new technique.

Humanity is intimately connected with the idea of time. Historically, we used the cosmos itself -- stars, the sun, and the moon -- to measure time and coordinate our activities. It's fitting, then, that researchers are looking out to the cosmos again to further develop our ability to keep time. Professor Hiroyuki Tanaka from Muographix at the University of Tokyo devised and tested a way to synchronize multiple devices so that they agree on the time, using cosmic rays from deep space. Appropriately, it's called cosmic time synchronization (CTS).

"It's relatively easy to keep time accurately these days. For example, atomic clocks have been doing this for decades now," said Tanaka. "However, these are large and expensive devices that are very easy to disrupt. This is one reason I have been working on an improved way to keep time. The other is that, related to time measurement, position measurement could also be made better. So really, CTS is a precursor to a potential replacement for GPS, but that's still a little further down the line."

A shared sense of time is critical because networked devices now underpin many aspects of life. Computer networks responsible for financial transactions must agree on the time so that the order of transactions can be ensured. Sensors that work in unison to observe various physical phenomena need to agree on the time so that, for example, the origin of a particular reading can be determined. Such sensors could even form part of some kind of disaster warning system.

CTS works thanks to cosmic rays from deep space that strike the atmosphere around 15 kilometers up, creating showers of particles including muons. The muons travel close to the speed of light, reaching the ground almost immediately; they easily penetrate water or rock, and they spread out as they travel to cover a few square kilometers of ground. Independent CTS-enabled devices under the same particle shower can detect the incoming muons, which carry a signature unique to the cosmic ray event that generated them. By sharing this information, CTS devices can confer with one another and synchronize their clocks to the moment the cosmic ray event took place. Such ultrahigh-energy cosmic ray strikes occur frequently enough -- about a hundred times per hour over every square kilometer of Earth -- for CTS devices to work together in real time.
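
As a rough software illustration of the idea, consider two CTS devices that each log local timestamps for detected muons; hits from a shared shower should appear in both logs at nearly the same moment, so matched pairs reveal the relative clock offset. The following is a minimal sketch under those assumptions -- the coincidence window, matching scheme, and function name are our own, not the published protocol:

    def estimate_clock_offset(hits_a, hits_b, window=1e-4):
        """Estimate device B's clock offset relative to device A from
        sorted lists of muon-hit timestamps (in seconds), so the two
        clocks can be brought into agreement."""
        offsets = []
        j = 0
        for t in hits_a:
            # Skip device-B hits too early to match this device-A hit.
            while j < len(hits_b) and hits_b[j] < t - window:
                j += 1
            if j < len(hits_b) and abs(hits_b[j] - t) <= window:
                offsets.append(hits_b[j] - t)
        if not offsets:
            return None  # no shared shower events observed
        offsets.sort()
        return offsets[len(offsets) // 2]  # median resists chance coincidences

At the quoted rate of roughly a hundred shower events per square kilometer per hour, nearby devices would accumulate enough matched hits to keep their clocks aligned continuously.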

Read more at Science Daily

Rare discovery: How a gene mutation causes higher intelligence

Synapses are the contact points in the brain via which nerve cells 'talk' to each other. Disturbances in this communication lead to diseases of the nervous system, since altered synaptic proteins, for example, can impair this complex molecular mechanism. This can result in mild symptoms, but also very severe disabilities in those affected.

The interest of the two neurobiologists Professor Tobias Langenhan and Professor Manfred Heckmann, from Leipzig and Würzburg respectively, was aroused when they read in a scientific publication about a mutation that damages a synaptic protein. At first, the affected patients attracted scientists' attention because the mutation caused them to go blind. However, doctors then noticed that the patients were also of above-average intelligence. "It's very rare for a mutation to lead to improvement rather than loss of function," says Langenhan, professor and holder of a chair at the Rudolf Schönheimer Institute of Biochemistry at the Faculty of Medicine.

The two neurobiologists from Leipzig and Würzburg have been using fruit flies to analyse synaptic functions for many years. "Our research project was designed to insert the patients' mutation into the corresponding gene in the fly and use techniques such as electrophysiology to test what then happens to the synapses. It was our assumption that the mutation makes patients so clever because it improves communication between the neurons which involve the injured protein," explains Langenhan. "Of course, you can't conduct these measurements on the synapses in the brains of human patients. You have to use animal models for that."

75 per cent of genes that cause diseases in humans also exist in fruit flies

First, the scientists, together with researchers from Oxford, showed that the fly protein called RIM is molecularly identical to its human counterpart. This was essential in order to be able to study the changes in the human brain in the fly. In the next step, the neurobiologists inserted mutations into the fly genome that matched exactly those found in the affected patients. They then took electrophysiological measurements of synaptic activity. "We actually observed that the animals with the mutation showed greatly increased transmission of information at the synapses. This amazing effect on the fly synapses is probably found in the same or a similar way in human patients, and could explain their increased cognitive performance, but also their blindness," concludes Professor Langenhan.

The scientists also found out how the increased transmission at the synapses occurs: the molecular components in the transmitting nerve cell that trigger the synaptic impulses move closer together as a result of the mutation effect and lead to increased release of neurotransmitters. A novel method, super-resolution microscopy, was one of the techniques used in the study. "This gives us a tool to look at and even count individual molecules and confirms that the molecules in the firing cell are closer together than they normally are," says Professor Langenhan, who was also assisted in the study by Professor Hartmut Schmidt's research group from the Carl Ludwig Institute in Leipzig.

Read more at Science Daily

Psychopathic individuals are more likely to have larger striatum region in the brain

Neuroscientists from Nanyang Technological University, Singapore (NTU Singapore), University of Pennsylvania, and California State University, have established the existence of a biological difference between psychopaths and non-psychopaths.

Using magnetic resonance imaging (MRI) scans, they found that a region of the forebrain known as the striatum was on average ten per cent larger in psychopathic individuals compared to a control group of individuals who had low or no psychopathic traits.

Psychopaths, or those with psychopathic traits, are generally defined as individuals who have an egocentric and antisocial personality, normally marked by a lack of remorse for their actions, a lack of empathy for others, and often criminal tendencies.

The striatum, a subcortical structure within the forebrain, coordinates multiple aspects of cognition, including motor and action planning, decision-making, motivation, reinforcement, and reward perception.

Previous studies have pointed to an overly active striatum in psychopaths but have not conclusively determined the impact of its size on behaviours. The new study reveals a significant biological difference between people who have psychopathic traits and those who do not.

While not all individuals with psychopathic traits end up breaking the law, and not all criminals meet the criteria for psychopathy, there is a marked correlation. There is clear evidence that psychopathy is linked to more violent behaviour.

The understanding of the role of biology in antisocial and criminal behaviour may help improve existing theories of behaviour, as well as inform policy and treatment options.

To conduct their study, the neuroscientists scanned the brains of 120 participants in the United States and interviewed them using the Psychopathy Checklist -- Revised, a psychological assessment tool to determine the presence of psychopathic traits in individuals.

Assistant Professor Olivia Choy, from NTU's School of Social Sciences, a neurocriminologist who co-authored the study, said: "Our study's results help advance our knowledge about what underlies antisocial behaviour such as psychopathy. We find that in addition to social environmental influences, it is important to consider that there can be differences in biology, in this case, the size of brain structures, between antisocial and non-antisocial individuals."

Professor Adrian Raine from the Departments of Criminology, Psychiatry, and Psychology at University of Pennsylvania, who co-authored the study, said: "Because biological traits, such as the size of one's striatum, can be passed from parent to child, these findings give added support to neurodevelopmental perspectives of psychopathy -- that the brains of these offenders do not develop normally throughout childhood and adolescence."

Professor Robert Schug from the School of Criminology, Criminal Justice, and Emergency Management at California State University, Long Beach, who co-authored the study, said: "The use of the Psychopathy Checklist -- Revised in a community sample remains a novel scientific approach: Helping us understand psychopathic traits in individuals who are not in jails and prisons, but rather in those who walk among us each day."

Highlighting the significance of the work done by the joint research team, Associate Professor Andrea Glenn from the Department of Psychology of The University of Alabama, who is not involved in the research, said: "By replicating and extending prior work, this study increases our confidence that psychopathy is associated with structural differences in the striatum, a brain region that is important in a variety of processes important for cognitive and social functioning. Future studies will be needed to understand the factors that may contribute to these structural differences."

The results of the study were published recently in the peer-reviewed academic publication Journal of Psychiatric Research.

Bigger striatum, larger appetite for stimulation

Through analyses of the MRI scans and results from the interviews to screen for psychopathy, the researchers linked having a larger striatum to an increased need for stimulation, through thrills and excitement, and a higher likelihood of impulsive behaviours.

The striatum is part of the basal ganglia, which are made up of clusters of neurons deep in the centre of the brain. The basal ganglia receive signals from the cerebral cortex, which controls cognition and social behaviour and discerns which sensory information warrants attention.

In the past two decades, the understanding of the striatum has expanded, yielding hints that the region is also linked to difficulties in social behaviour.

Previous studies have not addressed whether striatal enlargement is observed in adult females with psychopathic traits.

The neuroscientists say that within their study of 120 individuals, they examined 12 females and observed, for the first time, that psychopathy was linked to an enlarged striatum in females, just as in males. In human development, the striatum typically becomes smaller as a child matures, suggesting that psychopathy could be related to differences in how the brain develops.

Asst Prof Choy added: "A better understanding of the striatum's development is still needed. Many factors are likely involved in why one individual is more likely to have psychopathic traits than another individual. Psychopathy can be linked to a structural abnormality in the brain that may be developmental in nature. At the same time, it is important to acknowledge that the environment can also have effects on the structure of the striatum."

Prof Raine added: "We have always known that psychopaths go to extreme lengths to seek out rewards, including criminal activities that involve property, sex, and drugs. We are now finding out a neurobiological underpinning of this impulsive and stimulating behaviour in the form of enlargement to the striatum, a key brain area involved in rewards."

Read more at Science Daily

Cells take out the trash before they divide

MIT researchers have discovered that before cells start to divide, they do a little cleanup, tossing out molecules that they appear not to need anymore.

Using a new method they developed for measuring the dry mass of cells, the researchers found that cells lose about 4 percent of their mass as they enter cell division. The researchers believe that this emptying of trash helps cells to give their offspring a "fresh start," without the accumulated junk of the parent cell.

"Our hypothesis is that cells might be throwing out things that are building up, toxic components or just things that don't function properly that you don't want to have there. It could allow the newborn cells to be born with more functional contents," says Teemu Miettinen, an MIT research scientist and the lead author of the new study.

Scott Manalis, the David H. Koch Professor of Engineering in the departments of Biological Engineering and Mechanical Engineering, and a member of the Koch Institute for Integrative Cancer Research, is the senior author of the paper, which appears today in eLife. MIT biological engineering undergraduates Kevin Ly and Alice Lam are also authors of the paper.

Measuring mass

Measuring the dry mass of a cell -- the weight of its contents not including the water -- is commonly done using a microscopy technique called quantitative phase microscopy. This technique can measure cell growth, but it does not reveal information about the molecular content of the dry mass and it is difficult to use with cells that grow in suspension.

Manalis' lab has previously developed a technique for measuring the buoyant mass of cells, which is their mass as they float in a fluid such as water. This method measures buoyant mass by flowing cells through a channel embedded in a vibrating cantilever, which can be done repeatedly to track changes in a particular cell's mass over many hours or days.

For their new study, the researchers wanted to adapt the technique so that it could be used to calculate the dry mass of cells, as well as the density of the dry mass. About 10 years ago, they had discovered that a cell's dry mass can be calculated by first measuring the cell in normal water and then in heavy water (which contains deuterium instead of ordinary hydrogen); the two buoyant-mass measurements together determine the dry mass.
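
The arithmetic behind this is a two-equation system: if the cell's internal water exchanges freely with the surrounding fluid, only the dry content contributes to buoyant mass, so each measurement satisfies m_b = m_dry * (1 - rho_fluid / rho_dry), and measuring in two fluids of different density determines both unknowns. A minimal sketch under that assumption (densities are approximate; this is our illustration, not the paper's code):

    RHO_H2O = 1.000  # g/mL, normal water
    RHO_D2O = 1.105  # g/mL, heavy water (approximate)

    def dry_mass_and_density(mb_h2o, mb_d2o, rho1=RHO_H2O, rho2=RHO_D2O):
        """Solve mb1 = m_dry*(1 - rho1/rho_d) and mb2 = m_dry*(1 - rho2/rho_d)
        for the dry mass m_dry and the dry-mass density rho_d."""
        rho_d = (mb_h2o * rho2 - mb_d2o * rho1) / (mb_h2o - mb_d2o)
        m_dry = mb_h2o / (1.0 - rho1 / rho_d)
        return m_dry, rho_d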

However, heavy water is toxic to cells, so they were only able to obtain a single measurement per cell. Last year, Miettinen set out to see if he could design a system in which cells could be measured repeatedly with minimal exposure to heavy water.

In the system he came up with, cells are exposed to heavy water very briefly as they flow through microfluidic channels. It takes only one second for a cell to completely exchange its water content, so the researchers could measure the cell's mass when it was full of heavy water, compare it to the mass in normal water, and then calculate the dry mass.

"Our idea was that if we minimize the cells' exposure to the heavy water, we could engineer the system so that we could repeat this measurement over extended time periods without hurting the cell," Miettinen says. "That enabled us for the first time to track not just the dry mass of a cell, which is what others do using microscopic methods, but also the density of the dry mass, which informs us of the cell's biomolecular composition."

The researchers showed that their dry mass measurements qualitatively agreed with previous work using quantitative phase microscopy. And, in addition to providing density of the dry mass, the MIT team's method enables higher temporal resolution, which proved to be useful for revealing dynamics during mitosis (cell division).

Taking out the trash

In cells undergoing mitosis, the researchers used their new technique to study what happens to cell mass and composition during that process. In a 2019 paper, Miettinen and Manalis found that buoyant mass increases slightly as mitosis begins. However, other studies that used quantitative phase microscopy suggested that cells might retain or lose dry mass early in cell division.

In the new study, the MIT team measured three types of cancer cells, which are easier to study because they divide more frequently than healthy cells. To their surprise, the researchers found that the dry mass of cells actually decreases when they enter the cell division cycle. This mass is regained later on, before division is complete.

Further experiments revealed that as cells enter mitosis, they ramp up activity of a process called lysosomal exocytosis. Lysosomes are cell organelles that break down or recycle cellular waste products, and exocytosis is the process they use to jettison any molecules that aren't needed any more.

The researchers also found that the density of the dry mass increases as the cells lose dry mass, leading them to believe that the cells are losing low-density molecules such as lipids or lipoproteins. They hypothesize that cells use this process to clear out toxic molecules before dividing. "What we are seeing is that cells might be trying to throw out damaged components before dividing," Miettinen says.

The researchers speculate that their findings may help explain why neurons, which do not divide, are more likely to accumulate toxic proteins such as Tau or amyloid beta, which are linked to the development of Alzheimer's disease.

The findings could also be relevant to cancer: Cancer cells can expel some chemotherapy drugs using exocytosis, helping them to become resistant to the drugs. In theory, preventing exocytosis from occurring before cell division could help to make cancer cells more susceptible to such drugs.

"There are diseases where we might want upregulate exocytosis, for example in neurodegenerative diseases, but then there are diseases like cancer where maybe we want to dial it down," Miettinen says. "In the future, if we could better understand the molecular mechanism behind this, and find a way to trigger it outside of mitosis or prevent it during mitosis, we could really have a new toggle to use when treating disease."

Read more at Science Daily

May 9, 2022

In a pair of merging supermassive black holes, a new method for measuring the void

Three years ago, the first ever image of a black hole stunned the world. A black pit of nothingness enclosed by a fiery ring of light. That iconic image of the black hole at the center of galaxy Messier 87 came into focus thanks to the Event Horizon Telescope, a global network of synchronized radio dishes acting as one giant telescope.

Now, a pair of Columbia researchers have devised a potentially easier way of gazing into the abyss. Outlined in complementary studies in Physical Review Letters and Physical Review D, their imaging technique could allow astronomers to study black holes smaller than the one in M87, a monster with a mass of 6.5 billion suns, and in galaxies more distant than M87, which, at 55 million light-years away, is still relatively close to our own Milky Way.

The technique has just two requirements. First, you need a pair of supermassive black holes in the throes of merging. Second, you need to be looking at the pair at a nearly side-on angle. From this sideways vantage point, as one black hole passes in front of the other, you should be able to see a bright flash of light as the glowing ring of the black hole farther away is magnified by the black hole closest to you, a phenomenon known as gravitational lensing.

The lensing effect is well known, but what the researchers discovered here was a hidden signal: a distinctive dip in brightness corresponding to the "shadow" of the black hole in back. This subtle dimming can last from a few hours to a few days, depending on how massive the black holes are and how closely entwined their orbits are. If you measure how long the dip lasts, the researchers say, you can estimate the size and shape of the shadow cast by the black hole's event horizon, the point of no exit beyond which nothing escapes, not even light.
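
As a back-of-the-envelope illustration (our own, not the authors' model): if the foreground black hole sweeps across the background hole's shadow at the pair's relative orbital speed, the shadow's size is roughly that speed multiplied by the dip duration.

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # solar mass, kg

    def shadow_size_estimate(m1_suns, m2_suns, separation_m, t_dip_s):
        """Estimate the eclipsed shadow's diameter (in meters) from the
        dip duration, assuming a circular orbit viewed edge-on."""
        m_total = (m1_suns + m2_suns) * M_SUN
        v_rel = math.sqrt(G * m_total / separation_m)  # relative orbital speed
        return v_rel * t_dip_s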

"It took years and a massive effort by dozens of scientists to make that high-resolution image of the M87 black holes," said the study's first author, Jordy Davelaar, a postdoc at Columbia and the Flatiron Institute's Center for Computational Astrophysics. "That approach only works for the biggest and closest black holes -- the pair at the heart of M87 and potentially our own Milky Way."

He added, "with our technique, you measure the brightness of the black holes over time, you don't need to resolve each object spatially. It should be possible to find this signal in many galaxies."

The shadow of a black hole is both its most mysterious and informative feature. "That dark spot tells us about the size of the black hole, the shape of the space-time around it, and how matter falls into the black hole near its horizon," said co-author Zoltan Haiman, a physics professor at Columbia.

Black hole shadows may also hold the secret to the true nature of gravity, one of the fundamental forces of our universe. Einstein's theory of gravity, known as general relativity, predicts the size of black holes. Physicists, therefore, have sought them out to test alternative theories of gravity in an effort to reconcile two competing ideas of how nature works: Einstein's general relativity, which explains large scale phenomena like orbiting planets and the expanding universe, and quantum physics, which explains how tiny particles like electrons and photons can occupy multiple states at once.

The researchers became interested in flaring supermassive black holes after spotting a suspected pair of supermassive black holes at the center of a far-off galaxy in the early universe. NASA's planet-hunting Kepler space telescope was scanning for the tiny dips in brightness corresponding to a planet passing in front of its host star. Instead, Kepler ended up detecting the flares of what Haiman and his colleagues claim are a pair of merging black holes.

They named the distant galaxy "Spikey" for the spikes in brightness triggered by its suspected black holes magnifying each other on each full rotation via the lensing effect. To learn more about the flare, Haiman built a model with his postdoc, Davelaar.

They were confused, however, when their simulated pair of black holes produced an unexpected, but periodic, dip in brightness each time one orbited in front of the other. At first, they thought it was a coding mistake. But further checking led them to trust the signal.

As they looked for a physical mechanism to explain it, they realized that each dip in brightness closely matched the time it took for the black hole closest to the viewer to pass in front of the shadow of the black hole in back.

The researchers are currently looking for other telescope data to try to confirm the dip they saw in the Kepler data and verify that Spikey is, in fact, harboring a pair of merging black holes. If it all checks out, the technique could be applied to a handful of other suspected pairs of merging supermassive black holes among the 150 or so that have been spotted so far and are awaiting confirmation.

As more powerful telescopes come online in the coming years, other opportunities may arise. The Vera Rubin Observatory, set to open this year, has its sights on more than 100 million supermassive black holes. Further black hole scouting will be possible when the space-based gravitational wave detector LISA is launched in 2030.

Read more at Science Daily

Spider can hide underwater for 30 minutes

A tropical spider species uses a "film" of air to hide underwater from predators for as long as 30 minutes, according to faculty at Binghamton University, State University of New York.

Lindsey Swierk, assistant research professor of biological sciences at Binghamton University, State University of New York, observed a large tropical spider (Trechalea extensa) fleeing from humans and hiding underwater; this species was not previously known to use water to escape. Swierk had previously observed a Costa Rican lizard species that was able to stay underwater for 16 minutes to hide from predators.

"For a lot of species, getting wet and cold is almost as risky to survival as dealing with their predators to begin with," said Swierk. "Trechalea spiders weren't previously known to hide underwater from threats -- and certainly not for so long."

The spider spent about 30 minutes underwater. While submerged, it kept a "film" of air over its entire body. Swierk and her colleagues suspect that the fuzzy hairs that cover its body help it to maintain this film of air, which helps to prevent thermal loss while underwater, or to prevent water from entering the spider's respiratory organs.

"The film of air surrounding the spider when it is underwater appears to be held in place by hydrophobic hairs covering the spider's entire body surface," said Swierk. "It's so complete that the spider almost looks like it's been dipped in silver. The film of air might serve to keep the respiratory openings away from water, since these spiders are air-breathing. The film of air might also help to minimize thermal loss to the cold stream water that the spider submerges itself in."

According to Swierk, this observation provides new insight into how species can cope with the problem of finding refuge underwater.

Read more at Science Daily

Confirmed: Atmospheric helium levels are rising

Scientists at Scripps Institution of Oceanography at UC San Diego used an unprecedented technique to detect that levels of helium are rising in the atmosphere, resolving an issue that has lingered among atmospheric chemists for decades.

The atmospheric abundance of the 4-helium (4He) isotope is rising because 4He is released during the burning and extraction of fossil fuels. The researchers report that it is increasing at a very small but, for the first time, clearly measurable rate. The 4He isotope itself does not add to the greenhouse effect that is making the planet warmer, but measurements of it could serve as indirect markers of fossil-fuel use.

The National Science Foundation-supported study appears today in the journal Nature Geoscience.

"The main motivation was to resolve a longstanding controversy in the science community about atmospheric helium concentrations," said study lead author Benni Birner, a former graduate student and now postdoctoral researcher at Scripps Institution of Oceanography at UC San Diego.

The isotope 4He is produced by radioactive decay in the Earth's crust and accumulates in the same reservoirs as fossil fuels, in particular those of natural gas. During the extraction and combustion of fossil fuels, 4He is coincidentally released, which creates another means to evaluate the scale of industrial activity.

The study's breakthrough is in the technique the Scripps Oceanography team used to measure how much helium is in the atmosphere. Birner and Scripps geoscientists Jeff Severinghaus, Bill Paplawsky, and Ralph Keeling created a precise method to compare the 4He isotope to levels of the common atmospheric gas nitrogen. Because nitrogen levels in the atmosphere are constant, an increase in He/N2 is indicative of the rate of 4He buildup in the atmosphere.

Study co-author and Scripps Oceanography geochemist Ralph Keeling, overseer of the famed carbon dioxide measurement known as the Keeling Curve, describes the study as a "masterpiece of fundamental geochemistry." Though helium is relatively easy for scientists to detect in air samples, present at levels of five parts per million of air, no one had done the work to measure it carefully enough to observe an atmospheric increase, he said.

The study also provides a foundation for scientists to better understand the valuable 3-helium (3He) isotope, which has uses for nuclear fusion, cryogenics, and other applications. Proposals to acquire the scarce gas from the moon are an indication of the lengths to which manufacturers will go to harvest it.

According to previous work by other researchers, the 4He isotope exists in the atmosphere in what appears to be an unvarying ratio with 3He. The atmospheric rise of the 4He isotope measured at Scripps therefore implies that the 3He isotope must be rising at a comparable rate. The research by Birner's team raises several questions about the accuracy of scientists' previous assumptions about how 3He is produced and in what quantity.

Read more at Science Daily

Multi-tasking wearable continuously monitors glucose, alcohol, and lactate

Imagine being able to measure your blood sugar levels, know if you've had too much to drink, and track your muscle fatigue during a workout, all in one small device worn on your skin. Engineers at the University of California San Diego have developed a prototype of such a wearable that can continuously monitor several health stats -- glucose, alcohol, and lactate levels -- simultaneously in real-time.

The device is about the size of a stack of six quarters. It is applied to the skin through a Velcro-like patch of microscopic needles, or microneedles, that are each about one-fifth the width of a human hair. Wearing the device is not painful -- the microneedles barely penetrate the surface of the skin to sense biomolecules in interstitial fluid, which is the fluid surrounding the cells beneath the skin. The device can be worn on the upper arm and sends data wirelessly to a custom smartphone app.

Researchers at the UC San Diego Center for Wearable Sensors describe their device in a paper published May 9 in Nature Biomedical Engineering.

"This is like a complete lab on the skin," said center director Joseph Wang, a professor of nanoengineering at UC San Diego and co-corresponding author of the paper. "It is capable of continuously measuring multiple biomarkers at the same time, allowing users to monitor their health and wellness as they perform their daily activities."

Most commercial health monitors, such as continuous glucose monitors for patients with diabetes, only measure one signal. The problem with that, the researchers said, is that it leaves out information that could help people with diabetes, for example, manage their disease more effectively. Monitoring alcohol levels is useful because drinking alcohol can lower glucose levels. Knowing both levels can help people with diabetes prevent their blood sugar from dropping too low after having a drink. Combining information about lactate, which can be monitored during exercise as a biomarker for muscle fatigue, is also useful because physical activity influences the body's ability to regulate glucose.

"With our wearable, people can see the interplay between their glucose spikes or dips with their diet, exercise and drinking of alcoholic beverages. That could add to their quality of life as well," said Farshad Tehrani, a nanoengineering Ph.D. student in Wang's lab and one of the co-first authors of the study.

Microneedles merged with electronics

The wearable consists of a microneedle patch connected to a case of electronics. Different enzymes on the tips of the microneedles react with glucose, alcohol and lactate in interstitial fluid. These reactions generate small electric currents, which are analyzed by electronic sensors and communicated wirelessly to an app that the researchers developed. The results are displayed in real time on a smartphone.
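
A hypothetical sketch of that readout path: each enzyme electrode produces a current roughly proportional to its analyte's concentration, which the electronics can convert using a per-analyte calibration. The constants and names below are invented for illustration; the paper's actual calibration procedure is not reproduced here.

    # analyte: (sensitivity in nA per mM, baseline current in nA) -- made-up values
    CALIBRATION = {
        "glucose": (2.5, 1.0),
        "alcohol": (1.8, 0.6),
        "lactate": (3.1, 0.9),
    }

    def current_to_concentration(analyte, current_na):
        """Convert a measured electrode current (nA) to a concentration (mM)
        using a simple linear calibration."""
        sensitivity, baseline = CALIBRATION[analyte]
        return max(0.0, (current_na - baseline) / sensitivity)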

An advantage of using microneedles is that they directly sample the interstitial fluid, and research has shown that biochemical levels measured in that fluid correlate well with levels in blood.

"We're starting at a really good place with this technology in terms of clinical validity and relevance," said Patrick Mercier, a professor of electrical and computer engineering at UC San Diego and co-corresponding author of the paper. "That lowers the barriers to clinical translation."

The microneedle patch, which is disposable, can be detached from the electronic case for easy replacement. The electronic case, which is reusable, houses the battery, electronic sensors, wireless transmitter and other electronic components. The device can be recharged on any wireless charging pad used for phones and smartwatches.

Integrating all these components together into one small, wireless wearable was one of the team's biggest challenges. It also required some clever design and engineering to combine the reusable electronics, which must stay dry, with the microneedle patch, which gets exposed to biological fluid.

"The beauty of this is that it is a fully integrated system that someone can wear without being tethered to benchtop equipment," said Mercier, who is also the co-director of the UC San Diego Center for Wearable Sensors.

Testing

The wearable was tested on five volunteers, who wore the device on their upper arm while exercising, eating a meal, and drinking a glass of wine. The device was used to continuously monitor the volunteers' glucose levels simultaneously with either their alcohol or lactate levels. The glucose, alcohol and lactate readings taken by the device closely matched those taken, respectively, by a commercial blood glucose monitor, a Breathalyzer, and blood lactate tests performed in the lab.

Read more at Science Daily

May 8, 2022

Proposed spacecraft navigation uses x-rays from dead stars

A pulsar -- the magnetized, rapidly spinning remnant of a star that has collapsed into a neutron star -- rotates anywhere from one rotation per second to hundreds of rotations per second. These celestial bodies, each 12 to 15 miles in diameter, generate light in the x-ray wavelength range. Researchers at The Grainger College of Engineering, University of Illinois Urbana-Champaign developed a new way spacecraft can use signals from multiple pulsars to navigate in deep space.

"We can use star trackers to determine the direction a spacecraft is pointing, but to learn the precise location of the spacecraft, we rely on radio signals sent between the spacecraft and the Earth, which can take a lot of time and requires use of oversubscribed infrastructure, like NASA's Deep Space Network," said Zach Putnam, professor in the Department of Aerospace Engineering at Illinois.

"Using x-ray navigation eliminates those two factors, but until now, required an initial position estimate of the spacecraft as a starting point. This research presents a system that finds candidates for possible spacecraft locations without prior information, so the spacecraft can navigate autonomously."

"Also, our ground communication systems for deep space missions are overloaded right now," he said. "This system would give spacecraft autonomy and reduce the dependency on the ground. X-ray pulsar navigation gets us around that and allows us to determine where we are, without calling."

Putnam said that because our atmosphere filters out all the x-rays, you have to be in space to observe them. Pulsars emit electromagnetic radiation that arrives as pulses: we measure a peak in the x-ray signal each time the pulsar spins around and points toward us -- like the beam of light cast from the beacon on a lighthouse.

"Each pulsar has its own characteristic signal, like a fingerprint," he said. "We have records of the x-rays over time from the 2,000 or so pulsars and how they've changed over time."

Much as with the Global Positioning System, location can be determined from the intersection of three signals.

"The issue with pulsars is that they spin so fast that the signal repeats itself a lot," he said. "By comparison, GPS repeats every two weeks. With pulsars, while there are an infinite number of possible spacecraft locations, we know how far apart these candidate locations are from each other.

"We are looking at determining spacecraft position within domains that have diameters on the order of multiple astronomical units, like the size of the orbit of Jupiter -- something like a square with one billion miles on a side. The challenge we are trying to address is, how do we intelligently observe pulsars and fully determine all possible spacecraft locations in a domain without using an excessive amount of compute resources," Putnam said.

The algorithm developed by graduate student Kevin Lohan combines observations from numerous pulsars to determine all the possible positions of the spacecraft, processing the candidate intersections in either two or three dimensions.
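
To make the ambiguity concrete, here is a one-dimensional sketch (our own illustration, not Lohan's algorithm): each pulsar restricts position along a line of sight to a lattice of points spaced by the speed of light times its period, and intersecting the lattices of several pulsars prunes the candidates.

    C = 299_792_458.0  # speed of light, m/s

    def candidate_positions(pulsars, domain_m, tol_m=1e3):
        """Positions along one axis consistent with every measured pulse phase.
        pulsars: list of (period_s, phase) pairs with phase in [0, 1); each
        pulsar allows positions x = C * period * (phase + k), k any integer."""
        period0, phase0 = pulsars[0]
        spacing0 = C * period0  # distance between repeats of the first pulsar
        k_max = int(domain_m / spacing0) + 1
        candidates = []
        for k in range(-k_max, k_max + 1):
            x = spacing0 * (phase0 + k)
            if abs(x) > domain_m:
                continue
            # Keep x only if every other pulsar's phase also matches there.
            ok = True
            for period, phase in pulsars[1:]:
                spacing = C * period
                err = abs(((x / spacing - phase + 0.5) % 1.0) - 0.5) * spacing
                if err > tol_m:
                    ok = False
                    break
            if ok:
                candidates.append(x)
        return candidates

Because the pulsars' periods are incommensurate, few lattice points agree within tolerance, and the candidate set shrinks rapidly as more pulsars are observed.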

Read more at Science Daily