New research indicates river delta deposits within Mars' Jezero crater -- the destination of NASA's Perseverance rover on the Red Planet -- formed over time scales that promoted habitability and enhanced preservation of evidence.
Undulating streaks of land visible from space reveal rivers once coursed across the Martian surface -- but for how long did the water flow? Enough time to record evidence of ancient life, according to a new Stanford study.
Scientists have speculated that the Jezero crater on Mars -- the site of the next NASA rover mission to the Red Planet -- could be a good place to look for markers of life. A new analysis of satellite imagery supports that hypothesis. By modeling the length of time it took to form the layers of sediment in a delta deposited by an ancient river as it poured into the crater, researchers have concluded that if life once existed near the Martian surface, traces of it could have been captured within the delta layers.
"There probably was water for a significant duration on Mars and that environment was most certainly habitable, even if it may have been arid," according to lead author Mathieu Lapôtre, an assistant professor of geological sciences at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "We showed that sediments were deposited rapidly and that if there were organics, they would have been buried rapidly, which means that they would likely have been preserved and protected."
Jezero crater was selected for NASA's next rover mission partly because the site contains a river delta, a type of landform known on Earth to effectively preserve organic molecules associated with life. But without an understanding of the rates and durations of delta-building events, the analogy remained speculative. The new research, published online on April 23 in AGU Advances, offers guidance on where to recover samples in order to better constrain the ancient Martian climate and the duration of delta formation. NASA's Perseverance rover, expected to launch in July 2020 as part of the first Mars sample return mission, will collect those samples.
Extrapolating from Earth
The study incorporates a recent discovery the researchers made about Earth: Single-threaded sinuous rivers that don't have plants growing over their banks move sideways about ten times faster than those with vegetation. Based on the strength of Mars' gravity, and assuming the Red Planet did not have plants, the scientists estimate that the delta in Jezero crater took at least 20 to 40 years to form, but that formation was likely discontinuous and spread out across about 400,000 years.
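To make that scaling argument concrete, here is a minimal back-of-the-envelope sketch in Python of the kind of timescale estimate described above. All numerical values are illustrative assumptions, not the study's actual measurements.

```python
# Back-of-the-envelope sketch of the delta-formation timescale reasoning above.
# All values are illustrative placeholders, NOT the study's parameters.

delta_extent_m = 600            # assumed along-stream extent of delta deposits (m)
migration_rate_veg_m_per_yr = 2 # assumed lateral migration rate with vegetated banks (m/yr)
unvegetated_speedup = 10        # unvegetated single-threaded rivers migrate ~10x faster (per the study)

migration_rate_mars = migration_rate_veg_m_per_yr * unvegetated_speedup

# Cumulative time the river must actually have been flowing to build the delta
active_flow_years = delta_extent_m / migration_rate_mars

# If wet spells were rare, the total elapsed time is much longer
assumed_wet_fraction = 7.5e-5   # assumed fraction of elapsed time with active river flow
elapsed_years = active_flow_years / assumed_wet_fraction

print(f"Active flow time:   ~{active_flow_years:,.0f} years")
print(f"Total elapsed time: ~{elapsed_years:,.0f} years")
```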
"This is useful because one of the big unknowns on Mars is time," Lapôtre said. "By finding a way to calculate rate for the process, we can start gaining that dimension of time."
Because single-threaded, meandering rivers are most often found with vegetation on Earth, their occurrence without plants remained largely undetected until recently. It was thought that before the appearance of plants, only braided rivers, made up of multiple interlaced channels, existed. Now that researchers know to look for them, they have found meandering rivers on Earth today where there are no plants, such as in the McLeod Springs Wash in the Toiyabe basin of Nevada.
"This specifically hadn't been done before because single-threaded rivers without plants were not really on anyone's radar," Lapôtre said. "It also has cool implications for how rivers might have worked on Earth before there were plants."
The researchers also estimated that wet spells conducive to significant delta buildup were about 20 times less frequent on ancient Mars than they are on Earth today.
"People have been thinking more and more about the fact that flows on Mars probably were not continuous and that there have been times when you had flows and other times when you had dry spells," Lapôtre said. "This is a novel way of putting quantitative constraints on how frequently flows probably happened on Mars."
Findings from Jezero crater could aid our understanding of how life evolved on Earth. If life once existed there, it likely didn't evolve beyond the single-cell stage, scientists say. That's because Jezero crater formed over 3.5 billion years ago, long before organisms on Earth became multicellular. If life once existed at the surface, its evolution was stalled by some unknown event that sterilized the planet. That means the Martian crater could serve as a kind of time capsule preserving signs of life as it might once have existed on Earth.
Read more at Science Daily
Apr 24, 2020
Hubble celebrates its 30th anniversary with a tapestry of blazing starbirth
Hubble Space Telescope's iconic images and scientific breakthroughs have redefined our view of the Universe. To commemorate three decades of scientific discoveries, this image is one of the most photogenic examples of the many turbulent stellar nurseries the telescope has observed during its 30-year lifetime. The portrait features the giant nebula NGC 2014 and its neighbour NGC 2020 which together form part of a vast star-forming region in the Large Magellanic Cloud, a satellite galaxy of the Milky Way, approximately 163,000 light-years away. The image is nicknamed the "Cosmic Reef" because it resembles an undersea world.
On 24 April 1990 the Hubble Space Telescope was launched aboard the space shuttle Discovery, along with a five-astronaut crew. Deployed into low-Earth orbit a day later, the telescope has since opened a new eye onto the cosmos that has been transformative for our civilization.
Hubble is revolutionising modern astronomy not only for astronomers, but also by taking the public on a wondrous journey of exploration and discovery. Hubble's seemingly never-ending, breathtaking celestial snapshots provide a visual shorthand for its exemplary scientific achievements. Unlike any other telescope before it, Hubble has made astronomy relevant, engaging, and accessible for people of all ages. The mission has yielded to date 1.4 million observations and provided data that astronomers around the world have used to write more than 17,000 peer-reviewed scientific publications, making it one of the most prolific space observatories in history. Its rich data archive alone will fuel future astronomy research for generations to come.
Each year, the NASA/ESA Hubble Space Telescope dedicates a small portion of its precious observing time to taking a special anniversary image, showcasing particularly beautiful and meaningful objects. These images continue to challenge scientists with exciting new surprises and to fascinate the public with ever more evocative observations.
This year, Hubble is celebrating this new milestone with a portrait of two colourful nebulae that reveals how energetic, massive stars sculpt their homes of gas and dust. Although NGC 2014 and NGC 2020 appear to be separate in this visible-light image, they are actually part of one giant star formation complex. The star-forming regions seen here are dominated by the glow of stars at least 10 times more massive than our Sun. These stars have short lives of only a few million years, compared to the 10-billion-year lifetime of our Sun.
The sparkling centerpiece of NGC 2014 is a grouping of bright, hefty stars near the centre of the image that has blown away its cocoon of hydrogen gas (coloured red) and dust in which it was born. A torrent of ultraviolet radiation from the star cluster is illuminating the landscape around it. These massive stars also unleash fierce winds that are eroding the gas cloud above and to the right of them. The gas in these areas is less dense, making it easier for the stellar winds to blast through them, creating bubble-like structures reminiscent of brain coral that have earned the nebula the nickname "Brain Coral."
By contrast, the blue-coloured nebula below NGC 2014 has been shaped by one mammoth star that is roughly 200,000 times more luminous than our Sun. It is an example of a rare class of stars called Wolf-Rayet stars. They are thought to be the descendants of the most massive stars. Wolf-Rayet stars are very luminous and have a high rate of mass loss through powerful winds. The star in the Hubble image is 15 times more massive than the Sun and is unleashing powerful winds, which have cleared out the area around it. It has ejected its outer layers of gas, sweeping them around into a cone-like shape, and exposing its searing hot core. The behemoth appears offset from the centre because the telescope is viewing the cone from a slightly tilted angle. In a few million years, the star might become a supernova. The brilliant blue colour of the nebula comes from oxygen gas that is heated to roughly 11,000 degrees Celsius, which is much hotter than the hydrogen gas surrounding it.
Stars, both big and small, are born when clouds of dust and gas collapse because of gravity. As more and more material falls onto the forming star, it finally becomes hot and dense enough at its centre to trigger the nuclear fusion reactions that make stars, including our Sun, shine. Massive stars make up only a few percent of the billions of stars in our Universe. Yet they play a crucial role in shaping our Universe, through stellar winds, supernova explosions, and the production of heavy elements.
Read more at Science Daily
Water may look like a simple liquid; however, it is anything but simple to analyze
An international team of scientists led by Professor Martina Havenith from Ruhr-Universität Bochum (RUB) has been able to shed new light on the properties of water at the molecular level. In particular, they were able to accurately describe the interactions between three water molecules, which contribute significantly to the energy landscape of water. The research could pave the way to better understanding and predicting the behaviour of water under different conditions, even extreme ones. The results were published online in the journal Angewandte Chemie on 19 April 2020.
Interactions via vibrations
Although water looks at first glance like a simple liquid, it has many unusual properties, one of them being that it is less dense when frozen than when liquid. In the simplest description, liquids are characterised by the interactions between pairs of neighbouring molecules, which is usually sufficient for a good description, but not in the case of water: the interactions in water dimers account for only 75 per cent of the energy that keeps water together. Martina Havenith, head of the Bochum-based Chair of Physical Chemistry II and spokesperson for the Ruhr Explores Solvation (Resolv) Cluster of Excellence, and her colleagues from Emory University in Atlanta, US, recently published an accurate description of the interactions related to the water dimer. To get access to the cooperative interactions, which make up the remaining 25 per cent of the total water interaction, the water trimer had to be investigated.
Now, the team led by Martina Havenith, in collaboration with colleagues from Emory University and the University of Mississippi, US, has for the first time accurately described the interaction energy among three water molecules. They tested modern theoretical descriptions against the spectroscopic fingerprint of these intermolecular interactions.
Obstacles for experimental research
For more than 40 years, scientists have developed computational models and simulations to describe the energies involved in the water trimer. Experiments have been less successful, despite some pioneering insights from gas-phase studies, and they rely on spectroscopy. The technique works by irradiating a water sample and recording how much light is absorbed. The resulting pattern reflects the different types of excitations of intermolecular motions involving more than one water molecule. Unfortunately, to obtain these spectroscopic fingerprints for water dimers and trimers, one needs to irradiate in the terahertz frequency region, and high-power laser sources have been lacking for that frequency range.
This technical gap has been filled only recently. In the current publication, the RUB scientists used the free-electron lasers at Radboud University in Nijmegen in the Netherlands, which provide high power in the terahertz frequency region. The laser was aimed at tiny droplets of superfluid helium, cooled to an extremely low temperature of minus 272.75 degrees Celsius. These droplets can collect water molecules one by one, making it possible to isolate small aggregates such as dimers and trimers. In this way the scientists were able to irradiate exactly the molecules they wanted and to acquire the first comprehensive spectrum of the water trimer in the terahertz frequency region.
Read more at Science Daily
How birds evolved big brains
Common raven
The study, published online today in the journal Current Biology, reveals that prior to the mass extinction at the end of the Cretaceous Period, birds and non-avian dinosaurs had similar relative brain sizes. After the extinction, the brain-body scaling relationship shifted dramatically as some types of birds underwent an explosive radiation to re-occupy ecological space vacated by extinct groups.
"One of the big surprises was that selection for small body size turns out to be a major factor in the evolution of large-brained birds," says Dr. Daniel Ksepka, Curator of Science at the Bruce Museum and lead author of the study. "Many successful bird families evolved proportionally large brains by shrinking down to smaller body sizes while their brain sizes stayed close to those of their larger-bodied ancestors."
In order to understand how bird brains changed, a team of 37 scientists used CT scan data to create endocasts (models of the brain based on the shape of the skull cavity) of hundreds of birds and dinosaurs, which they combined with a large existing database of brain measurements from modern birds. They then analyzed brain-body allometry: the way brain size scales with body size.
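As a rough illustration of what a brain-body allometry analysis involves, here is a minimal sketch in Python: a log-log regression of brain size against body size, with residuals indicating relatively large- or small-brained species. The species values are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical body masses and brain (endocast) sizes for a handful of species.
body_mass_g  = np.array([300, 1200, 9000, 450, 25, 70000])
brain_mass_g = np.array([4.5, 10.0, 30.0, 6.0, 0.9, 95.0])

# Allometry: brain = a * body^b, which is linear in log-log space:
#   log(brain) = log(a) + b * log(body)
log_body  = np.log10(body_mass_g)
log_brain = np.log10(brain_mass_g)
slope, intercept = np.polyfit(log_body, log_brain, 1)

# Residuals from the fitted line flag species with proportionally large or small brains
predicted = intercept + slope * log_body
residuals = log_brain - predicted

print(f"allometric exponent b ≈ {slope:.2f}")
print("relative brain size (log10 residual):", np.round(residuals, 2))
```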
"There is no clear line between the brains of advanced dinosaurs and primitive birds," notes co-author Dr. Amy Balanoff of Johns Hopkins University. "Birds like emus and pigeons have the same brains sizes you would expect for a theropod dinosaur of the same body size, and in fact some species like moa have smaller-than-expected brains."
The two groups of birds with truly exceptional brain sizes evolved relatively recently: parrots and corvids (crows, ravens, and kin). These birds show tremendous cognitive capacity, including the ability to use tools and language, and to remember human faces. The new study finds that parrots and crows exhibited very high rates of brain evolution that may have helped them achieve such high proportional brain sizes.
"Several groups of birds show above average rates of brain and body size evolution," remarks co-author Dr. N. Adam Smith of the Campbell Geology Museum at Clemson University. "But crows are really off the charts -- they outpaced all other birds. Our results suggest that calling someone 'bird-brained' is actually quite a compliment!"
Read more at Science Daily
Apr 23, 2020
Ocean microbes' role in climate effects
A new study shows that "hotspots" of nutrients surrounding phytoplankton -- which are tiny marine algae producing approximately half of the oxygen we breathe every day -- play an outsized role in the release of a gas involved in cloud formation and climate regulation.
The new research quantifies the way specific marine bacteria process a key chemical called dimethylsulfoniopropionate (DMSP), which is produced in enormous amounts by phytoplankton. This chemical plays a pivotal role in the way sulfur and carbon get consumed by microorganisms in the ocean and released into the atmosphere.
The work is reported in the journal Nature Communications, in a paper by MIT graduate student Cherry Gao, former MIT professor of civil and environmental engineering Roman Stocker (now a professor at ETH Zurich, in Switzerland), in collaboration with Jean-Baptiste Raina and Professor Justin Seymour of University of Technology Sydney in Australia, and four others.
More than a billion tons of DMSP is produced annually by microorganisms in the oceans, accounting for 10 percent of the carbon that gets taken up by phytoplankton -- a major "sink" for carbon dioxide, without which the greenhouse gas would be building up even faster in the atmosphere. But exactly how this compound gets processed and how its different chemical pathways figure into global carbon and sulfur cycles had not been well-understood until now, Gao says.
"DMSP is a major nutrient source for bacteria," she says. "It satisfies up to 95 percent of bacterial sulfur demand and up to 15 percent of bacterial carbon demand in the ocean. So given the ubiquity and the abundance of DMSP, we expect that these microbial processes would have a significant role in the global sulfur cycle."
Gao and her co-workers genetically modified a marine bacterium called Ruegeria pomeroyi, causing it to fluoresce when one of two different pathways for processing DMSP was activated, allowing the relative expression of the processes to be analyzed under a variety of conditions.
One of the two pathways, called demethylation, produces carbon and sulfur based nutrients that the microbes can use to sustain their growth. The other pathway, called cleavage, produces a gas called dimethylsulfide (DMS), which Gao explains "is the compound that's responsible for the smell of the sea. I actually smelled the ocean a lot in the lab when I was experimenting."
DMS is the gas responsible for most of the biologically derived sulfur that enters the atmosphere from the oceans. Once in the atmosphere, sulfur compounds are a key source of condensation for water molecules, so their concentration in the air affects both rainfall patterns and the overall reflectivity of the atmosphere through cloud generation. Understanding the process responsible for much of that production could be important in multiple ways for refining climate models.
Those climate implications are "why we're interested in knowing when bacteria decide to use the cleavage pathway versus the demethylation pathway," in order to better understand how much of the important DMS gets produced under what conditions, Gao says. "This has been an open question for at least two decades."
The new study found that the concentration of DMSP in the vicinity regulates which pathway the bacteria use. Below a certain concentration, demethylation was dominant, but above a level of about 10 micromoles, the cleavage process dominated.
"What was really surprising to us was, upon experimentation with the engineered bacteria, we found that the concentrations of DMSP in which the cleavage pathway dominates is higher than expected -- orders of magnitude higher than the average concentration in the ocean," she says.
That suggests that this process hardly takes place under typical ocean conditions, the researchers concluded. Rather, microscale "hotspots" of elevated DMSP concentration are probably responsible for a highly disproportionate amount of global DMS production. These microscale "hotspots" are areas surrounding certain phytoplankton cells where extremely high amounts of DMSP are present at about a thousand times greater than average oceanic concentration.
"We actually did a co-incubation experiment between the engineered bacteria and a DMSP-producing phytoplankton," Gao says. The experiment showed "that indeed, bacteria increased their expression of the DMS-producing pathway, closer to the phytoplankton."
The new analysis should help researchers understand key details of how these microscopic marine organisms, through their collective behavior, are affecting global-scale biogeochemical and climatic processes, the researchers say.
Read more at Science Daily
First-ever comprehensive geologic map of the moon
Have you ever wondered what kind of rocks make up those bright and dark splotches on the moon? Well, the USGS has just released a new authoritative map to help explain the 4.5-billion-year-old history of our nearest neighbor in space.
For the first time, the entire lunar surface has been completely mapped and uniformly classified by scientists from the USGS, in collaboration with NASA and the Lunar and Planetary Institute.
The lunar map, called the "Unified Geologic Map of the Moon," will serve as the definitive blueprint of the moon's surface geology for future human missions and will be invaluable for the international scientific community, educators and the public-at-large. The digital map is available online now and shows the moon's geology in incredible detail (1:5,000,000 scale).
"People have always been fascinated by the moon and when we might return," said current USGS Director and former NASA astronaut Jim Reilly. "So, it's wonderful to see USGS create a resource that can help NASA with their planning for future missions."
To create the new digital map, scientists used information from six Apollo-era regional maps along with updated information from recent satellite missions to the moon. The existing historical maps were redrawn to align them with the modern data sets, thus preserving previous observations and interpretations. Along with merging new and old data, USGS researchers also developed a unified description of the stratigraphy, or rock layers, of the moon. This resolved issues from previous maps where rock names, descriptions and ages were sometimes inconsistent.
"This map is a culmination of a decades-long project," said Corey Fortezzo, USGS geologist and lead author. "It provides vital information for new scientific studies by connecting the exploration of specific sites on the moon with the rest of the lunar surface."
Elevation data for the moon's equatorial region came from stereo observations collected by the Terrain Camera on the recent SELENE (Selenological and Engineering Explorer) mission led by JAXA, the Japan Aerospace Exploration Agency. Topography for the north and south poles was supplemented with NASA's Lunar Orbiter Laser Altimeter data.
Further Information: https://astrogeology.usgs.gov/search/map/Moon/Geology/Unified_Geologic_Map_of_the_Moon_GIS_v2
From Science Daily
Coronaviruses and bats have been evolving together for millions of years
Fruit bats
"We found that there's a deep evolutionary history between bats and coronaviruses," says Steve Goodman, MacArthur Field Biologist at Chicago's Field Museum and an author of a paper just released in Scientific Reports detailing the discovery. "Developing a better understanding of how coronaviruses evolved can help us build public health programs in the future." The study was led by Université de La Réunion scientists Léa Joffrin and Camille Lebarbenchon, who conducted the genetic analyses in the laboratory of "Processus infectieux en milieu insulaire tropical (PIMIT)" on Réunion Island, focusing on emerging infectious diseases on islands in the western Indian Ocean.
Many people use "coronavirus" as a synonym for the virus causing the current COVID-19 pandemic. In fact, there are a vast number of different coronaviruses, potentially as many as there are bat species, and most of them are not known to be transmitted to humans and pose no known threat. The coronaviruses carried by the bats studied in this paper are different from the one behind COVID-19, but by learning about coronaviruses in bats in general, we can better understand the virus affecting us today.
All animals have viruses that live inside them, and bats, as well as a range of other mammal groups, happen to be natural carriers of coronaviruses. These coronaviruses don't appear to be harmful to the bats, but there's potential for them to be dangerous to other animals if the viruses have opportunities to jump between species. This study examines the genetic relationships between different strains of coronaviruses and the animals they live in, which sets the stage for a better understanding of the transfer of viruses from animals to humans.
Goodman, who has been based on Madagascar for several decades, and his colleagues took swab samples and, in some cases, blood samples from more than a thousand bats representing 36 species found on islands in the western Indian Ocean and coastal areas of the African nation of Mozambique. Eight percent of the bats they sampled were carrying a coronavirus.
"This is a very rough estimate of the proportion of infected bats. There is increasing evidence for seasonal variation in the circulation of these viruses in bats, suggesting that this number may significantly vary according to the time of the year," says Camille Lebarbenchon, Disease Ecologist at the Université de La Réunion.
The researchers ran genetic analyses of the coronaviruses present in these bats. By comparing the coronaviruses isolated and sequenced in the context of this study with ones from other animals including dolphins, alpacas, and humans, they were able to build a giant coronavirus family tree. This family tree shows how the different kinds of coronavirus are related to each other.
"We found that for the most part, each of the different genera of families of bats for which coronavirus sequences were available had their own strains," says Goodman. "Moreover, based on the evolutionary history of the different bat groups, it is clear that there is a deep coexistence between bats (at the level of genus and family) and their associated coronaviruses." For example, fruit bats of the family Pteropodidae from different continents and islands formed a cluster in their tree and were genetically different than the coronavirus strains of other groups of bats found in the same geographical zones.
The team found that in rare cases, bats of different families, genera, and species that live in the same caves and have closely spaced day roost sites shared the same strain of coronavirus. But in this study, the transmission between species is the exception, not the rule. "It is quite reassuring that the transmission of coronavirus in the region between two bat species seems to be very rare given the high diversity of bat coronaviruses. Next, we need to understand environmental, biological, and molecular factors leading to these rare shifts," says Léa Joffrin, a disease ecologist who worked on bat coronaviruses during her PhD at the Université de La Réunion.
Learning how different strains of coronavirus evolved could be key for preventing future coronavirus outbreaks. "Before you can actually figure out programs for public health and try to deal with the possible shift of certain diseases to humans, or from humans to animals, you have to know what's out there. This is kind of the blueprint," says Goodman.
Co-author Patrick Mavingui, microbial ecologist and head of the PIMIT Laboratory adds, "The development of serological methods targeting coronavirus strains circulating in the Indian Ocean will help show whether there have already been discrete passages in human populations, and their interaction with the hosts will allow a better understanding of the emergence risk."
The study also highlights the importance of museum collections, says Goodman. The researchers used, in part, bat specimens housed in the Field Museum, to confirm the identities of the animals employed in this study. These voucher specimens helped them confidently say which bats and from which geographical regions hosted the different strains of coronaviruses. The research also drew from genetic databases like GenBank. "This information is important for public health, and the point of departure is closely linked to museum specimens," says Goodman. "We're able to use museum material to study the evolution of a group of viruses and its potential applications across wildlife in the world."
Read more at Science Daily
Key nose cells identified as likely COVID-19 virus entry points
SARS-CoV-2 illustration
Reported today (23rd April) in Nature Medicine, this first publication with the Lung Biological Network is part of an ongoing international effort to use Human Cell Atlas data to understand infection and disease. It further shows that cells in the eye and some other organs also contain the viral-entry proteins. The study also predicts how a key entry protein is regulated with other immune system genes and reveals potential targets for the development of treatments to reduce transmission.
Novel coronavirus disease -- COVID-19 -- affects the lungs and airways. Patients' symptoms can be flu-like, including fever, coughing and sore throat, while some people may not experience symptoms but still carry transmissible virus. In the worst cases, the virus causes pneumonia that can ultimately lead to death. The virus is thought to be spread through respiratory droplets produced when an infected person coughs or sneezes, and appears to be easily transmitted within affected areas. So far the virus has spread to more than 184 countries and claimed more than 180,000 lives.
Scientists around the world are trying to understand exactly how the virus spreads, to help prevent transmission and develop a vaccine. While it is known that the virus that causes COVID-19 disease, known as SARS-CoV-2, uses a similar mechanism to infect our cells as a related coronavirus that caused the 2003 SARS epidemic, the exact cell types involved in the nose had not previously been pinpointed.
To discover which cells could be involved in COVID-19 transmission, researchers analysed multiple Human Cell Atlas (HCA) consortium datasets of single cell RNA sequencing, from more than 20 different tissues of non-infected people. These included cells from the lung, nasal cavity, eye, gut, heart, kidney and liver. The researchers looked for which individual cells expressed both of two key entry proteins that are used by the COVID-19 virus to infect our cells.
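As a rough illustration of this kind of screen, the sketch below counts cells that express both entry factors in a tiny made-up single-cell count table grouped by cell type. The numbers, labels and "expressed" threshold are illustrative assumptions, not Human Cell Atlas data or the consortium's actual pipeline.

```python
import pandas as pd

# Hypothetical cells x genes count table and cell-type annotations.
counts = pd.DataFrame(
    {"ACE2": [3, 0, 1, 0, 2], "TMPRSS2": [5, 1, 0, 0, 4]},
    index=["cell1", "cell2", "cell3", "cell4", "cell5"],
)
cell_types = pd.Series(
    ["goblet", "ciliated", "goblet", "basal", "ciliated"], index=counts.index
)

expressed = counts > 0                      # treat any nonzero count as "expressed"
double_positive = expressed.all(axis=1)     # cells expressing both entry factors

# Fraction of double-positive cells per annotated cell type
fraction_by_type = double_positive.groupby(cell_types).mean().sort_values(ascending=False)
print(fraction_by_type)
```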
Dr Waradon Sungnak, the first author on the paper from Wellcome Sanger Institute, said: "We found that the receptor protein -- ACE2 -- and the TMPRSS2 protease that can activate SARS-CoV-2 entry are expressed in cells in different organs, including the cells on the inner lining of the nose. We then revealed that mucus-producing goblet cells and ciliated cells in the nose had the highest levels of both these COVID-19 virus proteins, of all cells in the airways. This makes these cells the most likely initial infection route for the virus."
Dr Martijn Nawijn, from the University Medical Center Groningen in the Netherlands, said, on behalf of the HCA Lung Biological Network: "This is the first time these particular cells in the nose have been associated with COVID-19. While there are many factors that contribute to virus transmissibility, our findings are consistent with the rapid infection rates of the virus seen so far. The location of these cells on the surface of the inside of the nose make them highly accessible to the virus, and also may assist with transmission to other people."
The two key entry proteins ACE2 and TMPRSS2 were also found in cells in the cornea of the eye and in the lining of the intestine. This suggests another possible route of infection via the eye and tear ducts, and also revealed a potential for fecal-oral transmission.
When cells are damaged or fighting an infection, various immune genes are activated. The study showed that ACE2 receptor production in the nose cells is probably switched on at the same time as these other immune genes.
The work was carried out as part of the global Human Cell Atlas consortium which aims to create reference maps of all human cells to understand health and disease. More than 1,600 people across 70 countries are involved in the HCA community, and the data is openly available to scientists worldwide.
Dr Sarah Teichmann, a senior author from the Wellcome Sanger Institute and co-chair of the HCA Organising Committee, said: "As we're building the Human Cell Atlas it is already being used to understand COVID-19 and identify which of our cells are critical for initial infection and transmission. This information can be used to better understand how coronavirus spreads. Knowing which exact cell types are important for virus transmission also provides a basis for developing potential treatments to reduce the spread of the virus."
The global HCA Lung Biological Network continues to analyse the data in order to provide further insights into the cells and targets likely to be involved in COVID-19, and to relate them to patient characteristics.
Read more at Science Daily
Apr 22, 2020
Why relying on new technology won't save the planet
Overreliance on promises of new technology to solve climate change is enabling delay, say researchers from Lancaster University.
Their research published in Nature Climate Change calls for an end to a longstanding cycle of technological promises and reframed climate change targets.
Contemporary technological proposals for responding to climate change include nuclear fusion power, giant carbon sucking machines, ice-restoration using millions of wind-powered pumps, and spraying particulates in the stratosphere.
Researchers Duncan McLaren and Nils Markusson from Lancaster Environment Centre say that: "For forty years, climate action has been delayed by technological promises. Contemporary promises are equally dangerous. Our work exposes how such promises have raised expectations of more effective policy options becoming available in the future, and thereby enabled a continued politics of prevarication and inadequate action.
"Prevarication is not necessarily intentional, but such promises can feed systemic 'moral corruption', in which current elites are enabled to pursue self-serving pathways, while passing off risk onto vulnerable people in the future and in the global South.
The article describes a history of such promises, showing how the overarching international goal of 'avoiding dangerous climate change' has been reinterpreted and differently represented in the light of new modelling methods, scenarios and technological promises.
The researchers argue that the targets, models and technologies have co-evolved in ways that enable delay: "Each novel promise not only competes with existing ideas, but also downplays any sense of urgency, enabling the repeated deferral of political deadlines for climate action and undermining societal commitment to meaningful responses.
They conclude: "Putting our hopes in yet more new technologies is unwise. Instead, cultural, social and political transformation is essential to enable widespread deployment of both behavioural and technological responses to climate change."
Read more at Science Daily
Heavy cost of excessive drinking on people's decision making
A new study from psychologists at the University of Bath highlights the true impact of heavy drinking on our ability to plan, set goals and make decisions the following day. Published in the Journal of Clinical Medicine, the study provides new evidence as to why hangovers cost the wider economy so much.
A recent report, which involved the same team, found that hangovers cost the UK economy £1.4 billion a year in wasted productivity, including people working while hungover.
The latest study involved thirty-five 18 to 30-year-olds who had reported experiencing a hangover at least once in the past month. Individuals completed measures which assessed their ability to switch attention between tasks, to update and process information from multiple sources and to guide and plan behaviour, whilst experiencing a hangover.
Their findings show how, when hungover, individuals have a reduced ability to retain information in their short-term memory -- for example retaining a telephone number whilst taking a message at the same time. They also highlight impairments when it comes to individuals' ability to switch attention between tasks and focus on a goal.
Few studies have explored how hangovers affect key cognitive processes, the so-called 'core executive functions', which we use in daily life to plan, set goals and make decisions.
Lead author Craig Gunn of Bath's Department of Psychology explained: "We know that hangovers can have a big economic cost, but we did not know how hangover affects our ability to switch attention from one task to another, update information in our mind, and maintain focus on set goals. Our study asked participants to complete tasks measuring these processes when they had a hangover and again when they had not consumed alcohol. The results suggest that all of these processes are impaired by a hangover, which could have consequences for other aspects of our lives."
Senior author, Dr Sally Adams from the Addiction & Mental Health Group at the University of Bath added: "Anecdotally, we may experience reduced performance of daily tasks when we are hungover such as planning activities and dividing our attention between several tasks. Our data show that this impairment is likely the result of reduced capability in several core executive functions, which are important for tasks such as workplace performance and driving."
The authors suggest these findings could also have important implications during the current lockdown situation. Earlier this month, Alcohol Change UK estimated that 8.6 million adults in the UK were drinking more frequently. Those drinking heavily at home are at increased risk of experiencing a hangover the next day, which may impact their ability and productivity when working at home.
From Science Daily
Rising carbon dioxide causes more than a climate crisis -- it may directly harm our ability to think
Carbon dioxide in air concept illustration
"It's amazing how high CO2 levels get in enclosed spaces," said Kris Karnauskas, CIRES Fellow, associate professor at CU Boulder and lead author of the new study published today in the AGU journal GeoHealth. "It affects everybody -- from little kids packed into classrooms to scientists, business people and decision makers to regular folks in their houses and apartments."
Shelly Miller, professor in CU Boulder's school of engineering and coauthor adds that "building ventilation typically modulates CO2 levels in buildings, but there are situations when there are too many people and not enough fresh air to dilute the CO2." CO2 can also build up in poorly ventilated spaces over longer periods of time, such as overnight while sleeping in bedrooms, she said.
Put simply, when we breathe air with high CO2 levels, the CO2 levels in our blood rise, reducing the amount of oxygen that reaches our brains. Studies show that this can increase sleepiness and anxiety, and impair cognitive function.
We all know the feeling: Sit too long in a stuffy, crowded lecture hall or conference room and many of us begin to feel drowsy or dull. In general, CO2 concentrations are higher indoors than outdoors, the authors wrote. And outdoor CO2 in urban areas is higher than in pristine locations. The CO2 concentrations in buildings reflect both the gas exchanged with the outdoor air and the CO2 generated by the building's occupants as they exhale.
Atmospheric CO2 levels have been rising since the Industrial Revolution, reaching a peak of 414 ppm at NOAA's Mauna Loa Observatory in Hawaii in 2019. In a scenario in which people on Earth do not reduce greenhouse gas emissions, the Intergovernmental Panel on Climate Change projects that outdoor CO2 levels could climb to 930 ppm by 2100. Urban areas typically run around 100 ppm higher than this background.
Karnauskas and his colleagues developed a comprehensive approach that considers predicted future outdoor CO2 concentrations, the impact of localized urban emissions, a model of the relationship between indoor and outdoor CO2 levels, and the impact on human cognition. They found that if outdoor CO2 concentrations do rise to 930 ppm, that would nudge indoor concentrations to a harmful level of 1400 ppm.
"At this level, some studies have demonstrated compelling evidence for significant cognitive impairment," said Anna Schapiro, assistant professor of psychology at the University of Pennsylvania and a coauthor on the study. "Though the literature contains some conflicting findings and much more research is needed, it appears that high level cognitive domains like decision-making and planning are especially susceptible to increasing CO2 concentrations."
In fact, at 1400 ppm, CO2 concentrations may cut our basic decision-making ability by 25 percent, and complex strategic thinking by around 50 percent, the authors found.
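As a rough back-of-envelope illustration of the chain described above (a sketch for orientation only -- the simple additive offsets, including the assumed occupant contribution, are not the authors' published model):

```python
# Back-of-envelope sketch only; the additive offsets below are illustrative
# assumptions, not the study's actual indoor-air model.
outdoor_2100_ppm = 930        # IPCC high-emissions projection cited above
urban_offset_ppm = 100        # typical urban enhancement cited above
occupant_offset_ppm = 370     # assumed buildup from occupants' exhaled CO2

indoor_2100_ppm = outdoor_2100_ppm + urban_offset_ppm + occupant_offset_ppm
print(indoor_2100_ppm)        # 1400 ppm, the level the authors flag as harmful

# Headline impairment figures quoted in the article at ~1400 ppm
basic_decision_loss = 0.25    # basic decision-making ability
complex_thinking_loss = 0.50  # complex strategic thinking
```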
The cognitive impacts of rising CO2 levels represent what scientists call a "direct" effect of the gas' concentration, much like ocean acidification. In both cases, elevated CO2 itself -- not the subsequent warming it also causes -- is what triggers harm.
The team says there may be ways to adapt to higher indoor CO2 levels, but the best way to prevent concentrations from reaching harmful levels is to reduce fossil fuel emissions. This would require globally adopted mitigation strategies such as those set forth by the Paris Agreement of the United Nations Framework Convention on Climate Change.
Karnauskas and his coauthors hope these findings will spark further research on 'hidden' impacts of climate change such as that on cognition. "This is a complex problem, and our study is at the beginning. It's not just a matter of predicting global (outdoor) CO2 levels," he said. "It's going from the global background emissions, to concentrations in the urban environment, to the indoor concentrations, and finally the resulting human impact. We need even broader, interdisciplinary teams of researchers to explore this: investigating each step in our own silos will not be enough."
Read more at Science Daily
Link between obesity and sleep loss
Sleep loss is often assumed to lead to obesity, but a new study published this week in PLOS Biology found that the direction of this relationship might actually be flipped: it is not sleep loss that leads to obesity, but rather excess weight that can cause poor sleep, according to researchers from the University of Pennsylvania's Perelman School of Medicine and the University of Nevada, Reno, who reached their findings by studying the microscopic worm Caenorhabditis elegans (C. elegans).
"We think that sleep is a function of the body trying to conserve energy in a setting where energetic levels are going down. Our findings suggest that if you were to fast for a day, we would predict you might get sleepy because your energetic stores would be depleted," said study co-author David Raizen, MD, PhD, an associate professor of Neurology and member of the Chronobiology and Sleep Institute at Penn.
Raizen emphasized that while these findings in worms may not translate directly to humans, C. elegans offer a surprisingly good model for studying mammalian slumber. Like all other animals that have nervous systems, they need sleep. But unlike humans, who have complex neural circuitry and are difficult to study, a C. elegans has only 302 neurons -- one of which scientists know for certain is a sleep regulator.
In humans, acute sleep disruption can result in increased appetite and insulin resistance, and people who chronically get fewer than six hours of sleep per night are more likely to be obese and diabetic. Moreover, starvation in humans, rats, fruit flies, and worms has been shown to affect sleep, indicating that it is regulated, at least in part, by nutrient availability. However, the ways in which sleeping and eating work in tandem have remained unclear.
"We wanted to know, what is sleep actually doing? Short sleep and other chronic conditions, like diabetes, are linked, but it's just an association. It's not clear if short sleep is causing the propensity for obesity, or that the obesity, perhaps, causes the propensity for short sleep," said study co-author Alexander van der Linden, PhD, an associate professor of Biology at the University of Nevada, Reno.
To study the association between metabolism and sleep, the researchers genetically modified C. elegans to "turn off" a neuron that controls sleep. These worms could still eat, breathe, and reproduce, but they lost their ability to sleep. With this neuron turned off, the researchers saw a severe drop in adenosine triphosphate (ATP) levels, which is the body's energy currency.
"That suggests that sleep is an attempt to conserve energy; it's not actually causing the loss of energy," Raizen explained.
In previous research, the van der Linden lab studied a gene in C. elegans called KIN-29. This gene is homologous to the Salt-Inducible Kinase (SIK-3) gene in humans, which was already known to signal sleep pressure. Surprisingly, when the researchers knocked out the KIN-29 gene to create sleepless worms, the mutant C. elegans accumulated excess fat -- resembling the human obesity condition -- even though their ATP levels dropped.
The researchers hypothesized that the release of fat stores is a mechanism by which sleep is promoted, and that the reason KIN-29 mutants did not sleep is that they were unable to liberate their fat. To test this hypothesis, the researchers again manipulated the KIN-29 mutant worms, this time expressing an enzyme that "freed" their fat. With that manipulation, the worms were again able to sleep.
Raizen said this could explain one reason why people with obesity may experience sleep problems. "There could be a signaling problem between the fat stores and the brain cells that control sleep," he said.
While there is still much to unravel about sleep, Raizen said that this paper takes the research community one step closer to understanding one of its core functions -- and how to treat common sleep disorders.
"There is a common, over-arching sentiment in the sleep field that sleep is all about the brain, or the nerve cells, and our work suggests that this isn't necessarily true," he said. "There is some complex interaction between the brain and the rest of the body that connects to sleep regulation."
Read more at Science Daily
Apr 21, 2020
Studying our galaxy's 'water worlds'
Astrophysical observations have shown that Neptune-like water-rich exoplanets are common in our galaxy. These "water worlds" are believed to be covered with a thick layer of water, hundreds to thousands of miles deep, above a rocky mantle.
While water-rich exoplanets are common, their composition is very different from Earth's, so there are many unknowns about these planets' structure, composition and geochemical cycles.
In seeking to learn more about these planets, an international team of researchers, led by Arizona State University, has provided one of the first mineralogy lab studies for water-rich exoplanets. The results of their study have been recently published in the journal Proceedings of the National Academy of Sciences.
"Studying the chemical reactions and processes is an essential step toward developing an understanding of these common planet types," said co-author Dan Shim, of ASU's School of Earth and Space Exploration.
The general scientific conjecture is that water and rock form separate layers in the interiors of water worlds: because water is lighter, a rocky layer should lie beneath the water layer. However, the extreme pressure and temperature at the boundary between the water and rocky layers could fundamentally change the behavior of these materials.
To simulate this high pressure and temperature in the lab, lead author and research scientist Carole Nisr conducted experiments at Shim's Lab for Earth and Planetary Materials at ASU using high pressure diamond-anvil cells.
For their experiment, the team immersed silica in water, compressed the sample between diamonds to very high pressure, then heated the sample with laser beams to several thousand degrees Fahrenheit.
The team also conducted laser heating at the Argonne National Laboratory in Illinois. To monitor the reaction between silica and water, X-ray measurements were taken while the laser heated the sample at high pressures.
What they found was an unexpected new solid phase with silicon, hydrogen and oxygen all together.
"Originally, it was thought that water and rock layers in water-rich planets were well-separated," Nisr said. "But we discovered through our experiments a previously unknown reaction between water and silica and stability of a solid phase roughly in an intermediate composition. The distinction between water and rock appeared to be surprisingly 'fuzzy' at high pressure and high temperature."
The researchers hope that these findings will advance our knowledge on the structure and composition of water-rich planets and their geochemical cycles.
Read more at Science Daily
Infant temperament predicts personality more than 20 years later
Researchers investigating how temperament shapes adult life-course outcomes have found that behavioral inhibition in infancy predicts a reserved, introverted personality at age 26. For those individuals who show sensitivity to making errors in adolescence, the findings indicated a higher risk for internalizing disorders (such as anxiety and depression) in adulthood. The study, funded by the National Institutes of Health and published in Proceedings of the National Academy of Sciences, provides robust evidence of the impact of infant temperament on adult outcomes.
"While many studies link early childhood behavior to risk for psychopathology, the findings in our study are unique," said Daniel Pine, M.D., a study author and chief of the NIMH Section on Development and Affective Neuroscience. "This is because our study assessed temperament very early in life, linking it with outcomes occurring more than 20 years later through individual differences in neural processes."
Temperament refers to biologically based individual differences in the way people emotionally and behaviorally respond to the world. During infancy, temperament serves as the foundation of later personality. One specific type of temperament, called behavioral inhibition (BI), is characterized by cautious, fearful, and avoidant behavior toward unfamiliar people, objects, and situations. BI has been found to be relatively stable across toddlerhood and childhood, and children with BI have been found to be at greater risk for developing social withdrawal and anxiety disorders than children without BI.
Although these findings hint at the long-term outcomes of inhibited childhood temperament, only two studies to date have followed inhibited children from early childhood to adulthood. The current study, conducted by researchers at the University of Maryland, College Park, the Catholic University of America, Washington, D.C., and the National Institute of Mental Health, recruited its participants at 4 months of age and characterized them for BI at 14 months (almost two years earlier than the previously published longitudinal studies). In addition, unlike the two previously published studies, the researchers included a neurophysiological measure to try to identify individual differences in risk for later psychopathology.
The researchers assessed the infants for BI at 14 months of age. At age 15, these participants returned to the lab to provide neurophysiological data. These neurophysiological measures were used to assess error-related negativity (ERN), which is a negative dip in the electrical signal recorded from the brain that occurs following incorrect responses on computerized tasks. Error-related negativity reflects the degree to which people are sensitive to errors. A larger error-related negativity signal has been associated with internalizing conditions such as anxiety, and a smaller error-related negativity has been associated with externalizing conditions such as impulsivity and substance use. The participants returned at age 26 for assessments of psychopathology, personality, social functioning, and education and employment outcomes.
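For orientation, the ERN is typically estimated from response-locked EEG by contrasting the average signal on error trials with that on correct trials shortly after the response. The sketch below is a generic, hypothetical illustration of that idea, not the analysis pipeline used in this study:

```python
import numpy as np

def ern_amplitude(epochs, is_error, times, window=(0.0, 0.1)):
    """Crude ERN estimate: mean error-minus-correct voltage in a short
    post-response window. Hypothetical helper for illustration only.

    epochs   : array (n_trials, n_samples), response-locked EEG in microvolts
    is_error : boolean array (n_trials,) marking incorrect responses
    times    : array (n_samples,) of times in seconds relative to the response
    """
    mask = (times >= window[0]) & (times <= window[1])
    error_mean = epochs[is_error][:, mask].mean()
    correct_mean = epochs[~is_error][:, mask].mean()
    return error_mean - correct_mean  # more negative value => larger ERN
```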
"It is amazing that we have been able to keep in touch with this group of people over so many years. First their parents, and now they, continue to be interested and involved in the work," said study author Nathan Fox, Ph.D., of the University of Maryland Department of Human Development and Quantitative Methodology.
The researchers found that BI at 14 months of age predicted, at age 26, a more reserved personality, fewer romantic relationships in the past 10 years, and lower social functioning with friends and family. BI at 14 months also predicted higher levels of internalizing psychopathology in adulthood, but only for those who also displayed larger error-related negativity signals at age 15. BI was not associated with externalizing general psychopathology or with education and employment outcomes.
This study highlights the enduring nature of early temperament on adult outcomes and suggests that neurophysiological markers such as error-related negativity may help identify individuals most at risk for developing internalizing psychopathology in adulthood.
"We have studied the biology of behavioral inhibition over time and it is clear that it has a profound effect influencing developmental outcome," concluded Dr. Fox.
Read more at Science Daily
"While many studies link early childhood behavior to risk for psychopathology, the findings in our study are unique," said Daniel Pine, M.D., a study author and chief of the NIMH Section on Development and Affective Neuroscience. "This is because our study assessed temperament very early in life, linking it with outcomes occurring more than 20 years later through individual differences in neural processes."
Temperament refers to biologically based individual differences in the way people emotionally and behaviorally respond to the world. During infancy, temperament serves as the foundation of later personality. One specific type of temperament, called behavioral inhibition (BI), is characterized by cautious, fearful, and avoidant behavior toward unfamiliar people, objects, and situations. BI has been found to be relatively stable across toddlerhood and childhood, and children with BI have been found to be at greater risk for developing social withdrawal and anxiety disorders than children without BI.
Although these findings hint at the long-term outcomes of inhibited childhood temperament, only two studies to date have followed inhibited children from early childhood to adulthood. The current study, conducted by researchers at the University of Maryland, College Park, the Catholic University of America, Washington, D.C., and the National Institute of Mental Health, recruited their participant sample at 4 months of age and characterized them for BI at 14 months (almost two years earlier than the previously published longitudinal studies). In addition, unlike the two previously published studies, the researchers included a neurophysiological measure to try to identify individual differences in risk for later psychopathology.
The researchers assessed the infants for BI at 14 months of age. At age 15, these participants returned to the lab to provide neurophysiological data. These neurophysiological measures were used to assess error-related negativity (ERN), which is a negative dip in the electrical signal recorded from the brain that occurs following incorrect responses on computerized tasks. Error-related negativity reflects the degree to which people are sensitive to errors. A larger error-related negativity signal has been associated with internalizing conditions such as anxiety, and a smaller error-related negativity has been associated with externalizing conditions such as impulsivity and substance use. The participants returned at age 26 for assessments of psychopathology, personality, social functioning, and education and employment outcomes.
"It is amazing that we have been able to keep in touch with this group of people over so many years. First their parents, and now they, continue to be interested and involved in the work," said study author Nathan Fox, Ph.D., of the University of Maryland Department of Human Development and Quantitative Methodology.
The researchers found that BI at 14 months of age predicted, at age 26, a more reserved personality, fewer romantic relationships in the past 10 years, and lower social functioning with friends and family. BI at 14 months also predicted higher levels of internalizing psychopathology in adulthood, but only for those who also displayed larger error-related negativity signals at age 15. BI was not associated with externalizing general psychopathology or with education and employment outcomes.
This study highlights the enduring nature of early temperament on adult outcomes and suggests that neurophysiological markers such as error-related negativity may help identify individuals most at risk for developing internalizing psychopathology in adulthood.
"We have studied the biology of behavioral inhibition over time and it is clear that it has a profound effect influencing developmental outcome," concluded Dr. Fox.
Read more at Science Daily
Exoplanet apparently disappears in latest Hubble observations
What astronomers thought was a planet beyond our solar system has now seemingly vanished from sight. Vanishing planets are the stuff of science fiction -- think of Superman's home planet Krypton exploding -- but astronomers are looking for a plausible real-world explanation.
One interpretation is that the object, first photographed in 2004, was never a full-sized planet at all, but rather a vast, expanding cloud of dust produced in a collision between two large bodies orbiting the bright nearby star Fomalhaut. Follow-up observations could confirm this extraordinary conclusion.
"These collisions are exceedingly rare and so this is a big deal that we actually get to see one," said András Gáspár of the University of Arizona, Tucson. "We believe that we were at the right place at the right time to have witnessed such an unlikely event with NASA's Hubble Space Telescope."
"The Fomalhaut system is the ultimate test lab for all of our ideas about how exoplanets and star systems evolve," added George Rieke of the University of Arizona's Steward Observatory. "We do have evidence of such collisions in other systems, but none of this magnitude has been observed in our solar system. This is a blueprint of how planets destroy each other."
The object, called Fomalhaut b, was first announced in 2008, based on data taken in 2004 and 2006. It was clearly visible in several years of Hubble observations that revealed it was a moving dot. Until then, evidence for exoplanets had mostly been inferred through indirect detection methods, such as subtle back-and-forth stellar wobbles, and shadows from planets passing in front of their stars.
Unlike other directly imaged exoplanets, however, nagging puzzles arose with Fomalhaut b early on. The object was unusually bright in visible light, but did not have any detectable infrared heat signature. Astronomers conjectured that the added brightness came from a huge shell or ring of dust encircling the planet that may possibly have been collision-related. The orbit of Fomalhaut b also appeared unusual, possibly very eccentric.
"Our study, which analyzed all available archival Hubble data on Fomalhaut revealed several characteristics that together paint a picture that the planet-sized object may never have existed in the first place," said Gáspár.
The team emphasizes that the final nail in the coffin came when their data analysis of Hubble images taken in 2014 showed the object had vanished, to their disbelief. Adding to the mystery, earlier images showed the object to continuously fade over time, they say. "Clearly, Fomalhaut b was doing things a bona fide planet should not be doing," said Gáspár.
The interpretation is that Fomalhaut b is slowly expanding from the smashup that blasted a dissipating dust cloud into space. Taking into account all available data, Gáspár and Rieke think the collision occurred not too long prior to the first observations taken in 2004. By now the debris cloud, consisting of dust particles around 1 micron (1/50th the diameter of a human hair), is below Hubble's detection limit. The dust cloud is estimated to have expanded by now to a size larger than the orbit of Earth around our Sun.
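A quick back-of-envelope check of the numbers above (the starting size and end date are assumptions for illustration; the team's actual dynamical model is far more detailed): if the debris cloud grew from essentially nothing at the 2004 observations to roughly the radius of Earth's orbit by about 2020, the implied average expansion speed is on the order of a few hundred metres per second.

```python
# Rough order-of-magnitude estimate only; assumed values, not the paper's fit.
AU_M = 1.496e11                      # one astronomical unit in metres
YEAR_S = 3.156e7                     # one year in seconds

cloud_radius_m = 1.0 * AU_M          # assume radius ~ Earth's orbital radius
elapsed_s = (2020 - 2004) * YEAR_S   # time since the first 2004 images

expansion_speed = cloud_radius_m / elapsed_s
print(round(expansion_speed))        # ~300 m/s average expansion speed
```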
Equally confounding is that the team reports that the object is more likely on an escape path, rather than on an elliptical orbit, as expected for planets. This is based on the researchers adding later observations to the trajectory plots from earlier data. "A recently created massive dust cloud, experiencing considerable radiative forces from the central star Fomalhaut, would be placed on such a trajectory," said Gáspár. "Our model is naturally able to explain all independent observable parameters of the system: its expansion rate, its fading, and its trajectory."
Because Fomalhaut b is presently inside a vast ring of icy debris encircling the star, colliding bodies would likely be a mixture of ice and dust, like the comets that exist in the Kuiper belt on the outer fringe of our solar system. Gáspár and Rieke estimate that each of these comet-like bodies measured about 125 miles (200 kilometers) across (roughly half the size of the asteroid Vesta).
According to the authors, their model explains all the observed characteristics of Fomalhaut b. Sophisticated dust dynamical modeling done on a cluster of computers at the University of Arizona shows that such a model is able to fit all the observations quantitatively. According to the authors' calculations, the Fomalhaut system, located about 25 light-years from Earth, may experience one of these events only every 200,000 years.
Read more at Science Daily
Origins of human language pathway in the brain at least 25 million years old
Previously, a precursor of the language pathway was thought by many scientists to have emerged more recently, about 5 million years ago, with a common ancestor of both apes and humans.
For neuroscientists, this is comparable to finding a fossil that illuminates evolutionary history. However, unlike bones, brains do not fossilize. Instead, neuroscientists must infer what the brains of common ancestors may have been like by studying brain scans of living primates and comparing them to humans.
Professor Chris Petkov of the Faculty of Medical Sciences, Newcastle University, UK, the study lead, said: "It is like finding a new fossil of a long lost ancestor. It is also exciting that there may be an older origin yet to be discovered still."
The international team of European and US scientists carried out the brain imaging study and analysis of auditory regions and brain pathways in humans, apes and monkeys, which is published in Nature Neuroscience.
They discovered a segment of this language pathway in the human brain that interconnects the auditory cortex with frontal lobe regions, important for processing speech and language. Although speech and language are unique to humans, the link via the auditory pathway in other primates suggests an evolutionary basis in auditory cognition and vocal communication.
Professor Petkov added: "We predicted but could not know for sure whether the human language pathway may have had an evolutionary basis in the auditory system of nonhuman primates. I admit we were astounded to see a similar pathway hiding in plain sight within the auditory system of nonhuman primates."
Remarkable transformation
The study also illuminates the remarkable transformation of the human language pathway. A key human-specific difference was found: the left side of this brain pathway was stronger in humans, and the right side appears to have diverged from the auditory evolutionary prototype to involve non-auditory parts of the brain.
The study relied on brain scans openly shared by the global scientific community, and it also generated original new brain scans that are now globally shared to inspire further discovery. Because the authors predict that the auditory precursor to the human language pathway may be even older, the work also inspires the neurobiological search for its earliest evolutionary origin -- the next brain 'fossil' -- in animals more distantly related to humans.
Professor Timothy Griffiths, consultant neurologist at Newcastle University, UK and joint senior author on the study notes: "This discovery has tremendous potential for understanding which aspects of human auditory cognition and language can be studied with animal models in ways not possible with humans and apes. The study has already inspired new research underway including with neurology patients."
Read more at Science Daily
Apr 20, 2020
Neolithic genomes from modern-day Switzerland indicate parallel ancient societies
Genetic research throughout Europe shows evidence of drastic population changes near the end of the Neolithic period, as shown by the arrival of ancestry related to pastoralists from the Pontic-Caspian steppe. But the timing of this change and the arrival and mixture process of these peoples, particularly in Central Europe, is little understood. In a new study published in Nature Communications, researchers analyze 96 ancient genomes, providing new insights into the ancestry of modern Europeans.
Scientists sequence almost one hundred ancient genomes from Switzerland
With Neolithic settlements found everywhere from lake shore and bog environments to inner alpine valleys and high mountain passes, Switzerland's rich archeological record makes it a prime location for studies of population history in Central Europe. Towards the end of the Neolithic period, the emergence of archaeological finds from Corded Ware Complex cultural groups (CWC) coincides with the arrival of new ancestry components from the Pontic-Caspian steppe, but exactly when these new peoples arrived and how they mixed with indigenous Europeans remains unclear.
To find out, an international team led by researchers from the University of Tübingen, the University of Bern and the Max Planck Institute for the Science of Human History (MPI-SHH) sequenced the genomes of 96 individuals from 13 Neolithic and early Bronze Age sites in Switzerland, southern Germany and the Alsace region of France. They detected the arrival of this new ancestry as early as 2800 BCE, and suggest that genetic dispersal was a complex process, involving the gradual mixture of parallel, highly genetically structured societies. The researchers also identified one of the oldest known lactose-tolerant Europeans, dating to roughly 2100 BCE.
Slow genetic turnover indicates highly structured societies
"Remarkably, we identified several female individuals without any detectable steppe-related ancestry up to 1000 years after this ancestry arrives in the region," says lead author Anja Furtwängler of the University of Tübingen's Institute for Archeological Sciences. Evidence from genetic analysis and stable isotopes suggest a patrilocal society, in which males stayed local to where they were born and females came from distant families that did not carry steppe ancestry.
These results show that CWC was a relatively homogenous population that occupied large parts of Central Europe in the early Bronze Age, but they also show that populations without steppe-related ancestry existed parallel to the CWC cultural groups for hundreds of years.
Read more at Science Daily
Rare South American ground beetles sport unusual, likely multi-purpose antennal cleaners
For 157 years, scientists have wished they could understand the evolutionary relationships of a curious South American ground beetle that was missing a distinctive feature of the huge family of ground beetles (Carabidae). Could this rare species really lack a characteristic trait known in over 40,000 species worldwide, and if so, how? Or was the species assigned to the wrong family from the very beginning?
The species, Nototylus fryi, or Fry's strange-combed beetle, is known so far only from a single, damaged specimen found in 1863 in the Brazilian state of Espírito Santo and now kept in the Natural History Museum of London. So rare and unusual is the beetle -- owing to its apparent lack of "antennal cleaners," the specialised "combing" structures on the forelegs that carabids use to keep their antennae clean -- that it prompted the description of its own genus: Nototylus, now colloquially called the strange-combed beetles.
No mention of the structure was made in the original description of the species, so, at one point, scientists even started to wonder whether the beetle they were looking at was in fact a carabid at all.
Because the area where Fry's strange-combed beetle was found -- once Southern Atlantic Forest -- is today mostly sugar cane fields, cacao plantations, and cattle ranches, scientists have feared that additional specimens of strange-combed beetles might never be collected again and that the group was already extinct. Recently, however, a US team of entomologists has reported the discovery of a second specimen, one that also represents a second species of strange-combed beetle new to science.
Following a careful study of this second, poorly preserved specimen, collected in French Guiana in 2014, the team of Dr Terry Erwin (Smithsonian Institution), Dr David Kavanaugh and Dr David Maddison (Oregon State University) described the species, Nototylus balli, or Ball's strange-combed beetle, in a paper published in the open-access scholarly journal ZooKeys. The entomologists named the species in honour of their academic leader and renowned carabidologist George E. Ball, after presenting it to him in September 2016 around the time of his 90th birthday.
Though also poorly preserved, the new specimen is in relatively better condition and shows that probable antennal grooming organs are indeed present in strange-combed beetles. However, they look nothing like those seen in other genera of ground beetles, and they are located on a different part of the front legs. Rather than stout and barely movable, the setae (hair-like structures) in the grooming organs of strange-combed beetles are slender, flexible and very differently shaped, which led the researchers to suggest that the structure serves a different role in strange-combed beetles.
Judging from the shapes of the setae in the grooming organs, the scientists point out that they are best suited for painting or coating the antennae, rather than scraping or cleaning them. Their hypothesis is that these rare carabids live alongside ants or termites and use the grooming structures to apply specific substances to their antennae, so that the host colony recognises them as a friendly species -- a kind of behaviour already known in some beetles.
Read more at Science Daily
Rising carbon dioxide levels will change marine habitats and fish communities
Rising carbon dioxide in the atmosphere and the consequent changes created through ocean acidification will cause severe ecosystem effects, impacting reef-forming habitats and the associated fish, according to new research.
Using submerged natural CO2 seeps off the Japanese Island of Shikine, an international team of marine biologists showed that even slightly higher CO2 concentrations than those existing today may cause profound changes in marine habitats and the fish that rely on them.
Writing in Science of The Total Environment, researchers from the Universities of Palermo (Italy), Tsukuba (Japan) and Plymouth (UK) showed that under elevated dissolved CO2 conditions, habitats become dominated by a small number of ephemeral algal species.
In such conditions, species such as complex corals and canopy-forming macroalgae mostly disappeared. This shift from complex reefs to habitats dominated by opportunistic low-profile algae led to a 45% decrease of fish diversity, with a loss of coral-associated species and a rearrangement of feeding behaviour.
Lead author Dr Carlo Cattano, from the University of Palermo, said: "Our findings show that the CO2-induced habitat shifts and food web simplification, which we observed along a volcanic gradient in a climatic transition zone, will impact specialist tropical species favouring temperate generalist fish. Our data also suggests that near-future projected ocean acidification levels will oppose the ongoing poleward expansion of corals (and consequently of reef-associated fish) due to global warming."
"Submerged volcanic degassing systems may provide realistic insights into future ocean conditions," added Dr Sylvain Agostini, from Shimoda Marine Research Center. "Studying organism and ecosystem responses off submerged CO2 seeps may help us to understand how the oceans will look in the future if anthropogenic CO2 emissions won't be reduced."
In addition to the new findings, the study also reinforces previous research which has demonstrated the ecological effects of habitat changes due to ongoing ocean acidification.
This has shown that decreased seawater pH may impair calcification and accelerate dissolution for many calcifying habitat-formers, while rising CO2 concentrations may favour non-calcifying autotrophs, enhancing primary production and carbon fixation rates.
As a result, there will be losers and winners under increasingly acidified conditions, and fish species that rely on specific resources during their different life stages could disappear. This would lead to the composition of fish communities changing in the near future with potential severe consequences for marine ecosystem functioning and the goods and services they provide to humans.
Read more at Science Daily
A cheap organic steam generator to purify water
It has been estimated that by 2040 a quarter of the world's children will live in regions where clean, drinkable water is lacking. The desalination of seawater and the purification of wastewater are two possible ways to alleviate this, and researchers at Linköping University have developed a cheap and eco-friendly steam generator to desalinate and purify water using sunlight. The results have been published in the journal Advanced Sustainable Systems.
"The rate of steam production is 4-5 times higher than that of direct water evaporation, which means that we can purify more water," says Associated Professor Simone Fabiano, head of the Organic Nanoelectronics group in the Laboratory of Organic Electronics.
The steam generator consists of an aerogel that contains a cellulose-based structure decorated with the organic conjugated polymer PEDOT:PSS. The polymer has the ability to absorb the energy in sunlight, not least in the infrared part of the spectrum where much of the sun's heat is transported. The aerogel has a porous nanostructure, which means that large quantities of water can be absorbed into its pores.
"A 2 mm layer of this material can absorb 99% of the energy in the sun's spectrum," says Simone Fabiano.
A porous and insulating floating foam is also located between the water and the aerogel, such that the steam generator is kept afloat. The heat from the sun vaporises the water, while salt and other materials remain behind.
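To put the evaporation figures in context, a standard textbook estimate (not a figure from the article) caps an ideal solar evaporator at roughly 1.6 kg of water per square metre per hour under full sunlight, set by the solar flux and the latent heat of vaporization of water:

```python
# Generic textbook limit for solar-driven evaporation; assumed values,
# not measurements from the Linköping study.
SOLAR_FLUX_W_M2 = 1000.0      # assumed "one sun" peak irradiance
LATENT_HEAT_J_KG = 2.26e6     # latent heat of vaporization of water

ideal_rate_kg_m2_h = SOLAR_FLUX_W_M2 / LATENT_HEAT_J_KG * 3600
print(round(ideal_rate_kg_m2_h, 2))   # ~1.59 kg of water per m^2 per hour
```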
"The aerogel is durable and can be cleaned in, for example, salt water such that it can be used again immediately. This can be repeated many times. The water that passes through the system by evaporation becomes very high-quality drinking water," Tero-Petri Ruoko assures us. He is postdoc in the Laboratory of Organic Electronics and one of the authors of the article.
"What's particularly nice about this system is that all the materials are eco-friendly -- we use nanocellulose and a polymer that has a very low impact on the environmental and people. We also use very small amounts material: the aerogel is made up of 90% air. We hope and believe that our results can help the millions of people who don't have access to clean water," says Simone Fabiano.
The aerogel was developed by Shaobo Han within the framework of his doctoral studies in the Laboratory of Organic Electronics, under Professor Xavier Crispin's supervision. The result was presented in the journal Advanced Science in 2019. After taking his doctoral degree, Shaobo Han has returned to China to continue research in the field.
From Science Daily
"The rate of steam production is 4-5 times higher than that of direct water evaporation, which means that we can purify more water," says Associated Professor Simone Fabiano, head of the Organic Nanoelectronics group in the Laboratory of Organic Electronics.
The steam generator consists of an aerogel that contains a cellulose-based structure decorated with the organic conjugated polymer PEDOT:PSS. The polymer has the ability to absorb the energy in sunlight, not least in the infrared part of the spectrum where much of the sun's heat is transported. The aerogel has a porous nanostructure, which means that large quantities of water can be absorbed into its pores.
"A 2 mm layer of this material can absorb 99% of the energy in the sun's spectrum," says Simone Fabiano.
A porous and insulating floating foam is also located between the water and the aerogel, such that the steam generator is kept afloat. The heat from the sun vaporises the water, while salt and other materials remain behind.
"The aerogel is durable and can be cleaned in, for example, salt water such that it can be used again immediately. This can be repeated many times. The water that passes through the system by evaporation becomes very high-quality drinking water," Tero-Petri Ruoko assures us. He is postdoc in the Laboratory of Organic Electronics and one of the authors of the article.
"What's particularly nice about this system is that all the materials are eco-friendly -- we use nanocellulose and a polymer that has a very low impact on the environmental and people. We also use very small amounts material: the aerogel is made up of 90% air. We hope and believe that our results can help the millions of people who don't have access to clean water," says Simone Fabiano.
The aerogel was developed by Shaobo Han within the framework of his doctoral studies in the Laboratory of Organic Electronics, under Professor Xavier Crispin´s supervision. The result was presented in the journal Advanced Science in 2019, and is described at the link below. After taking his doctoral degree, Shaobo Han has returned to China to continue research in the field.
From Science Daily
Apr 19, 2020
Strongest evidence yet that neutrinos explain how the universe exists
The current laws of physics do not explain why matter persists over antimatter -- why the universe is made of 'stuff'. Scientists believe equal amounts of matter and antimatter were created at the beginning of the universe, but this would mean they should have wiped each other out, annihilating the universe as it began.
Instead, physicists suggest there must be differences in the way matter and antimatter behave that explain why matter persisted and now dominates the universe. Each particle of matter has an antimatter equivalent, and neutrinos are no different, with an antimatter equivalent called antineutrinos.
They should be exact opposites in their properties and behaviour, which is what makes them annihilate each other on contact.
Now, an international team of researchers that make up the T2K Collaboration, including Imperial College London scientists, have found the strongest evidence yet that neutrinos and antineutrinos behave differently, and therefore may not wipe each other out.
Dr Patrick Dunne, from the Department of Physics at Imperial, said: "This result brings us closer than ever before to answering the fundamental question of why the matter in our universe exists. If confirmed -- at the moment we're over 95 per cent sure -- it will have profound implications for physics and should point the way to a better understanding of how our universe evolved."
Previously, scientists have found some differences in behaviour between matter and antimatter versions of subatomic particles called quarks, but the differences observed so far do not seem to be large enough to account for the dominance of matter in the universe.
However, T2K's new result indicates that the differences in the behaviour of neutrinos and antineutrinos appear to be quite large. Neutrinos are fundamental particles, but they interact with normal matter so weakly that around 50 trillion neutrinos from the Sun pass through your body every second.
Neutrinos and antineutrinos can come in three 'flavours', known as muon, electron and tau. As they travel, they can 'oscillate' -- changing into a different flavour. The fact that muon neutrinos oscillate into electron neutrinos was first discovered by the T2K experiment in 2013.
To get the new result, the team fired beams of muon neutrinos and antineutrinos from the J-PARC facility at Tokai, Japan, and detected how many electron neutrinos and antineutrinos arrived at the Super-Kamiokande detector 295km away.
They looked for differences in how the neutrinos or antineutrinos changed flavour, finding neutrinos appear to be much more likely to change than antineutrinos.
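The T2K analysis itself is a full three-flavour fit, but as a rough illustration of why the 295 km baseline matters, the standard simplified two-flavour oscillation probability scales as sin^2(1.27 * delta_m^2 * L / E), with L in km, E in GeV and delta_m^2 in eV^2. With typical assumed values for the atmospheric mass splitting and the T2K beam energy, that term sits close to its maximum:

```python
import math

def osc_term(delta_m2_eV2, L_km, E_GeV):
    """Two-flavour oscillation factor sin^2(1.27 * dm^2 * L / E).
    Simplified illustration only; T2K's real analysis is three-flavour."""
    return math.sin(1.27 * delta_m2_eV2 * L_km / E_GeV) ** 2

# Assumed, typical values: atmospheric mass splitting, T2K baseline and energy
print(osc_term(2.5e-3, 295.0, 0.6))   # ~1.0, i.e. near the oscillation maximum
```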
The available data also strongly discount the possibility that neutrinos and antineutrinos are just as likely as each other to change flavour. Dr Dunne said: "What our result shows is that we're more than 95 per cent sure that matter neutrinos and antineutrinos behave differently. This is big news in itself; however we do already know of other particles that have matter-antimatter differences that are too small to explain our matter-dominated universe.
"Therefore, measuring the size of the difference is what matters for determining whether neutrinos can answer this fundamental question. Our result today finds that unlike for other particles, the result in neutrinos is compatible with many of the theories explaining the origin of the universe's matter dominance."
While the result is the strongest evidence yet that neutrinos and antineutrinos behave differently, the T2K Collaboration is working to reduce any uncertainties and gather more data by upgrading the detectors and beamlines, including the new Hyper-Kamiokande detector to replace the Super-Kamiokande. A new experiment, called DUNE, is also under construction in the US. Imperial is involved in both.
Imperial researchers have been involved in the T2K Collaboration since 2004, starting with conceptual designs on whiteboards and research and development on novel particle detector components that were key to building this experiment, which was finally completed and turned on in 2010.
For the latest result, the team contributed to the statistical analysis of the results and ensuring the signal they observe is real, as well as including the effects of how neutrinos interact with matter, which is one of the largest uncertainties that go into the analysis.
Read more at Science Daily
Whole genome sequencing reveals genetic structural secrets of schizophrenia
Published in Nature Communications, the study co-led by senior author Jin Szatkiewicz, PhD, associate professor in the UNC Department of Genetics, suggests that rare structural genetic variants could play a role in schizophrenia.
"Our results suggest that ultra-rare structural variants that affect the boundaries of a specific genome structure increase risk for schizophrenia," Szatkiewicz said. "Alterations in these boundaries may lead to dysregulation of gene expression, and we think future mechanistic studies could determine the precise functional effects these variants have on biology."
Previous studies on the genetics of schizophrenia have primarily used common genetic variations known as SNPs (alterations in common genetic sequences, each affecting a single nucleotide), rare variations in the parts of DNA that provide instructions for making proteins, or very large structural variations (alterations affecting a few hundred thousand nucleotides). These studies give snapshots of the genome, leaving a large portion of it a mystery as it potentially relates to schizophrenia.
In the Nature Communications study, Szatkiewicz and colleagues examined the entire genome, using a method called whole genome sequencing (WGS). The primary reason WGS hasn't been more widely used is that it is very expensive. For this study, an international collaboration pooled funding from National Institute of Mental Health grants and matching funds from Sweden's SciLife Labs to conduct deep whole genome sequencing on 1,165 people with schizophrenia and 1,000 controls -- the largest known WGS study of schizophrenia ever.
As a result, new discoveries were made: the researchers found previously undetectable mutations in DNA that scientists had never before seen in schizophrenia.
In particular, this study highlighted the role that a three-dimensional genome structure known as topologically associated domains (TADs) could play in the development of schizophrenia. TADs are distinct regions of the genome with strict boundaries between them that keep the domains from interacting with genetic material in neighboring TADs. Shifting or breaking these boundaries allows interactions between genes and regulatory elements that normally would not interact.
When these interactions occur, gene expression may be changed in undesirable ways that could result in congenital defects, formation of cancers, and developmental disorders. This study found that extremely rare structural variants affecting TAD boundaries in the brain occur significantly more often in people with schizophrenia than in those without it. Structural variants are large mutations that may involve missing or duplicated genetic sequences, or sequences that are not in the typical genome. This finding suggests that misplaced or missing TAD boundaries may also contribute to the development of schizophrenia. This study was the first to discover the connection between anomalies in TADs and the development of schizophrenia.
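At its core, asking whether a structural variant disrupts a TAD boundary is an interval-overlap question: does the variant's genomic span intersect a boundary's span on the same chromosome? The sketch below is a generic illustration with made-up coordinates, not the study's actual method or data:

```python
def overlaps(a_start, a_end, b_start, b_end):
    """True if two genomic intervals on the same chromosome overlap."""
    return a_start < b_end and b_start < a_end

# Hypothetical coordinates for illustration only
tad_boundaries = [(1_200_000, 1_240_000), (3_480_000, 3_520_000)]  # (start, end)
structural_variant = (1_230_000, 1_300_000)                        # e.g. a deletion

hits = [b for b in tad_boundaries if overlaps(*structural_variant, *b)]
print(len(hits))   # 1: this variant would disrupt the first boundary
```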
This work has highlighted TADs-affecting structural variants as prime candidates for future mechanistic studies of the biology of schizophrenia.
"A possible future investigation would be to work with patient-derived cells with these TADs-affecting mutations and figure out what exactly happened at the molecular level," said Szatkiewicz, an adjunct assistant professor of psychiatry at UNC. "In the future, we could use this information about the TAD effects to help develop drugs or precision medicine treatments that could repair disrupted TADs or affected gene expressions which may improve patient outcomes."
This study will be combined with other WGS studies in order to increase the sample size to further confirm these results. This research will also help the scientific community build on the unfolding genetic mysteries of schizophrenia.
Read more at Science Daily