Researchers at Rensselaer Polytechnic Institute have developed a way to 3D print living skin, complete with blood vessels. The advancement, published online today in Tissue Engineering Part A, is a significant step toward creating grafts that are more like the skin our bodies produce naturally.
"Right now, whatever is available as a clinical product is more like a fancy Band-Aid," said Pankaj Karande, an associate professor of chemical and biological engineering and member of the Center for Biotechnology and Interdisciplinary Studies (CBIS), who led this research at Rensselaer. "It provides some accelerated wound healing, but eventually it just falls off; it never really integrates with the host cells."
A significant barrier to that integration has been the absence of a functioning vascular system in the skin grafts.
Karande has been working on this challenge for several years, previously publishing one of the first papers showing that researchers could take two types of living human cells, make them into "bio-inks," and print them into a skin-like structure. Since then, he and his team have been working with researchers from Yale School of Medicine to incorporate vasculature.
In this paper, the researchers show that if they combine key elements -- including human endothelial cells, which line the inside of blood vessels, and human pericyte cells, which wrap around the endothelial cells -- with animal collagen and other structural cells typically found in a skin graft, the cells start communicating and forming a biologically relevant vascular structure within the span of a few weeks.
"As engineers working to recreate biology, we've always appreciated and been aware of the fact that biology is far more complex than the simple systems we make in the lab," Karande said. "We were pleasantly surprised to find that, once we start approaching that complexity, biology takes over and starts getting closer and closer to what exists in nature."
Once the Yale team grafted it onto a special type of mouse, the vessels from the skin printed by the Rensselaer team began to communicate and connect with the mouse's own vessels.
"That's extremely important, because we know there is actually a transfer of blood and nutrients to the graft which is keeping the graft alive," Karande said.
To make this usable at a clinical level, researchers need to be able to edit the donor cells with a tool such as CRISPR, so that the vessels can integrate and be accepted by the patient's body.
"We are still not at that step, but we are one step closer," Karande said.
"This significant development highlights the vast potential of 3D bioprinting in precision medicine, where solutions can be tailored to specific situations and eventually to individuals," said Deepak Vashishth, the director CBIS. "This is a perfect example of how engineers at Rensselaer are solving challenges related to human health."
Karande said more work will need to be done to address the challenges associated with burn patients, which include the loss of nerve and vascular endings. But the grafts his team has created bring researchers closer to helping people with more discrete issues, like diabetic or pressure ulcers.
Read more at Science Daily
Nov 2, 2019
Engineers develop new way to detect liars' intent
Dartmouth engineering researchers have developed a new approach for detecting a speaker's intent to mislead. The approach's framework, which could be developed to extract opinion from "fake news," among other uses, was recently published as part of a paper in Journal of Experimental & Theoretical Artificial Intelligence.
Although previous studies have examined deception, this is possibly the first study to look at a speaker's intent. The researchers posit that while a true story can be manipulated into various deceiving forms, the intent, rather than the content of the communication, determines whether the communication is deceptive or not. For example, the speaker could be misinformed or make a wrong assumption, meaning the speaker made an unintentional error but did not attempt to deceive.
"Deceptive intent to mislead listeners on purpose poses a much larger threat than unintentional mistakes," said Eugene Santos Jr., co-author and professor of engineering at Thayer School of Engineering at Dartmouth. "To the best of our knowledge, our algorithm is the only method that detects deception and at the same time discriminates malicious acts from benign acts."
The researchers developed a unique approach and resulting algorithm that can tell deception apart from all benign communications by retrieving the universal features of deceptive reasoning. However, the framework is currently limited by the amount of data needed to measure a speaker's deviation from their past arguments; the study used data from a 2009 survey of 100 participants on their opinions on controversial topics, as well as a 2011 dataset of 800 real and 400 fictitious reviews of the same 20 hotels.
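The published framework is built on far richer models than can be shown here, but a toy sketch helps convey the core idea of scoring how far a new statement strays from a speaker's established positions. In the hypothetical Python example below, a speaker's past arguments are tallied into invented opinion categories and a new statement is scored by its divergence from that history; the categories, counts, and scoring are assumptions made for illustration, not the Dartmouth algorithm.

```python
import math

# Hypothetical illustration: represent a speaker's past arguments on a topic
# as a distribution over coarse opinion categories, then score how far a new
# statement deviates from that history. (Not the authors' actual algorithm.)

def normalize(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over a shared set of opinion categories."""
    cats = set(p) | set(q)
    return sum(
        p.get(c, eps) * math.log(p.get(c, eps) / q.get(c, eps))
        for c in cats
    )

# Past arguments by one speaker, tallied into invented opinion categories.
past = normalize({"supports_policy": 8, "opposes_policy": 1, "neutral": 1})

# A new statement, encoded the same way (here: strongly opposing).
new_statement = normalize({"supports_policy": 1, "opposes_policy": 9, "neutral": 1})

deviation = kl_divergence(new_statement, past)
print(f"deviation from past arguments: {deviation:.2f}")

# A large deviation alone does not prove intent to mislead -- the speaker may
# simply have changed their mind -- which is why the real framework also has
# to model what the speaker expects the listener to conclude.
```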
Santos believes the framework could be further developed to help readers distinguish and closely examine the intent of "fake news," allowing the reader to determine if a reasonable, logical argument is used or if opinion plays a strong role. In further studies, Santos hopes to examine the ripple effect of misinformation, including its impacts.
In the study, the researchers use the popular 2001 film Ocean's Eleven to illustrate how the framework can be used to examine a deceiver's arguments, which in reality may go against his true beliefs, resulting in a falsified final expectation. For example, in the movie, a group of thieves break into a bank vault while simultaneously revealing to the owner that he is being robbed in order to negotiate. The thieves supply the owner with false information, namely that they will only take half the money if the owner doesn't call police. However, the thieves expect the owner to call police, which he does, so the thieves then disguise themselves as police to steal the entirety of the vault contents.
Because Ocean's Eleven is a scripted film, viewers can be sure of the thieves' intent -- to steal all of the money -- and how it conflicts with what they tell the owner -- that they will only take half. This illustrates how the thieves were able to deceive the owner and anticipate his actions due to the fact that the thieves and owner had different information and therefore perceived the scene differently.
"People expect things to work in a certain way," said Santos, "just like the thieves knew that the owner would call police when he found out he was being robbed. So, in this scenario, the thieves used that knowledge to convince the owner to come to a certain conclusion and follow the standard path of expectations. They forced their deception intent so the owner would reach the conclusions the thieves desired."
In popular culture, verbal and non-verbal behaviors such as facial expressions are often used to determine if someone is lying, but the co-authors note that those cues are not always reliable.
Read more at Science Daily
Nov 1, 2019
Scientists may have discovered whole new class of black holes
Black holes are an important part of how astrophysicists make sense of the universe -- so important that scientists have been trying to build a census of all the black holes in the Milky Way galaxy.
But new research shows that their search might have been missing an entire class of black holes that they didn't know existed.
In a study published today in the journal Science, astronomers offer a new way to search for black holes, and show that it is possible there is a class of black holes smaller than the smallest known black holes in the universe.
"We're showing this hint that there is another population out there that we have yet to really probe in the search for black holes," said Todd Thompson, a professor of astronomy at The Ohio State University and lead author of the study.
"People are trying to understand supernova explosions, how supermassive black stars explode, how the elements were formed in supermassive stars. So if we could reveal a new population of black holes, it would tell us more about which stars explode, which don't, which form black holes, which form neutron stars. It opens up a new area of study."
Imagine a census of a city that only counted people 5'9" and taller -- and imagine that the census takers didn't even know that people shorter than 5'9" existed. Data from that census would be incomplete, providing an inaccurate picture of the population. That is essentially what has been happening in the search for black holes, Thompson said.
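The census analogy can be made concrete with a few lines of code. The short simulation below, with made-up numbers used purely for illustration, draws heights from a normal distribution, keeps only people 5'9" and taller, and shows how badly the truncated "census" misrepresents the population.

```python
import random
import statistics

# Illustrative simulation of the "census" analogy: if you only ever record
# people 5'9" (69 in) and taller, the counts and averages you infer for the
# whole population are badly skewed. Numbers here are made up.
random.seed(42)

population = [random.gauss(66.0, 4.0) for _ in range(100_000)]  # heights, inches
observed = [h for h in population if h >= 69.0]                  # truncated "census"

print(f"true mean height:     {statistics.mean(population):.1f} in")
print(f"observed mean height: {statistics.mean(observed):.1f} in")
print(f"fraction of population ever counted: {len(observed) / len(population):.0%}")
```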
Astronomers have long been searching for black holes, which have gravitational pulls so fierce that nothing -- not matter, not radiation -- can escape. Black holes form when some stars die, shrink into themselves, and explode. Astronomers have also been looking for neutron stars -- small, dense stars that form when some stars die and collapse.
Both could hold interesting information about the elements on Earth and about how stars live and die. But in order to uncover that information, astronomers first have to figure out where the black holes are. And to figure out where the black holes are, they need to know what they are looking for.
One clue: Black holes often exist in something called a binary system. This simply means that two stars are close enough to be locked together by gravity in a mutual orbit around one another. When one of those stars dies, the other can remain, still orbiting the space where the dead star -- now a black hole or neutron star -- once lived.
For years, the black holes scientists knew about were all between approximately five and 15 times the mass of the sun. The known neutron stars are generally no bigger than about 2.1 times the mass of the sun -- if they were above 2.5 times the sun's mass, they would collapse to a black hole.
But in the summer of 2017, an observatory called LIGO -- the Laser Interferometer Gravitational-Wave Observatory -- saw two black holes merging in a galaxy about 1.8 billion light years away. One of those black holes was about 31 times the mass of the sun; the other about 25 times the mass of the sun.
"Immediately, everyone was like 'wow,' because it was such a spectacular thing," Thompson said. "Not only because it proved that LIGO worked, but because the masses were huge. Black holes that size are a big deal -- we hadn't seen them before."
Thompson and other astrophysicists had long suspected that black holes might come in sizes outside the known range, and LIGO's discovery proved that black holes could be larger. But there remained a window of size between the biggest neutron stars and the smallest black holes.
Thompson decided to see if he could solve that mystery.
He and other scientists began combing through data from APOGEE, the Apache Point Observatory Galactic Evolution Experiment, which collected light spectra from around 100,000 stars across the Milky Way. The spectra, Thompson realized, could show whether a star might be orbiting around another object: Changes in spectra -- a shift toward bluer wavelengths, for example, followed by a shift to redder wavelengths -- could show that a star was orbiting an unseen companion.
Thompson began combing through the data, looking for stars that showed that change, indicating that they might be orbiting a black hole.
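The signal being hunted is essentially a Doppler shift: a spectral line swings toward bluer, then redder, wavelengths as the star moves toward and then away from us around an unseen companion. The sketch below shows only the underlying wavelength-to-velocity conversion, with invented numbers; it is not the APOGEE pipeline, and since APOGEE observes in the near-infrared, the familiar optical H-alpha line is used here purely as an example.

```python
# Minimal sketch of the Doppler signature being searched for: a spectral line
# that swings to bluer, then redder, wavelengths as a star orbits an unseen
# companion. Wavelengths below are invented for illustration; this is not the
# APOGEE pipeline (which works in the near-infrared, not at H-alpha).
C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(observed_nm, rest_nm):
    """Non-relativistic radial velocity from a line's wavelength shift (km/s).
    Positive = moving away (redshifted), negative = approaching (blueshifted)."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

rest = 656.281  # H-alpha rest wavelength, nm (used only as a familiar example)
for epoch, observed in [("epoch 1", 656.186), ("epoch 2", 656.281), ("epoch 3", 656.379)]:
    print(f"{epoch}: {radial_velocity(observed, rest):+7.1f} km/s")

# A star whose measured velocity cycles between +/- tens of km/s with a fixed
# period is a candidate for orbiting a massive, unseen companion.
```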
Then, he narrowed the APOGEE data to 200 stars that might be most interesting. He gave the data to a graduate research associate at Ohio State, Tharindu Jayasinghe, who compiled thousands of images of each potential binary system from ASAS-SN, the All-Sky Automated Survey for Supernovae. (ASAS-SN has found some 1,000 supernovae, and is run out of Ohio State.)
Their data crunching turned up a giant red star that appeared to be orbiting something that, based on their calculations, was likely much smaller than the known black holes in the Milky Way yet considerably bigger than most known neutron stars.
After more calculations and additional data from the Tillinghast Reflector Echelle Spectrograph and the Gaia satellite, they realized they had found a low-mass black hole, likely about 3.3 times the mass of the sun.
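The textbook tool for that last step is the binary mass function, which converts an orbital period and a radial-velocity amplitude into a constraint on the unseen companion's mass. The sketch below is a generic illustration of that calculation: the period, velocity amplitude, and giant-star mass are placeholder values rather than the figures reported in the Science paper, and an edge-on orbit is assumed, which yields a minimum companion mass.

```python
import math

# Illustrative use of the binary mass function to weigh an unseen companion:
#   f(M) = P * K**3 / (2*pi*G) = (M2 * sin i)**3 / (M1 + M2)**2
# P, K, and M1 below are placeholder values, NOT the ones in the Science paper,
# and sin i = 1 (edge-on orbit) is assumed, so the result is a minimum mass.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

P = 83.0 * 86400.0     # orbital period: 83 days, in seconds (placeholder)
K = 44.0e3             # radial-velocity semi-amplitude: 44 km/s (placeholder)
M1 = 3.0 * M_SUN       # mass of the visible giant star (placeholder)

f = P * K**3 / (2.0 * math.pi * G)   # mass function, in kg

def mass_function_residual(m2):
    return m2**3 / (M1 + m2)**2 - f

# Solve f = M2^3 / (M1 + M2)^2 for M2 by simple bisection (residual is increasing in M2).
lo, hi = 0.1 * M_SUN, 50.0 * M_SUN
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mass_function_residual(mid) > 0:
        hi = mid
    else:
        lo = mid

print(f"mass function: {f / M_SUN:.2f} solar masses")
print(f"minimum companion mass: {0.5 * (lo + hi) / M_SUN:.1f} solar masses")
```

With the team's actual measurements and constraints on the orbit's inclination, an analysis along these lines is what places the companion near 3.3 times the mass of the sun.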
Read more at Science Daily
Avocados may help manage obesity, prevent diabetes
Your guacamole may hold the key to managing obesity and helping delay or prevent diabetes, according to a new study by a University of Guelph research team.
For the first time, researchers led by Prof. Paul Spagnuolo have shown how a compound found only in avocados can inhibit cellular processes that normally lead to diabetes. In safety testing in humans, the team also found that the substance was absorbed into the blood with no adverse effects in the kidney, liver or muscle.
The study was recently published in the journal Molecular Nutrition and Food Research.
About one in four Canadians is obese, and obesity is a chronic condition that is a leading cause of Type 2 diabetes. Insulin resistance in diabetic patients means their bodies are unable to properly remove glucose from the blood.
Those complications can arise when mitochondria, or the energy powerhouses in the body's cells, are unable to burn fatty acids completely.
Normally, fatty acid oxidation allows the body to burn fats. Obesity or diabetes hinders that process, leading to incomplete oxidation.
The U of G researchers discovered that avocatin B (AvoB), a fat molecule found only in avocados, counters incomplete oxidation in skeletal muscle and the pancreas to reduce insulin resistance.
In their study, the team fed mice high-fat diets for eight weeks to induce obesity and insulin resistance. For the next five weeks, they added AvoB to the high-fat diets of half of the mice.
The treated mice weighed significantly less than those in the control group, showing slower weight gain. More important, said Spagnuolo, the treated mice showed greater insulin sensitivity, meaning that their bodies were able to absorb and burn blood glucose and improve their response to insulin.
In a human clinical study, AvoB given as a dietary supplement to participants eating a typical western diet was absorbed safely into their blood without affecting the kidney, liver or skeletal muscle. The team also saw reductions in weight in human subjects, although Spagnuolo said the result was not statistically significant.
Having demonstrated its safety in humans, they plan to conduct clinical trials to test AvoB's efficacy in treating metabolic ailments in people.
Spagnuolo said the safety trial helped the team to determine just how much AvoB to include in the supplement formulation.
Having received Health Canada approval for the compound as a human supplement, he will begin selling it in powder and pill forms as soon as 2020 through SP Nutraceuticals Inc., a Burlington, Ont.-based natural health products company.
He said eating avocados alone would likely be ineffective, as the amount of natural avocatin B varies widely in the fruit and we still do not fully understand exactly how it is digested and absorbed when we consume a whole avocado.
Although avocados have been touted as a weight-loss food, Spagnuolo said more study is needed. He said a healthy diet and exercise are recommended to prevent metabolic disorders leading to obesity or diabetes.
Read more at Science Daily
Important gene variants found in certain African populations
In the nearly 20 years since the Human Genome Project was completed, experts in genetic variants increasingly have raised concerns about the overemphasis on studying people of European descent when performing large population studies. A study appearing October 31 in the journal Cell aims to address some of this disparity by focusing on populations living in rural Uganda, thus revealing several new genetic variants related to human health.
"This study highlights the high level of diversity in African populations that remains undiscovered despite large numbers of gene sequences that have been generated from Europeans," says co-senior author Manjinder Sandhu, who studies genomic diversity at the University of Cambridge in the UK. "We found that more than a quarter of the genetic variation we observed in the Ugandan population had not been discovered."
The participants in the study came from 25 villages in a rural part of southwestern Uganda. Using blood samples, the investigators generated genotypes from about 5,000 individuals and conducted whole-genome sequencing on about 2,000 individuals. The researchers collected information through electronic questionnaires; carried out physical measurements such as blood pressure, height, and weight; and tested the blood samples for medically important markers such as cholesterol and glucose.
The investigators made several findings related to genetic variants and health. "We found many new associations with blood traits, liver function tests, and glucose-related traits," Sandhu says. "Most of these relate to genetic variants that are either unique to Africans or rare in non-Africans. They may not have been readily discovered even in very large studies of non-African populations."
Specifically, they found that height is less genetically determined among rural Ugandans relative to what's been seen in European studies. In contrast, LDL cholesterol levels appear to be more genetically determined relative to Europeans.
"We think this might relate to differences in the impact of diet and nutrition relative to genetic influences between African and European populations," says co-first author Deepti Gurdasani, a career development fellow at Queen Mary's University of London. "For example, the genetic influences on height might be more limited by malnutrition in early childhood in these populations. On the other hand, so-called Western dietary patterns possibly have a lower influence on cholesterol levels, making these more genetically determined."
The researchers also found an association between a genetic variant that causes alpha-thalassemia among Africans and levels of glycated hemoglobin. This genetic variant, found in 22% of Africans, protects against severe malaria. It is rare in populations where malaria isn't endemic. "Because glycated hemoglobin is commonly used to diagnose diabetes, this finding suggests that it needs careful evaluation as a test for diabetes in relevant populations," says co-senior author Ayesha Motala of the University of KwaZulu-Natal in South Africa.
The study also revealed important findings about human history and migration. "Uganda is a melting pot of different cultures and languages, and we wanted to understand the genetic structure and history of populations within the country," says Pontiano Kaleebu, the Director of the Uganda Virus Research Institute and Director of the MRC/UVRI & London School of Hygiene and Tropical Medicine Uganda Research Unit, who co-led the project. "These studies highlight the extensive movement and population expansions that have occurred within and into Africa over the past few thousand years."
Analysis revealed that the genomes of Ugandans are a mosaic of many ancestries, likely reflecting the extensive migration from surrounding regions spanning hundreds to thousands of years. It also showed that significant Eurasian ancestry has entered the region at multiple time points, ranging from a few hundred years ago to about 4,000 years ago.
Although the researchers identified new genetic variants associated with disease, they say much more research is needed to understand how these genetic variants affect disease traits. This will require not just looking at genomes but also at functional effects of genomes on gene expression and protein levels.
In the future, they also plan to look at individuals from other parts of Africa, especially indigenous hunter-gatherer populations such as the Khoe-San populations in Namibia and South Africa and the rainforest hunter-gatherer populations in central Africa.
Read more at Science Daily
"This study highlights the high level of diversity in African populations that remains undiscovered despite large numbers of gene sequences that have been generated from Europeans," says co-senior author Manjinder Sandhu, who studies genomic diversity at the University of Cambridge in the UK. "We found that more than a quarter of the genetic variation we observed in the Ugandan population had not been discovered."
The participants in the study came from 25 villages in a rural part of southwestern Uganda. Using blood samples, the investigators generated genotypes from about 5,000 individuals and conducted whole-genome sequencing on about 2,000 individuals. The researchers collected information through electronic questionnaires; carried out physical measurements such as blood pressure, height, and weight; and tested the blood samples for medically important markers such as cholesterol and glucose.
The investigators made several findings related to genetic variants and health. "We found many new associations with blood traits, liver function tests, and glucose-related traits," Sandhu says. "Most of these relate to genetic variants that are either unique to Africans or rare in non-Africans. They may not have been readily discovered even in very large studies of non-African populations."
Specifically, they found that height is less genetically determined among rural Ugandans relative to what's been seen in European studies. In contrast, LDL cholesterol levels appear to be more genetically determined relative to Europeans.
"We think this might relate to differences in the impact of diet and nutrition relative to genetic influences between African and European populations," says co-first author Deepti Gurdasani, a career development fellow at Queen Mary's University of London. "For example, the genetic influences on height might be more limited by malnutrition in early childhood in these populations. On the other hand, so-called Western dietary patterns possibly have a lower influence on cholesterol levels, making these more genetically determined."
The researchers also found an association between a genetic variant that causes alpha-thalassemia among Africans and levels of glycated hemoglobin. This genetic variant, found in 22% of Africans, protects against severe malaria. It is rare in populations where malaria isn't endemic. "Because glycated hemoglobin is commonly used to diagnose diabetes, this finding suggests that it needs careful evaluation as a test for diabetes in relevant populations," says co-senior author Ayesha Motala, of KwaZulu Natal University in South Africa.
The study also revealed important findings about human history and migration. "Uganda is a melting pot of different cultures and languages, and we wanted to understand the genetic structure and history of populations within the country," says Pontiano Kaleebu, the Director of Uganda Virus Research Institute and Director of the MRC/UVRI & London School of Hygiene and Tropical Medicine Uganda Research Unit, who co-led the project. "These studies highlight the extensive movement and population expansions that have occurred within and into Africa over the past few thousand years."
Analysis revealed that the genomes of Ugandans are a mosaic of many ancestries, likely reflecting the extensive migration from surrounding regions spanning hundreds to thousands of years. It also showed that significant Eurasian ancestry has entered the region at multiple time points, ranging from a few hundred years ago to about 4,000 years ago.
Although the researchers identified new genetic variants associated with disease, they say much more research is needed to understand how these genetic variants affect disease traits. This will require not just looking at genomes but also at functional effects of genomes on gene expression and protein levels.
In the future, they also plan to look at individuals from other parts of Africa, especially indigenous hunter-gatherer populations such as the Khoe-San populations in Namibia and South Africa and the rainforest hunter-gatherer populations in central Africa.
Read more at Science Daily
A new spin on life's origin?
A research team at The University of Tokyo has reproducibly synthesized staircase-like supramolecules of a single handedness, or chirality, using standard laboratory equipment. By gradually removing the solvent from a rotating solution containing non-chiral precursors, they were able to produce helixes that twist preferentially in a particular direction. This research may lead to new and cheaper drug production methods, as well as finally addressing one of the lingering quandaries about how life began.
One of the most striking features of the molecules most important to life -- including DNA, proteins, and sugars -- is that they have a "handedness," referred to as chirality. That is, living organisms rely on only one of the two possible mirror-image forms of each molecule, while the other, non-superimposable mirror image goes unused. This is a little like owning a dog that will only fetch your left-handed gloves, while completely ignoring the right-handed ones. It becomes even more puzzling when you consider that the two forms of a chiral pair behave identically in ordinary chemistry. This makes it extremely difficult to produce just one kind of chiral molecule when starting with nonchiral precursors.
How and why early life chose one type of handedness over the other is a major question in biology, and is sometimes called "the question of homochirality." One hypothesis is that some early imbalance broke the symmetry between left- and right-handed molecules, and this change was "locked in" over evolutionary time. Now, researchers at The University of Tokyo have demonstrated that, under the right conditions, macroscopic rotation can lead to the formation of supramolecules of a particular chirality.
This was accomplished using a rotary evaporator, a standard piece of equipment in chemistry labs used for concentrating solutions by gently removing the solvent. "It was previously believed that macroscopic rotation could not cause nanoscale molecular chirality, because of the difference in scale, but we have shown that the chirality of the molecules can indeed become fixed in the direction of rotation," says first author Mizuki Kuroha.
According to her theory, some ancient biomolecules caught in a primordial vortex are responsible for the choice of handedness that we are left with today.
"Not only do these results provide insight to the origin of the homochirality of life, they also represent a pioneering look in the combination of nanoscale molecular chemistry and macroscopic fluid dynamics," says senior author Kazuyuki Ishii. This research may also enable new synthesis pathways for chiral drugs that do not require chiral molecules as inputs.
From Science Daily
Oct 31, 2019
Two million-year-old ice provides snapshot of Earth's greenhouse gas history
Two million-year-old ice from Antarctica recently uncovered by a team of researchers provides a clearer picture of the connections between greenhouse gases and climate in ancient times and will help scientists understand future climate change.
In a paper published today in Nature, a group of scientists used air trapped in the bubbles in ice as old as 2 million years to measure levels of the greenhouse gases carbon dioxide and methane. The group was led by John Higgins and Yuzhen Yan of Princeton University and Andrei Kurbatov of the University of Maine, and included Ed Brook at Oregon State University and Jeff Severinghaus at the University of California, San Diego.
This is the first time scientists have been able to study an ice core that old. Previously, the oldest complete ice core provided data going back 800,000 years. Past studies using that core and others have shown that atmospheric carbon dioxide levels are directly linked to Antarctic and global temperature during the past 800,000 years. Prior to that, the connection between climate and carbon dioxide levels was not as well understood.
The paper published today in Nature begins to change that.
During the past one million years, the cycle of ice ages followed by warm periods occurred every 100,000 years. But between 2.8 million years ago and 1.2 million years ago, those cycles were shorter, about 40,000 years, and ice ages were less extreme.
The team that included Brook wanted to find out how carbon dioxide levels varied during that older time period, which until now was known only indirectly from the chemistry of sediments in the ocean and on land.
They found that the highest levels of carbon dioxide matched the levels in warm periods of more recent times. The lowest levels, however, did not reach the very low concentrations found in the ice ages of the last 800,000 years.
"One of the important results of this study is to show that carbon dioxide is linked to temperature in this earlier time period," Brook said.
This conclusion is based on studies of the chemistry of the ice, which provide an indication of temperature change in Antarctica at the same time as the carbon dioxide variations.
"That's an important baseline for understanding climate science and calibrating models that predict future change," Brook said.
The ice core with the 2 million-year-old ice comes from an area known as Allan Hills, which is about 130 miles from the U.S. Antarctic research station known as McMurdo Station. Ancient meteorites had been found on the surface in this area, leading scientists to believe there could be ancient ice in the ice sheet.
The core with the 2 million-year-old ice was drilled to a depth of 200 meters during the 2015-16 field season. It takes one to two weeks to drill and recover a core like that, and several cores were collected in the region.
The research team is on its way back to Allan Hills in the coming days for two months of additional work. They will be collecting larger quantities of the 2 million-year-old ice and searching for even older samples.
Read more at Science Daily
Alongside Ötzi the Iceman: A bounty of ancient mosses and liverworts
Buried alongside the famous Ötzi the Iceman are at least 75 species of bryophytes -- mosses and liverworts -- which hold clues to Ötzi's surroundings, according to a study released October 30, 2019 in the open-access journal PLOS ONE by James Dickson of the University of Glasgow, UK and colleagues at the University of Innsbruck.
Ötzi the Iceman is a remarkable 5,300-year-old human specimen found frozen in ice approximately 3,200 meters above sea level in the Italian Alps. He was frozen alongside his clothing and gear as well as an abundant assemblage of plants and fungi. In this study, Dickson and colleagues aimed to identify the mosses and liverworts preserved alongside the Iceman.
Today, 23 bryophyte species live in the area near where Ötzi was found, but inside the ice the researchers identified thousands of preserved bryophyte fragments representing at least 75 species. It is the only site at such high altitude with bryophytes preserved over thousands of years. Notably, the assemblage includes a variety of mosses ranging from low-elevation to high-elevation species, as well as 10 species of liverworts, which are very rarely preserved in archaeological sites. Only 30% of the identified bryophytes appear to have been local species, with the rest having been transported to the spot in Ötzi's gut or clothing or by large mammalian herbivores whose droppings ended up frozen alongside the Iceman.
From these remains, the researchers infer that the bryophyte community in the Alps around 5,000 years ago was generally similar to that of today. Furthermore, the non-local species help to confirm the path Ötzi took to his final resting place. Several of the identified moss species thrive today in the lower Schnalstal valley, suggesting that Ötzi traveled along the valley during his ascent. This conclusion is corroborated by previous pollen research, which also pinpointed Schnalstal as the Iceman's likely route of ascent.
Dickson adds, "Most members of the public are unlikely to be knowledgeable about bryophytes (mosses and liverworts). However, no fewer than 75 species of these important investigative clues were found when the Iceman (aka Ötzi) was removed from the ice. They were recovered as mostly small scraps from the ice around him, from his clothes and gear, and even from his alimentary tract. Those findings prompted the questions: Where did the fragments come from? How precisely did they get there? How do they help our understanding of the Iceman?"
From Science Daily
After release into wild, vampire bats keep 'friends' made in captivity
Vampire bats that share food and groom each other in captivity are more likely to stick together when they're released back into the wild, find researchers in a study reported on October 31 in the journal Current Biology. While most previous evidence of "friendship" in animals comes from research in primates, these findings suggest that vampire bats can also form cooperative, friendship-like social relationships.
"The social relationships in vampire bats that we have been observing in captivity are pretty robust to changes in the social and physical environment -- even when our captive groups consist of a fairly random sample of bats from a wild colony," said Simon Ripperger of the Museum für Naturkunde, Leibniz-Institute for Evolution and Biodiversity Science in Berlin. "When we released these bats back into their wild colony, they chose to associate with the same individuals that were their cooperation partners during their time in captivity."
He and study co-lead author Gerald Carter of The Ohio State University say their findings show that the repeated social interactions they've observed in the lab aren't just an artifact of captivity. Not all relationships survived the transition from the lab back into the wild. But, similar to human experience, cooperative relationships or friendships among vampire bats appear to result from a combination of social preferences and external environmental influences or circumstances.
Carter has been studying vampire bat social relationships in captivity since 2010. For the new study, he wondered whether the same relationships and networks he'd been manipulating in the lab would persist or break down after their release in the wild, where the bats could go anywhere and associate with hundreds of other individuals.
Studying social networks in wild bats at very high resolution hadn't been possible until now. To do it, Simon Ripperger and his colleagues in electrical engineering and computer sciences developed novel proximity sensors. These tiny sensors, which are lighter than a penny, allowed them to capture social networks of entire social groups of bats and update them every few seconds. By linking what they knew about the bats' relationships in captivity to what they observed in the wild, they were able to make this leap toward better understanding social bonds in vampire bats.
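To give a sense of the kind of data such sensors produce, here is a hypothetical sketch -- not the study's actual analysis code -- that rolls timestamped proximity detections between tagged bats into a weighted association network, the sort of structure that can then be compared against grooming and food-sharing records from captivity. The bat IDs and detection records are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical illustration (not the study's analysis code): roll timestamped
# proximity detections between tagged bats into a weighted association network.

# Each record: (time in seconds, set of bat IDs detected within range of each other)
detections = [
    (0, {"bat_01", "bat_02"}),
    (2, {"bat_01", "bat_02", "bat_05"}),
    (4, {"bat_03", "bat_04"}),
    (6, {"bat_01", "bat_02"}),
    (8, {"bat_02", "bat_05"}),
]

edge_weights = Counter()
for _, group in detections:
    # Every pair of bats detected together at this moment counts as one association.
    for a, b in combinations(sorted(group), 2):
        edge_weights[(a, b)] += 1

total = sum(edge_weights.values())
for (a, b), count in edge_weights.most_common():
    print(f"{a} -- {b}: {count} co-detections ({count / total:.0%} of all associations)")
```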
The researchers found that shared grooming and food sharing among female bats in captivity over 22 months predicted whom they'd interact with in the wild. While not all relationships survived, the findings suggest that the bonds made in captivity weren't just a byproduct of confinement and limited options. The researchers report that the findings are consistent with the idea that both partner fidelity and partner switching play a role in regulating the bats' relationships.
"Our finding adds to a growing body of evidence that vampire bats form social bonds that are similar to the friendships we see in some primates," Carter said. "Studying animal relationships can be a source of inspiration and insight for understanding the stability of human friendships."
Read more at Science Daily
"The social relationships in vampire bats that we have been observing in captivity are pretty robust to changes in the social and physical environment -- even when our captive groups consist of a fairly random sample of bats from a wild colony," said Simon Ripperger of the Museum für Naturkunde, Leibniz-Institute for Evolution and Biodiversity Science in Berlin. "When we released these bats back into their wild colony, they chose to associate with the same individuals that were their cooperation partners during their time in captivity."
He and study co-lead author Gerald Carter of The Ohio State University say their findings show that repeated social interactions they've observed in the lab aren't just an artifact of captivity. Not all relationships survived the transition from the lab back into the wild. But, similar to human experience, cooperative relationships or friendships among vampire bats appear to result from a combination of social preferences together with external environment influences or circumstances.
Carter has been studying vampire bat social relationships in captivity since 2010. For the new study, he wondered whether the same relationships and networks he'd been manipulating in the lab would persist or break down after their release in the wild, where the bats could go anywhere and associate with hundreds of other individuals.
Studying social networks in wild bats at very high resolution hadn't been possible until now. To do it, Simon Ripperger and his colleagues in electrical engineering and computer sciences developed novel proximity sensors. These tiny sensors, which are lighter than a penny, allowed them to capture social networks of entire social groups of bats and update them every few seconds. By linking what they knew about the bats' relationships in captivity to what they observed in the wild, they were able to make this leap toward better understanding social bonds in vampire bats.
The researchers found that shared grooming and food sharing among female bats in captivity over 22 months predicted whom they'd interact with in the wild. While not all relationships survived, the findings suggest that the bonds made in captivity weren't just a byproduct of confinement and limited options. The researcher report that the findings are consistent with the idea that both partner fidelity and partner switching play a role in regulating the bats' relationships.
"Our finding adds to a growing body of evidence that vampire bats form social bonds that are similar to the friendships we see in some primates," Carter said. "Studying animal relationships can be a source of inspiration and insight for understanding the stability of human friendships."
Read more at Science Daily
Microrobots clean up radioactive waste
According to some experts, nuclear power holds great promise for meeting the world's growing energy demands without generating greenhouse gases. But scientists need to find a way to remove radioactive isotopes, both from wastewater generated by nuclear power plants and from the environment in case of a spill. Now, researchers reporting in ACS Nano have developed tiny, self-propelled robots that remove radioactive uranium from simulated wastewater.
The accidental release of radioactive waste, such as what occurred in the Chernobyl and Fukushima nuclear plant disasters, poses large threats to the environment, humans and wildlife. Scientists have developed materials to capture, separate, remove and recover radioactive uranium from water, but the materials have limitations. One of the most promising recent approaches is the use of metal-organic frameworks (MOFs) -- compounds that can trap specific substances, including radioactive uranium, within their porous structures. Martin Pumera and colleagues wanted to add a micromotor to a rod-shaped MOF called ZIF-8 to see if it could quickly clean up radioactive waste.
To make their self-propelled microrobots, the researchers designed ZIF-8 rods with diameters about 1/15 that of a human hair. The researchers added iron atoms and iron oxide nanoparticles to stabilize the structures and make them magnetic, respectively. Catalytic platinum nanoparticles placed at one end of each rod converted hydrogen peroxide "fuel" in the water into oxygen bubbles, which propelled the microrobots at a speed of about 60 times their own length per second. In simulated radioactive wastewater, the microrobots removed 96% of the uranium in an hour. The team collected the uranium-loaded rods with a magnet and stripped off the uranium, allowing the tiny robots to be recycled. The self-propelled microrobots could someday help in the management and remediation of radioactive waste, the researchers say.
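As a rough back-of-the-envelope illustration (not a calculation from the paper), the quoted figures can be turned into absolute numbers; the hair width, rod length and the assumption of first-order uptake kinetics below are placeholders chosen only to make the arithmetic concrete.

import math

hair_diameter_um = 70.0                  # assumption: a typical human hair is ~70 micrometers wide
rod_diameter_um = hair_diameter_um / 15  # "about 1/15 that of a human hair"
rod_length_um = 15.0                     # assumption: illustrative rod length, not reported above

speed_um_per_s = 60 * rod_length_um      # "about 60 times their own length per second"

# 96% uranium removal in one hour; assuming simple first-order uptake,
# exp(-k * t) = 1 - 0.96 gives the implied rate constant k.
t_min = 60.0
k_per_min = -math.log(1 - 0.96) / t_min

print(f"rod diameter   ~{rod_diameter_um:.1f} micrometers")
print(f"swimming speed ~{speed_um_per_s:.0f} micrometers per second (for an assumed {rod_length_um:.0f} micrometer rod)")
print(f"implied first-order removal rate ~{k_per_min:.3f} per minute")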
From Science Daily
Astronomers catch wind rushing out of galaxy
Exploring the influence of galactic winds from a distant galaxy called Makani, UC San Diego's Alison Coil, Rhodes College's David Rupke and a group of collaborators from around the world made a novel discovery. Published in Nature, their study's findings provide direct evidence for the first time of the role of galactic winds -- ejections of gas from galaxies -- in creating the circumgalactic medium (CGM), which exists in the regions around galaxies and plays an active role in their cosmic evolution. The unique composition of Makani -- meaning wind in Hawaiian -- lent itself to the breakthrough findings.
"Makani is not a typical galaxy," noted Coil, a physics professor at UC San Diego. "It's what's known as a late-stage major merger -- two recently combined similarly massive galaxies, which came together because of the gravitational pull each felt from the other as they drew nearer. Galaxy mergers often lead to starburst events, when a substantial amount of gas present in the merging galaxies is compressed, resulting in a burst of new star births. Those new stars, in the case of Makani, likely caused the huge outflows -- either in stellar winds or at the end of their lives when they exploded as supernovae."
Coil explained that most of the gas in the universe inexplicably appears in the regions surrounding galaxies -- not in the galaxies. Typically, when astronomers observe a galaxy, they are not witnessing it undergoing dramatic events -- big mergers, the rearrangement of stars, the creation of multiple stars or driving huge, fast winds.
"While these events may occur at some point in a galaxy's life, they'd be relatively brief," noted Coil. "Here, we're actually catching it all right as it's happening through these huge outflows of gas and dust."
Coil and Rupke, the paper's first author, used data collected from the W. M. Keck Observatory's new Keck Cosmic Web Imager (KCWI) instrument, combined with images from the Hubble Space Telescope and the Atacama Large Millimeter Array (ALMA), to draw their conclusions. The KCWI data provided what the researchers call the "stunning detection" of the ionized oxygen gas to extremely large scales, well beyond the stars in the galaxy. It allowed them to distinguish a fast gaseous outflow launched from the galaxy a few million years ago, from a gas outflow launched hundreds of millions of years earlier that has since slowed significantly.
"The earlier outflow has flowed to large distances from the galaxy, while the fast, recent outflow has not had time to do so," summarized Rupke, associate professor of physics at Rhodes College.
From the Hubble, the researchers procured images of Makani's stars, showing it to be a massive, compact galaxy that resulted from a merger of two once separate galaxies. From ALMA, they could see that the outflow contains molecules as well as atoms. The data sets indicated that with a mixed population of old, middle-age and young stars, the galaxy might also contain a dust-obscured accreting supermassive black hole. This suggests to the scientists that Makani's properties and timescales are consistent with theoretical models of galactic winds.
"In terms of both their size and speed of travel, the two outflows are consistent with their creation by these past starburst events; they're also consistent with theoretical models of how large and fast winds should be if created by starbursts. So observations and theory are agreeing well here," noted Coil.
Rupke noticed that the hourglass shape of Makani's nebula is strongly reminiscent of similar galactic winds in other galaxies, but that Makani's wind is much larger than in other observed galaxies.
"This means that we can confirm it's actually moving gas from the galaxy into the circumgalactic regions around it, as well as sweeping up more gas from its surroundings as it moves out," Rupke explained. "And it's moving a lot of it -- at least one to 10 percent of the visible mass of the entire galaxy -- at very high speeds, thousands of kilometers per second."
Rupke also noted that while astronomers are converging on the idea that galactic winds are important for feeding the CGM, most of the evidence has come from theoretical models or observations that don't encompass the entire galaxy.
Read more at Science Daily
"Makani is not a typical galaxy," noted Coil, a physics professor at UC San Diego. "It's what's known as a late-stage major merger -- two recently combined similarly massive galaxies, which came together because of the gravitational pull each felt from the other as they drew nearer. Galaxy mergers often lead to starburst events, when a substantial amount of gas present in the merging galaxies is compressed, resulting in a burst of new star births. Those new stars, in the case of Makani, likely caused the huge outflows -- either in stellar winds or at the end of their lives when they exploded as supernovae."
Coil explained that most of the gas in the universe inexplicably appears in the regions surrounding galaxies -- not in the galaxies. Typically, when astronomers observe a galaxy, they are not witnessing it undergoing dramatic events -- big mergers, the rearrangement of stars, the creation of multiple stars or driving huge, fast winds.
"While these events may occur at some point in a galaxy's life, they'd be relatively brief," noted Coil. "Here, we're actually catching it all right as it's happening through these huge outflows of gas and dust."
Coil and Rupke, the paper's first author, used data collected from the W. M. Keck Observatory's new Keck Cosmic Web Imager (KCWI) instrument, combined with images from the Hubble Space Telescope and the Atacama Large Millimeter Array (ALMA), to draw their conclusions. The KCWI data provided what the researchers call the "stunning detection" of the ionized oxygen gas to extremely large scales, well beyond the stars in the galaxy. It allowed them to distinguish a fast gaseous outflow launched from the galaxy a few million year ago, from a gas outflow launched hundreds of millions of years earlier that has since slowed significantly.
"The earlier outflow has flowed to large distances from the galaxy, while the fast, recent outflow has not had time to do so," summarized Rupke, associate professor of physics at Rhodes College.
From the Hubble, the researchers procured images of Makani's stars, showing it to be a massive, compact galaxy that resulted from a merger of two once separate galaxies. From ALMA, they could see that the outflow contains molecules as well as atoms. The data sets indicated that with a mixed population of old, middle-age and young stars, the galaxy might also contain a dust-obscured accreting supermassive black hole. This suggests to the scientists that Makani's properties and timescales are consistent with theoretical models of galactic winds.
"In terms of both their size and speed of travel, the two outflows are consistent with their creation by these past starburst events; they're also consistent with theoretical models of how large and fast winds should be if created by starbursts. So observations and theory are agreeing well here," noted Coil.
Rupke noticed that the hourglass shape of Makani's nebula is strongly reminiscent of similar galactic winds in other galaxies, but that Makani's wind is much larger than in other observed galaxies.
"This means that we can confirm it's actually moving gas from the galaxy into the circumgalactic regions around it, as well as sweeping up more gas from its surroundings as it moves out," Rupke explained. "And it's moving a lot of it -- at least one to 10 percent of the visible mass of the entire galaxy -- at very high speeds, thousands of kilometers per second."
Rupke also noted that while astronomers are converging on the idea that galactic winds are important for feeding the CGM, most of the evidence has come from theoretical models or observations that don't encompass the entire galaxy.
Read more at Science Daily
Oct 30, 2019
Simulations explain giant exoplanets with eccentric, close-in orbits
As planetary systems evolve, gravitational interactions between planets can fling some of them into eccentric elliptical orbits around the host star, or even out of the system altogether. Smaller planets should be more susceptible to this gravitational scattering, yet many gas giant exoplanets have been observed with eccentric orbits very different from the roughly circular orbits of the planets in our own solar system.
Surprisingly, the planets with the highest masses tend to be those with the highest eccentricities, even though the inertia of a larger mass should make it harder to budge from its initial orbit. This counter-intuitive observation prompted astronomers at UC Santa Cruz to explore the evolution of planetary systems using computer simulations. Their results, reported in a paper published in Astrophysical Journal Letters, suggest a crucial role for a giant-impacts phase in the evolution of high-mass planetary systems, leading to collisional growth of multiple giant planets with close-in orbits.
"A giant planet is not as easily scattered into an eccentric orbit as a smaller planet, but if there are multiple giant planets close to the host star, their gravitational interactions are more likely scatter them into eccentric orbits," explained first author Renata Frelikh, a graduate student in astronomy and astrophysics at UC Santa Cruz.
Frelikh performed hundreds of simulations of planetary systems, starting each one with 10 planets in circular orbits and varying the initial total mass of the system and the masses of individual planets. As the systems evolved for 20 million simulated years, dynamical instabilities led to collisions and mergers to form larger planets as well as gravitational interactions that ejected some planets and scattered others into eccentric orbits.
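As an illustrative sketch of that kind of starting configuration (not the authors' code), the snippet below builds a ten-planet system on circular orbits with an assumed total planet mass and measures how tightly packed the planets are in units of their mutual Hill radii, a standard rough indicator of whether scattering, collisions or ejections are likely.

import numpy as np

rng = np.random.default_rng(0)

M_STAR = 1.0                # stellar mass in solar masses
N_PLANETS = 10
TOTAL_PLANET_MASS = 0.01    # assumption: roughly ten Jupiter masses in total, a high-mass case

masses = rng.dirichlet(np.ones(N_PLANETS)) * TOTAL_PLANET_MASS  # random split of the total mass
a = np.sort(rng.uniform(0.5, 5.0, N_PLANETS))                   # semi-major axes in AU
e = np.zeros(N_PLANETS)                                         # circular starting orbits

def mutual_hill_radius(m1, m2, a1, a2, m_star=M_STAR):
    # Mutual Hill radius of two neighbouring planets, in AU.
    return 0.5 * (a1 + a2) * ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)

# Separation of each adjacent pair in units of their mutual Hill radius.
# Spacings of roughly ten mutual Hill radii or fewer are commonly taken as a
# sign that the system is dynamically packed and prone to instability.
r_hill = mutual_hill_radius(masses[:-1], masses[1:], a[:-1], a[1:])
print(np.round((a[1:] - a[:-1]) / r_hill, 1))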
Analyzing the results of these simulations collectively, the researchers found that the planetary systems with the most initial total mass produced the biggest planets and the planets with the highest eccentricities.
"Our model naturally explains the counter-intuitive correlation of mass and eccentricity," Frelikh said.
Coauthor Ruth Murray-Clay, the Gunderson professor of theoretical astrophysics at UC Santa Cruz, said the only non-standard assumption in their model is that there can be several gas giant planets in the inner part of a planetary system. "If you make that assumption, all the other behavior follows," she said.
According to the classic model of planet formation, based on our own solar system, there is not enough material in the inner part of the protoplanetary disk around a star to make gas giant planets, so only small rocky planets form in the inner part of the system and giant planets form farther out. Yet astronomers have detected many gas giants orbiting close to their host stars. Because they are relatively easy to detect, these "hot Jupiters" accounted for the majority of early exoplanet discoveries, but they may be an uncommon outcome of planet formation.
"This may be an unusual process," Murray-Clay said. "We're suggesting that it is more likely to happen when the initial mass in the disk is high, and that high-mass giant planets are produced during a phase of giant impacts."
This giant-impacts phase is analogous to the final stage in the assembly of our own solar system, when the moon was formed in the aftermath of a collision between Earth and another planet. "Because of our solar system bias, we tend to think of impacts as happening to rocky planets and ejection as happening to giant planets, but there is a whole spectrum of possible outcomes in the evolution of planetary systems," Murray-Clay said.
According to Frelikh, collisional growth of high-mass giant planets should be most efficient in the inner regions, because encounters between planets in the outer parts of the system are more likely to lead to ejections than mergers. Mergers producing high-mass planets should peak at a distance from the host star of around 3 astronomical units (AU, the distance from Earth to the sun), she said.
Read more at Science Daily
The secrets behind a creepy photographic technique
In the 1960s, a French artist named Jean-Pierre Sudre began experimenting with an obscure 19th-century photographic process, creating dramatic black-and-white photographs with ethereal veiling effects. Sudre christened the process "mordançage," the French word for "etching." Since then, other photographers have used and refined mordançage to create unique works of art. Now, researchers reporting in the ACS journal Analytical Chemistry have unveiled the mysterious chemistry behind the process.
In mordançage, a fully developed black-and-white photograph is immersed in a solution containing copper (II) chloride, hydrogen peroxide and acetic acid. The solution bleaches the photo to a pale yellow color and partially lifts formerly black areas of the print away from the paper backing. Then, the photographer rinses off the mordançage solution and redevelops the print to restore the black color. When the photo is dried and pressed flat, black areas that had lifted from the paper form the veils. Caroline Fudala and Rebecca Jones wanted to better understand the chemical details of this process.
The researchers methodically studied the technique and determined that the hydrogen peroxide and acetic acid soften the photographic paper. This allows copper (II) chloride to permeate the paper and oxidize the metallic silver -- which colors the dark areas of the print -- to silver chloride. The softened surface layers lift off as veils. Then, during redevelopment, the veils darken when silver chloride is reduced back to metallic silver. Et voilà, a spooky photo that's just right for a scary holiday...
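A simplified reaction scheme consistent with that description (inferred from the text as an illustration, not taken from the paper) can be written as:

\begin{align*}
  \text{Bleach and lift:}\quad & \mathrm{Ag} + \mathrm{Cu^{2+}} + \mathrm{Cl^{-}} \longrightarrow \mathrm{AgCl} + \mathrm{Cu^{+}} \\
  \text{Redevelopment:}\quad  & \mathrm{AgCl} + e^{-}\ (\text{from the developer}) \longrightarrow \mathrm{Ag} + \mathrm{Cl^{-}}
\end{align*}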
From Science Daily
Can't stop putting your hand in the candy dish? Scientists may have found why
A national team of scientists has identified a circuit in the brain that appears to be associated with psychiatric disorders ranging from overeating to gambling, drug abuse and even Parkinson's disease.
"We discovered the brain connections that keep impulsivity in check," said Scott Kanoski, a neuroscientist and associate professor at USC Dornsife College of Letters, Arts and Sciences. "The key to this system is a neuropeptide that we've been focusing on, melanin-concentrating hormone, in studies on appetite and eating."
The study was published Tuesday in the journal Nature Communications.
Melanin-concentrating hormone (MCH) is signaled by brain cells in a portion of the hypothalamus, a cone-shaped area of the brain that sits above the pituitary gland. Research has indicated MCH is linked with appetite for food or drugs, but until now scientists hadn't fully understood how it affects impulse control.
Can't wait for donuts
The scientists conducted a series of studies on rats that demonstrated that impulsivity is a separate function from hunger and food motivation.
In one task, a rat could press a lever and receive a treat that Kanoski likened to a "little donut hole" that was high in fats and carbohydrates. The release was timed, however, meaning the rat had to wait 20 seconds before pressing the lever would deliver another one. The rat would become eager and would sometimes hit the lever before the time had passed, forcing the clock to reset and the rat to wait again for the next opportunity for a treat.
In another task, rats had a choice between two levers. One lever would release an immediate single treat. The other would release a batch of five treats -- but every 30-45 seconds.
The rats would press the lever for the single treat more frequently than the other lever, even though the other lever would have delivered far more food.
"They don't just sit there and wait," Kanoski said. "They worked harder to achieve the same, or even fewer, number of pellets."
The struggle with impulsivity
The scientists tested lowering and raising the levels of MCH in the rats' brains through various methods.
"We would drive the system up, and then we would see the animals be more impulsive," Kanoski said. "And if we reduced function we thought they would be less impulsive, but instead we found that they were more so. Either way, they had elevated impulsivity."
Based on anatomical brain scans, the scientists were able to identify a neural pathway for impulse control. Neurons in the lateral hypothalamus signal MCH to other neurons in the ventral hippocampus, an area of the brain associated with emotions, memory and inhibitory control.
Read more at Science Daily
"We discovered the brain connections that keep impulsivity in check," said Scott Kanoski, a neuroscientist and associate professor at USC Dornsife College of Letters, Arts and Sciences. "The key to this system is a neuropeptide that we've been focusing on, melanin-concentrating hormone, in studies on appetite and eating."
The study was published Tuesday in the journal Nature Communications.
Melanin-concentrating hormone (MCH) is signaled by brain cells in a portion of the hypothalamus, a cone-shaped area of the brain that sits above the pituitary gland. Research has indicated MCH is linked with appetite for food or drugs, but until now scientists hadn't fully understood how it affects impulse control.
Can't wait for donuts
The scientists conducted a series of studies on rats that demonstrated that impulsivity is a separate function from hunger and food motivation.
In one task, a rat could press a lever and receive a treat that Kanoski likened to a "little donut hole" that was high in fats and carbohydrates. The release was timed, however, which meant the rat would have to wait 20 seconds to successfully press the lever and receive another one. The rat would become eager and and would sometimes hit the lever before the time had passed, forcing the clock to reset and having to wait again for the next opportunity for a treat.
In another task, rats had a choice between two levers. One lever would release an immediate single treat. The other would release a batch of five treats -- but every 30-45 seconds.
The rats would press the lever for the single treat more frequently than the other lever, even though it would have delivered far more food.
"They don't just sit there and wait," Kanoski said. "They worked harder to achieve the same, or even fewer, number of pellets."
The struggle with impulsivity
The scientists tested lowering and raising the levels of MCH in the rats' brains through various methods.
"We would drive the system up, and then we would see the animals be more impulsive," Kanoski said. "And if we reduced function we thought they would be less impulsive, but instead we found that they were more so. Either way, they had elevated impulsivity."
Based on anatomical brain scans, the scientists were able to identify a neural pathway for impulse control. Neurons in the lateral hypothalamus signal MCH to other neurons in the ventral hippocampus, an area of the brain associated with emotions, memory and inhibitory control.
Read more at Science Daily
Name that tune: Brain takes just 100 to 300 milliseconds to recognize familiar music
The human brain can recognise a familiar song within 100 to 300 milliseconds, highlighting the deep hold favourite tunes have on our memory, a UCL study finds.
Anecdotally the ability to recall popular songs is exemplified in game shows such as 'Name That Tune', where contestants can often identify a piece of music in just a few seconds.
For this study, published in Scientific Reports, researchers at the UCL Ear Institute wanted to find out exactly how fast the brain responded to familiar music, as well as the temporal profile of processes in the brain which allow for this.
The main participant group consisted of five men and five women who had each provided five songs, which were very familiar to them. For each participant researchers then chose one of the familiar songs and matched this to a tune, which was similar (in tempo, melody, harmony, vocals and instrumentation) but which was known to be unfamiliar to the participant.
Participants then passively listened to 100 snippets (each less than a second) of both the familiar and unfamiliar song, presented in random order; in total they heard around 400 seconds of audio. Researchers used electro-encephalography (EEG) imaging, which records electrical activity in the brain, and pupillometry (a technique that measures pupil diameter -- considered a measure of arousal).
The study found the human brain recognised 'familiar' tunes from 100 milliseconds (0.1 of a second) of sound onset, with the average recognition time between 100ms and 300ms. This was first revealed by rapid pupil dilation, likely linked to increased arousal associated with the familiar sound, followed by cortical activation related to memory retrieval.
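A toy version of that kind of analysis (run on synthetic data, not the study's pipeline) averages familiar and unfamiliar trials and looks for the earliest time point at which the two responses reliably diverge.

import numpy as np

rng = np.random.default_rng(1)
fs = 1000                          # sampling rate in Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)      # 0-600 ms after snippet onset

def simulate(n_trials, onset_s, effect):
    # Noisy single-trial responses that ramp up after onset_s (effect=0 means no ramp).
    ramp = effect * np.clip(t - onset_s, 0, None)
    return ramp + rng.normal(0, 0.05, size=(n_trials, t.size))

familiar = simulate(100, onset_s=0.15, effect=1.0)    # assumed 150 ms familiarity effect
unfamiliar = simulate(100, onset_s=0.15, effect=0.0)  # same noise, no familiarity effect

# Welch-style t statistic at every time point; the first excursion beyond a
# simple threshold that lasts at least 10 ms is taken as the divergence onset.
diff = familiar.mean(axis=0) - unfamiliar.mean(axis=0)
se = np.sqrt(familiar.var(axis=0, ddof=1) / 100 + unfamiliar.var(axis=0, ddof=1) / 100)
above = np.abs(diff / se) > 3.0
sustained = np.convolve(above, np.ones(10), mode="valid") == 10
if sustained.any():
    print(f"estimated divergence onset: {t[np.argmax(sustained)] * 1000:.0f} ms")
else:
    print("no divergence detected")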
No such differences were found in a control group comprising international students who were unfamiliar with both the 'familiar' and the 'unfamiliar' songs.
Senior author, Professor Maria Chait, (UCL Ear Institute) said: "Our results demonstrate that recognition of familiar music happens remarkably quickly.
"These findings point to very fast temporal circuitry and are consistent with the deep hold that highly familiar pieces of music have on our memory."
Professor Chait added: "Beyond basic science, understanding how the brain recognises familiar tunes is useful for various music-based therapeutic interventions.
"For instance, there is a growing interest in exploiting music to break through to dementia patients for whom memory of music appears well preserved despite an otherwise systemic failure of memory systems.
"Pinpointing the neural pathway and processes which support music identification may provide a clue to understanding the basis of this phenomena."
Study limitations
'Familiarity' is a multifaceted concept. In this study, songs were explicitly selected to evoke positive feelings and memories. Therefore, for the 'main' group the 'familiar' and 'unfamiliar' songs did not just differ in terms of recognisability but also in terms of emotional engagement and affect.
While the songs are referred to as 'familiar' and 'unfamiliar', the effects observed may also be linked with these other factors.
While care was taken in the song matching process, this was ultimately done by hand due to lack of availability of appropriate technology. Advancements in automatic processing of music may improve matching in the future.
Read more at Science Daily
Malaria pathogen under the X-ray microscope
Around 40 percent of humanity lives in regions affected by malaria, around 200 million people contract the disease every year, and an estimated 600,000 people die as a result. Anopheles mosquitoes that transmit malaria pathogens are spreading due to climate change. These pathogens are unicellular organisms (plasmodia) that settle inside the red blood cells of their hosts and metabolize hemoglobin there to grow and multiply.
The main avenue for dealing with the disease is treatment with active compounds from the quinoline family and, more recently, from the artemisinin family. However, exactly how these active compounds keep the pathogenic plasmodia in check has so far been a subject of controversy.
One thesis relates to the digestive process of the pathogenic plasmodia. Research has shown that plasmodia store large amounts of hemoglobin in their digestive vacuole, an organelle that resembles a bag. Digesting this hemoglobin releases iron-containing hemozoin molecules that the plasmodia cannot tolerate. The plasmodia manage to crystallize these toxic hemozoin molecules so that the molecules can no longer poison them. The idea was that active compounds might prevent the formation of hemozoin crystals and thus sabotage the plasmodia's detoxification process.
A team led by Sergey Kapishnikov from the University of Copenhagen and the Weizmann Institute of Science in Rehovot, Israel, together with Danish, Spanish, French and Berlin colleagues, has now investigated this process in infected blood cells for the first time. The blood cells were infected with the malaria pathogen Plasmodium falciparum and then mixed with different concentrations of bromoquine from the quinoline family.
Malaria pathogens in blood cells can only be examined in vivo and in their natural environment using X-ray microscopy at synchrotron sources. Other investigation methods, such as electron microscopy, require the pathogens to be dried and cut into ultra-thin slices.
At BESSY II, Stephan Werner and Peter Guttmann together with Sergey Kapishnikov were able to examine the samples using X-ray microscopy. "The blood samples are flash-frozen for the examination so that we can observe the pathogens in vivo and also produce three-dimensional X-ray tomography images," explains Guttmann. Further X-ray microscopy studies were carried out at the ALBA synchrotron light source in Barcelona.
Fluorescence spectromicroscopy at the European Synchrotron Radiation Facility ESRF in Grenoble made it possible to map the distribution of elements in blood cells. When combined with the cellular structure revealed by the three-dimensional X-ray images, the bromoquine distribution and its mode of action could be precisely interpreted. "We see in our images that the bromoquine accumulates at the surface of hemozoin crystals. This should lead to inhibition of the crystal growth and thus disrupt the detoxification process by the plasmodia parasites," explains Kapishnikov.
Read more at Science Daily
Oct 29, 2019
The homeland of modern humans
A study has concluded that the earliest ancestors of anatomically modern humans (Homo sapiens sapiens) emerged in a southern African 'homeland' and thrived there for 70 thousand years.
The breakthrough findings are published in the prestigious journal Nature today.
The authors propose that changes in Africa's climate triggered the first human explorations, which initiated the development of humans' genetic, ethnic and cultural diversity.
This study provides a window into the first 100 thousand years of modern humans' history.
DNA as a time capsule
"It has been clear for some time that anatomically modern humans appeared in Africa roughly 200 thousand years ago. What has been long debated is the exact location of this emergence and subsequent dispersal of our earliest ancestors," says study lead Professor Vanessa Hayes from the Garvan Institute of Medical Research and University of Sydney, and Extraordinary Professor at the University of Pretoria.
"Mitochondrial DNA acts like a time capsule of our ancestral mothers, accumulating changes slowly over generations. Comparing the complete DNA code, or mitogenome, from different individuals provides information on how closely they are related."
In their study, Professor Hayes and her colleagues collected blood samples to establish a comprehensive catalogue of modern humans' earliest mitogenomes from the so-called 'L0' lineage. "Our work would not have been possible without the generous contributions of local communities and study participants in Namibia and South Africa, which allowed us to uncover rare and new L0 sub-branches," says study author and public health Professor Riana Bornman from the University of Pretoria.
"We merged 198 new, rare mitogenomes to the current database of modern human's earliest known population, the L0 lineage. This allowed us to refine the evolutionary tree of our earliest ancestral branches better than ever before," says first author Dr Eva Chan from the Garvan Institute of Medical Research, who led the phylogenetic analyses.
By combining the L0 lineage timeline with the linguistic, cultural and geographic distributions of different sub-lineages, the study authors revealed that 200 thousand years ago, the first Homo sapiens sapiens maternal lineage emerged in a 'homeland' south of the Greater Zambezi River Basin region, which includes the entire expanse of northern Botswana into Namibia to the west and Zimbabwe to the east.
A homeland perfect for life to thrive
Investigating existing geological, archeological and fossil evidence, geologist Dr Andy Moore, from Rhodes University, revealed that the homeland region once held Africa's largest ever lake system, Lake Makgadikgadi.
"Prior to modern human emergence, the lake had begun to drain due to shifts in underlying tectonic plates. This would have created, a vast wetland, which is known to be one of the most productive ecosystems for sustaining life," says Dr Moore.
Modern humans' first migrations
The authors' new evolutionary timelines suggest that the ancient wetland ecosystem provided a stable ecological environment for modern humans' first ancestors to thrive for 70 thousand years.
"We observed significant genetic divergence in the modern humans' earliest maternal sub-lineages, that indicates our ancestors migrated out of the homeland between 130 and 110 thousand years ago," explains Professor Hayes. "The first migrants ventured northeast, followed by a second wave of migrants who travelled southwest. A third population remained in the homeland until today."
"In contrast to the northeasterly migrants, the southwesterly explorers appear to flourish, experiencing steady population growth," says Professor Hayes. The authors speculate that the success of this migration was most likely a result of adaptation to marine foraging, which is further supported by extensive archaeological evidence along the southern tip of Africa.
Climate effects
To investigate what may have driven these early human migrations, co-corresponding author Professor Axel Timmermann, Director of the IBS Center for Climate Physics at Pusan National University, analysed climate computer model simulations and geological data, which capture Southern Africa's climate history of the past 250 thousand years.
"Our simulations suggest that the slow wobble of Earth's axis changes summer solar radiation in the Southern Hemisphere, leading to periodic shifts in rainfall across southern Africa," says Professor Timmermann. "These shifts in climate would have opened green, vegetated corridors, first 130 thousand years ago to the northeast, and then around 110 thousand years ago to the southwest, allowing our earliest ancestors to migrate away from the homeland for the first time."
"These first migrants left behind a homeland population," remarks Professor Hayes. "Eventually adapting to the drying lands, maternal descendants of the homeland population can be found in the greater Kalahari region today."
Read more at Science Daily
Following in Darwin's footsteps: understanding the plant evolution of florist's gloxinia
More than 150 years ago, Charles Darwin's fascination with genetics and domestication catapulted the scientific world into new territory as scientists started to ask: How did a species evolve to be this way?
In a study published in Plants People Planet, a team led by Virginia Tech researchers discovered that in its 200 years of being cultivated and domesticated, florist's gloxinia, Sinningia speciosa, has reached tremendous levels of phenotypic, or physical, variation and originates from a single founder population.
"The hallmark here is that, with early stages of domestication, we see increased phenotypic variation but an overall decrease in genetic variation. So it's a paradox -- and we've made it even more of a paradox because we're showing that all of this phenotypic variation came from a single founder population," said Tomas Hasing, lead author and graduate student in the School of Plant and Environmental Sciencesin the College of Agriculture and Life Sciences.
Florist's gloxinia, a species originally documented by Darwin himself, was introduced to England in the 18th century. Since then, plant breeders have cultivated hundreds of strains by intentionally selecting for desired traits. Within 200 years -- a mere blink of an evolutionary eye -- florist's gloxinia reached the same levels of phenotypic variation as snapdragons, Antirrhinum spp. Snapdragons, however, have been cultivated for 2,000 years.
"Florist's gloxinia presents a clear domestication syndrome and rich phenotypic diversity. We already knew that it had a small, simple genome, but the complexity of its origin was a mystery that we needed to solve before we started to use it as a model," said Aureliano Bombarely, a former assistant professor in the School of Plant and Environmental Sciences. In 2014, he proposed the use of this species as a model to study genomic evolution during domestication.
To account for the plant's major aesthetic changes in such a short period of time, the team expected florist's gloxinia to have been cross-bred with other species at some point during its history. They used reduced representation sequencing of the genome to trace the origins of the plant back to its native home of Rio de Janeiro, Brazil, but found no evidence of hybridization, indicating that most varieties of florist's gloxinia come from a single founder population just outside of the city.
The discovery of the single founder population explains why florist's gloxinia has such low genetic variation -- cultivating plants in captivity allows breeders to select for different physical traits like color, shape, or size and purge unwanted genetic variation from a population. When beneficial mutations arise, breeders can increase a mutation's frequency by breeding it into the population. Ultimately, the accumulation of small changes from mutations led to the plant's high levels of phenotypic diversity.
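A toy Wright-Fisher simulation (purely illustrative, not part of the study) shows how quickly a small founder population loses genetic variation through drift alone; the founder sizes, locus count and generation count below are arbitrary.

import numpy as np

rng = np.random.default_rng(2)

def mean_heterozygosity(pop_size, n_loci=500, generations=200, p0=0.5):
    # Track expected heterozygosity 2p(1-p), averaged over unlinked loci.
    p = np.full(n_loci, p0)                       # starting allele frequencies
    for _ in range(generations):
        # Binomial resampling of 2N allele copies per locus models genetic drift.
        p = rng.binomial(2 * pop_size, p) / (2 * pop_size)
    return np.mean(2 * p * (1 - p))

for n in (20, 200, 2000):                         # assumed founder population sizes
    print(f"N={n:5d}: mean heterozygosity after 200 generations = {mean_heterozygosity(n):.3f}")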
"Most studies conducted on domesticated plants are focused on food crops, but studying how ornamental crops are domesticated expands our understanding of plant genetics and patterns. This ultimately benefits agriculture as a whole," said David Haak, assistant professor in the School of Plant and Environmental Sciences and an affiliated member of the Global Change Center, housed within the Fralin Life Sciences Institute.
The commercial cultivation of flowers, known as floriculture, recently ranked as the ninth highest-grossing of Virginia's top 20 agricultural products, generating $146 million annually. An increased understanding of plant genetics will allow floriculturists to grow and harvest flowers more efficiently and generate more income.
Read more at Science Daily
Faith, truth and forgiveness: How your brain processes abstract thoughts
Researchers at Carnegie Mellon University have leveraged machine learning to interpret human brain scans, allowing the team to uncover the regions of the brain behind how abstract concepts, like justice, ethics and consciousness, form. The results of this study are available online in the October 29 issue of Cerebral Cortex.
"Humans have the unique ability to construct abstract concepts that have no anchor in the physical world, but we often take this ability for granted," said Marcel Just, the D.O. Hebb University Professor of Psychology at CMU's Dietrich College of Humanities and Social Sciences and senior author on the paper. "In this study, we have shown that newly identified components of meaning used by the human brain that acts like an indexing system, similar to a library's card catalog, to compose the meaning of abstract concepts."
The ability of humans to think abstractly plays a central role in scientific and intellectual progress. Unlike concrete concepts, like hammer, abstract concepts, like ethics, have no obvious home in the parts of the brain that deal with perception or control of our bodies.
"Most of our understanding of how the brain processes objects and concepts is based on how our five senses take in information," said Robert Vargas, a CMU graduate student in Just's lab and first author on the paper. "It becomes difficult to describe the neural environment of abstract thoughts because many of the brain's mental tools to process them are themselves abstract."
In this study, Just and his team scanned the brains of nine participants using functional MRI. The team sifted through the data using machine learning tools to identify patterns for each of the 28 abstract concepts, and the algorithm correctly identified each concept with a mean rank accuracy of 0.82 (where chance level is 0.50).
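As a rough guide to what that accuracy figure means, the sketch below shows one common way a rank accuracy can be computed in pattern-decoding studies; the correlation-based matching and the toy data are assumptions, not the authors' pipeline.

```python
import numpy as np

# Sketch of a rank-accuracy score: the predicted activation pattern for a
# concept is compared against the observed patterns of all 28 concepts, and
# the score reflects how highly the correct concept ranks.
# 1.0 means it ranked first; 0.5 is the chance level.

def rank_accuracy(predicted, observed_patterns, correct_index):
    sims = [np.corrcoef(predicted, obs)[0, 1] for obs in observed_patterns]
    order = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)
    rank = order.index(correct_index)          # 0 = best match
    return 1.0 - rank / (len(observed_patterns) - 1)

# Toy data: random "voxel" patterns for 28 concepts and a noisy prediction
rng = np.random.default_rng(0)
patterns = rng.normal(size=(28, 100))
noisy_prediction = patterns[3] + 0.5 * rng.normal(size=100)
print(rank_accuracy(noisy_prediction, patterns, correct_index=3))
```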
Just said these abstract concepts are constructed by three dimensions of meaning in the brain. The first dimension corresponds to regions associated with language. For example, the concept of ethics might be linked to other words like rules and morals. A person must first understand the words to construct the additional meaning of ethics. The second dimension defines abstract concepts in terms of reference, either to self or an external source. For example, spirituality refers to self, while causality is external to the self. The final dimension is rooted in social constructs. There is an inherent social component to the concepts of pride and gossip.
"For me, the most exciting result of this study was that we were able to predict the neural activation patterns for individual abstract concepts across people," Vargas said. "It is wild to think that my concept of probability and spirituality is neurally similar to the next person's, even if their experience of spirituality is different."
During the scan, each concept was presented visually and the participant was allowed to think about this idea for three seconds. The participants saw the set of words six times.
The 28 concepts covered in the study span seven categories: mathematics (subtraction, equality, probability and multiplication); scientific (gravity, force, heat and acceleration); social (gossip, intimidation, forgiveness and compliment); emotion (happiness, sadness, anger and pride); law (contract, ethics, crime and exoneration); metaphysical (causality, consciousness, truth and necessity) and religiosity (deity, spirituality, sacrilege and faith).
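Written out as data, the stimulus set is simply seven categories of four concepts each; the snippet below is just a convenient restatement of the list above.

```python
# The stimulus set as listed above: seven categories of four concepts each.
concepts = {
    "mathematics":  ["subtraction", "equality", "probability", "multiplication"],
    "scientific":   ["gravity", "force", "heat", "acceleration"],
    "social":       ["gossip", "intimidation", "forgiveness", "compliment"],
    "emotion":      ["happiness", "sadness", "anger", "pride"],
    "law":          ["contract", "ethics", "crime", "exoneration"],
    "metaphysical": ["causality", "consciousness", "truth", "necessity"],
    "religiosity":  ["deity", "spirituality", "sacrilege", "faith"],
}
assert sum(len(v) for v in concepts.values()) == 28
```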
Read more at Science Daily
"Humans have the unique ability to construct abstract concepts that have no anchor in the physical world, but we often take this ability for granted," said Marcel Just, the D.O. Hebb University Professor of Psychology at CMU's Dietrich College of Humanities and Social Sciences and senior author on the paper. "In this study, we have shown that newly identified components of meaning used by the human brain that acts like an indexing system, similar to a library's card catalog, to compose the meaning of abstract concepts."
The ability of humans to think abstractly plays a central role in scientific and intellectual progress. Unlike concrete concepts, like hammer, abstract concepts, like ethics, have no obvious home in the parts of the brain that deal with perception or control of our bodies.
"Most of our understanding of how the brain processes objects and concepts is based on how our five senses take in information," said Robert Vargas, a CMU graduate student in Just's lab and first author on the paper. "It becomes difficult to describe the neural environment of abstract thoughts because many of the brain's mental tools to process them are themselves abstract."
In this study, Just and his team scanned the brains of nine participants using a functional MRI. The team sifted through the data using machine learning tools to identify patterns for each of the 28 abstract concepts. They applied the machine learning algorithm to correctly identified each concept (with a mean rank accuracy of 0.82, where chance level is 0.50).
Just said these abstract concepts are constructed by three dimensions of meaning in the brain. The first dimension corresponds to regions associated with language. For example, the concept of ethics might be linked to other words like rules and morals. A person must first understand the words to construct the additional meaning of ethics. The second dimension defines abstract concepts in terms of reference, either to self or an external source. For example, spirituality refers to self, while causality is external to the self. The final dimension is rooted in social constructs. There is an inherent social component to the concepts of pride and gossip.
"For me, the most exciting result of this study was that we were able to predict the neural activation patterns for individual abstract concepts across people," Vargas said. "It is wild to think that my concept of probability and spirituality is neurally similar to the next person's, even if their experience of spirituality is different."
During the scan, each concept was presented visually and the participant was allowed to think about this idea for three seconds. The participants saw the set of words six times.
The 28 concepts covered in the study span seven categories: mathematics (subtraction, equality, probability and multiplication); scientific (gravity, force, heat and acceleration); social (gossip, intimidation, forgiveness and compliment); emotion (happiness, sadness, anger and pride); law (contract, ethics, crime and exoneration); metaphysical (causality, consciousness, truth and necessity) and religiosity (deity, spirituality, sacrilege and faith).
Read more at Science Daily
ESO telescope reveals what could be the smallest dwarf planet yet in the solar system
As an object in the main asteroid belt, Hygiea immediately satisfies three of the four requirements for classification as a dwarf planet: it orbits around the Sun, it is not a moon and, unlike a planet, it has not cleared the neighbourhood around its orbit. The final requirement is that it has enough mass for its own gravity to pull it into a roughly spherical shape. This is what VLT observations have now revealed about Hygiea.
"Thanks to the unique capability of the SPHERE instrument on the VLT, which is one of the most powerful imaging systems in the world, we could resolve Hygiea's shape, which turns out to be nearly spherical," says lead researcher Pierre Vernazza from the Laboratoire d'Astrophysique de Marseille in France. "Thanks to these images, Hygiea may be reclassified as a dwarf planet, so far the smallest in the Solar System."
The team also used the SPHERE observations to constrain Hygiea's size, putting its diameter at just over 430 km. Pluto, the most famous of dwarf planets, has a diameter close to 2400 km, while Ceres is close to 950 km in size.
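Since volume scales with the cube of the diameter for roughly spherical bodies, those figures can be put in perspective with a quick back-of-the-envelope calculation:

```python
# Rough comparison of the quoted diameters, assuming roughly spherical bodies
# so that volume scales with the cube of the diameter.
diameters_km = {"Hygiea": 430, "Ceres": 950, "Pluto": 2400}

for name, d in diameters_km.items():
    ratio = (d / diameters_km["Hygiea"]) ** 3
    print(f"{name}: {d} km across, about {ratio:.0f}x Hygiea's volume")
# Ceres encloses roughly 11 times Hygiea's volume, Pluto roughly 170 times.
```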
Surprisingly, the observations also revealed that Hygiea lacks the very large impact crater that scientists expected to see on its surface, the team report in the study published today in Nature Astronomy. Hygiea is the main member of one of the largest asteroid families, with close to 7000 members that all originated from the same parent body. Astronomers expected the event that led to the formation of this numerous family to have left a large, deep mark on Hygiea.
"This result came as a real surprise as we were expecting the presence of a large impact basin, as is the case on Vesta," says Vernazza. Although the astronomers observed Hygiea's surface with a 95% coverage, they could only identify two unambiguous craters. "Neither of these two craters could have been caused by the impact that originated the Hygiea family of asteroids whose volume is comparable to that of a 100 km-sized object. They are too small," explains study co-author Miroslav Bro? of the Astronomical Institute of Charles University in Prague, Czech Republic.
The team decided to investigate further. Using numerical simulations, they deduced that Hygiea's spherical shape and large family of asteroids are likely the result of a major head-on collision with a large projectile of diameter between 75 and 150 km. Their simulations show this violent impact, thought to have occurred about 2 billion years ago, completely shattered the parent body. Once the left-over pieces reassembled, they gave Hygiea its round shape and thousands of companion asteroids. "Such a collision between two large bodies in the asteroid belt is unique in the last 3-4 billion years," says Pavel Ševeček, a PhD student at the Astronomical Institute of Charles University who also participated in the study.
Read more at Science Daily
Oct 28, 2019
Mutated ferns shed light on ancient mass extinction
Most researchers believe that the mass extinction 201 million years ago was caused by the release of CO2 from volcanism, with global warming as a consequence. Now, new data from fern spores suggest there might have been more to it than that.
At the end of the Triassic, around 201 million years ago, three out of four species on Earth disappeared. Up until now, scientists believed the cause of the catastrophe to be the onset of large-scale volcanism resulting in abrupt climate change. Now, new research suggests there might be several factors at play.
An international research team led by the Geological Survey of Denmark and Greenland (GEUS) shows that increased concentrations of the toxic element mercury in the environment contributed to the mass extinction. The team recently published its findings in Science Advances.
"By looking at fern spores in sediments from the mass extinction, it was evident that these ferns were negatively affected by the mercury levels. Since mercury is accumulated in the food chain, it seems likely that other species have suffered as well," says lead scientist Sofie Lindström.
"These results suggest that the end-Triassic mass extinction was not just caused by greenhouse gases from volcanoes causing global climate change, but that they also emitted toxins such as mercury wreaking havoc," she says.
The mercury-volcano link
One of the co-authors of the study, Professor Hamed Sanei from Aarhus University, has previously demonstrated increased mercury levels from volcanism in a Large Igneous Province (LIP) during the most severe mass extinction known, the end-Permian crisis, where perhaps as much as 95% of life on Earth disappeared. Volcanic activity in LIPs is thought to be responsible for four of the five largest mass extinctions during the last 500 million years.
"Prior to industrialism, volcanic activity was the major release mechanism of large amounts of mercury from the Earth's crust. That makes it possible to use mercury in sediments to trace major volcanic activity in the Earth's past and in extent tie the extinctions of fossil organisms to LIP volcanism," Hamed Sanei explains.
Other previous studies have shown elevated mercury concentrations in Triassic-Jurassic boundary sediments over a very large area stretching from Argentina to Greenland and from Nevada to Austria, which made the team curious about mercury's role in the end-Triassic event.
"We decided to examine whether mercury could have played a role," Hamed Sanei says.
Fern spores as indicators
When the team examined fern spores from core samples dating to 201 million years ago, at the end of the Triassic, they indeed saw a link between increased mercury levels and mutations in the spores.
"During the mass extinction the mutated spores become increasingly common, and in turn the mutations get more and more severe. In some of my counts I found almost only mutated spores and no normal ones, which is very unusual," Sofie Lindström explains.
This rise in mutations happened during a period of increased volcanic activity in a LIP called the Central Atlantic Magmatic Province (CAMP), which led to rising mercury levels. Since mercury is a mutagenic toxin, its increased distribution from the volcanic activity could help to explain the sudden deterioration of the ecosystem. Therefore, the fern spores could serve as indicators of increased mercury poisoning.
"This could hint to that the whole food chain might have been negatively affected," says Sofie Lindström.
Previous studies have found increased amounts of malformed pollen during the end-Permian mass extinction 252 million years ago, which like the end-Triassic crisis is blamed on volcanism. These studies have suggested that the mutations during the end-Permian crisis were caused by increased UVB radiation, due to thinning of the ozone layer from the volcanism.
"This could also be a possible explanation for the mutations that we see during the end-Triassic crisis," explains co-author Bas van de Schootbrugge from Utrecht University. "However, in our study we found only low amounts of mutated pollen, and during the end-Permian crisis spores do not appear to exhibit the same types of malformations registered during the end-Triassic mass extinction. This may indicate different causes for the plant mutations at the two events."
Not a simple explanation
However, it is important not to lock on to just one cause when looking at a global crisis such as the end-Triassic event, says Sofie Lindström:
"Generally, we prefer simple explanations to mass extinctions such as meteorite impacts or climate change, but I don't think it's that simple. As our study suggests there could very well be a cocktail effect of CO2 and global warming, toxins like mercury, and other factors as well."
Most of the prehistoric mass extinctions have indeed come in the wake of LIP volcanism, causing climate change and emitting toxic substances, Sofie Lindström says.
"Still, it is very difficult to say how big the importance of one factor is, because mass extinctions like this are very likely very complex events. Our study shows that mercury affected the ferns and likely also other plants, and it may also have had an impact on the entire food chain."
Read more at Science Daily
Giant radio galaxies defy conventional wisdom
Conventional wisdom tells us that large objects appear smaller as they get farther from us, but this fundamental law of classical physics is reversed when we observe the distant universe.
Astrophysicists at the University of Kent simulated the development of the biggest objects in the universe to help explain how galaxies and other cosmic bodies were formed. By looking at the distant universe, it is possible to observe it in a past state, when it was still at a formative stage. At that time, galaxies were growing and supermassive black holes were violently expelling enormous amounts of gas and energy. This matter accumulated into pairs of reservoirs, which formed the biggest objects in the universe, so-called giant radio galaxies. These giant radio galaxies stretch across a large part of the Universe. Even moving at the speed of light, it would take several million years to cross one.
Professor Michael D. Smith of the Centre for Astrophysics and Planetary Science, and student Justin Donohoe collaborated on the research. They expected to find that as they simulated objects farther into the distant universe, they would appear smaller, but in fact they found the opposite.
Professor Smith said: 'When we look far into the distant universe, we are observing objects way in the past -- when they were young. We expected to find that these distant giants would appear as a comparatively small pair of vague lobes. To our surprise, we found that these giants still appear enormous even though they are so far away.'
Radio galaxies have long been known to be powered by twin jets which inflate their lobes and create giant cavities. The team performed simulations using the Forge supercomputer, generating three-dimensional hydrodynamics that recreated the effects of these jets. They then compared the resulting images to observations of the distant galaxies. Differences were assessed using a new classification index, the Limb Brightening Index (LB Index), which measures changes to the orientation and size of the objects.
Read more at Science Daily
Biomarker for schizophrenia can be detected in human hair
Working with model mice, post-mortem human brains, and people with schizophrenia, researchers at the RIKEN Center for Brain Science in Japan have discovered that a subtype of schizophrenia is related to abnormally high levels of hydrogen sulfide in the brain. Experiments showed that this abnormality likely results from a DNA-modifying reaction during development that lasts throughout life. In addition to providing a new direction for research into drug therapies, higher-than-normal levels of the hydrogen sulfide-producing enzyme can act as a biomarker for this type of schizophrenia.
Diagnosing disorders of thought is easier when a reliable and objective marker can be found. In the case of schizophrenia, we have known for more than 30 years that it is associated with an abnormal startle response. Normally, we are not startled as much by a burst of noise if a smaller burst -- called a prepulse -- comes a little bit earlier. This phenomenon is called prepulse inhibition (PPI) because the early pulse inhibits the startle response. In people with schizophrenia, PPI is lowered, meaning that their startle response is not dampened as much as it should be after the prepulse.
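PPI is commonly reported as the percentage by which the prepulse reduces the startle response; the sketch below uses that standard formula with invented startle magnitudes, not values from the study.

```python
# PPI as the percentage reduction in startle magnitude caused by the prepulse
# (a generic sketch; the startle values below are made up).

def prepulse_inhibition(startle_alone, startle_with_prepulse):
    return 100.0 * (1.0 - startle_with_prepulse / startle_alone)

print(prepulse_inhibition(100, 40))  # robust inhibition: 60.0 percent
print(prepulse_inhibition(100, 80))  # blunted inhibition: 20.0 percent,
                                     # the pattern associated with schizophrenia
```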
The PPI test is a good behavioral marker, and although it cannot directly help us understand the biology behind schizophrenia, it was the starting point that led to current discoveries.
The researchers at RIKEN CBS first looked for differences in protein expression between strains of mice that exhibit extremely low or extremely high PPI. Ultimately, they found that the enzyme Mpst was expressed much more in the brains of the mouse strain with low PPI than in the strain with high PPI. Knowing that this enzyme helps produce hydrogen sulfide, the team then measured hydrogen sulfide levels and found that they were higher in the low-PPI mice.
"Nobody has ever thought about a causal link between hydrogen sulfide and schizophrenia," says team leader Takeo Toshikawa. "Once we discovered this, we had to figure out how it happens and if these findings in mice would hold true for people with schizophrenia."
First, to be sure that Mpst was the culprit, the researchers created an Mpst knockout version of the low-PPI mice and showed that their PPI was higher than that in regular low-PPI mice. Thus, reducing the amount of Mpst helped the mice become more normal. Next, they found that MPST gene expression was indeed higher in postmortem brains from people with schizophrenia than in those from unaffected people. MPST protein levels in these brains also correlated well with the severity of premortem symptoms.
Now the team had enough information to look at MPST expression as a biomarker for schizophrenia. They examined hair follicles from more than 150 people with schizophrenia and found that expression of MPST mRNA was much higher than in people without schizophrenia. Even though the results were not perfect -- indicating that sulfide stress does not account for all cases of schizophrenia -- MPST levels in hair could be a good biomarker for schizophrenia before other symptoms appear.
Whether a person develops schizophrenia is related to both their genetics and the environment. Testing in mice and postmortem brains indicated that high MPST levels were associated with changes in DNA that lead to permanently altered gene expression. So, the next step was for the team to search for environmental factors that could result in permanently increased MPST production.
Because hydrogen sulfide can actually protect against inflammatory stress, the group hypothesized that inflammatory stress during early development might be the root cause. "We found that anti-oxidative markers -- including the production of hydrogen sulfide -- that compensate against oxidative stress and neuroinflammation during brain development were correlated with MPST levels in the brains of people with schizophrenia," says Yoshikawa.
He proposes that once excess hydrogen sulfide production is primed, it persists throughout life due to permanent epigenetic changes to DNA, leading to "sulfide stress" induced schizophrenia.
Read more at Science Daily
Dolphins demonstrate coordinated cooperation
A pod of bottlenose dolphins.
But much of the reporting comes from observations of terrestrial animals, with comparably little data on aquatic species. One notable example is the dolphin. Dolphins are well known to operate in social groups -- a group of dolphins is called a pod -- in a 'fission-fusion society', where groups merge and split over time. Previous studies have even suggested that dolphins may understand a partner's role in cooperative tasks.
However, due to the complex mechanics of conventional experiments, it was difficult to determine how this behavior was characterized in dolphins.
Researchers at Kyoto University's Primate Research Institute, Kindai University, and Kagoshima City Aquarium decided to investigate such behavior by simplifying the previous experimental conditions. Their report was published in the journal PeerJ.
"In our investigation, we wanted to find out how bottlenose dolphins coordinate their cooperative behavior. Our setup was the Hirata's rope-pulling task: where two dolphins pull on opposite ends of a rope simultaneously to receive rewards." explains first author Chisato Yamamoto.
The Hirata task, or the cooperative pulling paradigm, has been used to demonstrate that a significant number of animals -- including chimpanzees, dogs, and elephants -- have cooperative abilities.
And it appears dolphins are just as cooperative. In their test, the researchers first sent an initiator toward the task; a follower was sent a few seconds later. They observed that the initiator waited for its partner to reach the task, and that the follower coordinated its swimming speed to match the initiator's behavior.
"Having initiators and followers coordinate behavior for a task has previously been observed in chimpanzees and orangutans," continues Yamamoto. "But dolphins appear to be more flexible in their coordination, capable of changing their actions depending on where their partner is."
Team leader Masaki Tomonaga explains that this coordination is likely rooted in their patterns of affiliative behavior, a form of social interaction that functions to reinforce social bonds within a group.
Read more at Science Daily
Oct 27, 2019
By targeting flu-enabling protein, antibody may protect against wide-ranging strains
Influenza virus illustration.
The study, which Scripps Research conducted jointly with Washington University School of Medicine in St. Louis and Icahn School of Medicine at Mount Sinai in New York, points to a new approach to tackle severe cases of the flu, including pandemics. The research is published in the Oct. 25 issue of Science.
Scripps Research's Ian Wilson, DPhil, one of three senior co-authors, says the antibody at the center of the study binds to a protein called neuraminidase, which is essential for the flu virus to replicate in the body.
The protein, located on the surface of the virus, enables infected host cells to release the virus so it can spread to other cells. Tamiflu, the most widely used drug for severe flu infection, works by inactivating neuraminidase. However, many forms of neuraminidase exist, depending on the flu strain, and such drugs aren't always effective -- particularly as resistance to the drugs is developing.
"There are many strains of influenza virus that circulate so every year we have to design and produce a new vaccine to match the most common strains of that year," says co-senior author Ali Ellebedy, PhD, an assistant professor of pathology and immunology at Washington University. "Now imagine if we could have one vaccine that protected against all influenza strains, including human, swine and other highly lethal avian influenza viruses. This antibody could be the key to design of a truly universal vaccine."
Ellebedy discovered the antibody -- an immune molecule that recognizes and attaches to a foreign molecule -- in blood taken from a patient hospitalized with flu at Barnes-Jewish Hospital in St. Louis in the winter of 2017.
Ellebedy was working on a study analyzing the immune response to flu infection in humans in collaboration with the Washington University Emergency Care and Research Core, which was sending him blood samples from consenting flu patients. He quickly noticed that a particular blood sample was unusual: In addition to containing antibodies against hemagglutinin, the major protein on the surface of the virus, it contained other antibodies that were clearly targeting something else.
"At the time we were just starting, and I was setting up my lab so we didn't have the tools to look at what else the antibodies could be targeting," says Ellebedy, an assistant professor of medicine and of molecular microbiology.
He sent three of the antibodies to co-senior author Florian Krammer, PhD, a microbiology professor at the Icahn School of Medicine at Mount Sinai. An expert on neuraminidase, Krammer tested the antibodies against his extensive library of neuraminidase proteins. At least one of the three antibodies blocked neuraminidase activity in all known types of neuraminidase in flu viruses, representing a variety of human and nonhuman strains.
"The breadth of the antibodies really came as a surprise to us," says Krammer. "Typically, anti-neuraminidase antibodies can be broad within a subtype, like H1N1, but an antibody with potent activity across subtypes was unheard of. At first, we did not believe our results. Especially the ability of the antibodies to cross between influenza A and influenza B viruses is just mind-boggling. It is amazing what the human immune system is capable of if presented with the right antigens."
To find out whether the antibodies could be used to treat severe cases of flu, Krammer and colleagues tested them in mice that were given a lethal dose of influenza virus. All three antibodies were effective against many strains, and one antibody, called "1G01," protected against all 12 strains tested, which included all three groups of human flu virus as well as avian and other nonhuman strains.
"All the mice survived, even if they were given the antibody 72 hours after infection," Ellebedy says. "They definitely got sick and lost weight, but we still saved them. It was remarkable. It made us think that you might be able to use this antibody in an intensive care scenario when you have someone sick with flu and it's too late to use Tamiflu."
Tamiflu must be administered within 24 hours of symptoms. A drug that could be used later would help many people diagnosed after the Tamiflu window has closed. But before the researchers could even think of designing such a drug based on the antibody, they needed to understand how it was interfering with neuraminidase.
They turned to Scripps Research's Wilson, known globally for his work as a structural biologist. Wilson is Chair of the Institute's Department of Integrative Structural and Computational Biology, and has made numerous seminal findings that have shaped efforts to develop universal vaccines for flu and other complex viruses such as HIV.
Wilson and Xueyong Zhu, PhD, a staff scientist in Wilson's lab, mapped the structures of the antibodies while they were bound to neuraminidase. They found that the antibodies each had a loop that slid inside the active site of neuraminidase like a stick between gears. The loops prevented neuraminidase from releasing new virus particles from the surface of cells, thereby breaking the cycle of viral production in host cells.
"We were surprised at how these antibodies managed to insert a single loop into the conserved active site without contacting the surrounding hypervariable regions, thereby achieving much greater breadth against the neuraminidase of different influenza viruses than we have seen before," Wilson says.
The structures showed that the antibodies provide such broad protection because they target the conserved residues in the active site of the neuraminidase protein. That site stays much the same across distantly related flu strains because even minor changes could abolish the protein's ability to do its job, thereby preventing the virus from replicating.
Read more at Science Daily
Engineers develop a new way to remove carbon dioxide from air
A new way of removing carbon dioxide from a stream of air could provide a significant tool in the battle against climate change. The new system can work on the gas at virtually any concentration level, even down to the roughly 400 parts per million currently found in the atmosphere.
Most methods of removing carbon dioxide from a stream of gas require higher concentrations, such as those found in the flue emissions from fossil fuel-based power plants. A few variations have been developed that can work with the low concentrations found in air, but the new method is significantly less energy-intensive and expensive, the researchers say.
The technique, based on passing air through a stack of charged electrochemical plates, is described in a new paper in the journal Energy and Environmental Science, by MIT postdoc Sahag Voskian, who developed the work during his PhD, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering.
The device is essentially a large, specialized battery that absorbs carbon dioxide from the air (or other gas stream) passing over its electrodes as it is being charged up, and then releases the gas as it is being discharged. In operation, the device would simply alternate between charging and discharging, with fresh air or feed gas being blown through the system during the charging cycle, and then the pure, concentrated carbon dioxide being blown out during the discharging.
As the battery charges, an electrochemical reaction takes place at the surface of each of a stack of electrodes. These are coated with a compound called polyanthraquinone, which is composited with carbon nanotubes. The electrodes have a natural affinity for carbon dioxide and readily react with its molecules in the airstream or feed gas, even when it is present at very low concentrations. The reverse reaction takes place when the battery is discharged -- during which the device can provide part of the power needed for the whole system -- and in the process ejects a stream of pure carbon dioxide. The whole system operates at room temperature and normal air pressure.
"The greatest advantage of this technology over most other carbon capture or carbon absorbing technologies is the binary nature of the adsorbent's affinity to carbon dioxide," explains Voskian. In other words, the electrode material, by its nature, "has either a high affinity or no affinity whatsoever," depending on the battery's state of charging or discharging. Other reactions used for carbon capture require intermediate chemical processing steps or the input of significant energy such as heat, or pressure differences.
"This binary affinity allows capture of carbon dioxide from any concentration, including 400 parts per million, and allows its release into any carrier stream, including 100 percent CO2," Voskian says. That is, as any gas flows through the stack of these flat electrochemical cells, during the release step the captured carbon dioxide will be carried along with it. For example, if the desired end-product is pure carbon dioxide to be used in the carbonation of beverages, then a stream of the pure gas can be blown through the plates. The captured gas is then released from the plates and joins the stream.
In some soft-drink bottling plants, fossil fuel is burned to generate the carbon dioxide needed to give the drinks their fizz. Similarly, some farmers burn natural gas to produce carbon dioxide to feed their plants in greenhouses. The new system could eliminate that need for fossil fuels in these applications, and in the process actually be taking the greenhouse gas right out of the air, Voskian says. Alternatively, the pure carbon dioxide stream could be compressed and injected underground for long-term disposal, or even made into fuel through a series of chemical and electrochemical processes.
The process this system uses for capturing and releasing carbon dioxide "is revolutionary," he says. "All of this is at ambient conditions -- there's no need for thermal, pressure, or chemical input. It's just these very thin sheets, with both surfaces active, that can be stacked in a box and connected to a source of electricity."
"In my laboratories, we have been striving to develop new technologies to tackle a range of environmental issues that avoid the need for thermal energy sources, changes in system pressure, or addition of chemicals to complete the separation and release cycles," Hatton says. "This carbon dioxide capture technology is a clear demonstration of the power of electrochemical approaches that require only small swings in voltage to drive the separations."
In a working plant -- for example, in a power plant where exhaust gas is being produced continuously -- two sets of such stacks of the electrochemical cells could be set up side by side to operate in parallel, with flue gas being directed first at one set for carbon capture, then diverted to the second set while the first set goes into its discharge cycle. By alternating back and forth, the system could always be both capturing and discharging the gas. In the lab, the team has proven the system can withstand at least 7,000 charging-discharging cycles, with a 30 percent loss in efficiency over that time. The researchers estimate that they can readily improve that to 20,000 to 50,000 cycles.
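Some quick arithmetic on those durability figures, assuming purely for illustration that the loss compounds evenly across cycles:

```python
# A 30 percent efficiency loss over 7,000 cycles implies a very small average
# loss per cycle (assuming, for illustration, the loss compounds geometrically).
cycles = 7_000
overall_retention = 0.70  # 30 percent loss in total

per_cycle_retention = overall_retention ** (1 / cycles)
print(f"average retention per cycle: {per_cycle_retention:.6f}")     # ~0.999949
print(f"average loss per cycle:      {1 - per_cycle_retention:.4%}")  # ~0.0051%
```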
The electrodes themselves can be manufactured by standard chemical processing methods. While today this is done in a laboratory setting, it can be adapted so that ultimately they could be made in large quantities through a roll-to-roll manufacturing process similar to a newspaper printing press, Voskian says. "We have developed very cost-effective techniques," he says, estimating that it could be produced for something like tens of dollars per square meter of electrode.
Compared to other existing carbon capture technologies, this system is quite energy efficient, consistently using about one gigajoule of energy per ton of carbon dioxide captured. Other existing methods have energy consumption that varies between 1 and 10 gigajoules per ton, depending on the inlet carbon dioxide concentration, Voskian says.
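Converted into kilowatt-hours, the comparison is easier to picture; the calculation below uses only the numbers quoted above.

```python
# Converting the quoted energy figures to kilowatt-hours per ton of CO2:
# 1 GJ = 1e9 J and 1 kWh = 3.6e6 J.
def gj_to_kwh(gj):
    return gj * 1e9 / 3.6e6

print(f"This system:   about {gj_to_kwh(1):.0f} kWh per ton of CO2")
print(f"Other methods: {gj_to_kwh(1):.0f} to {gj_to_kwh(10):.0f} kWh per ton, "
      "depending on inlet concentration")
```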
Read more at Science Daily