Some of the effects of air pollution on health are well documented (lung cancer, stroke and respiratory diseases, among others), but for other outcomes there is less scientific evidence. Bone health is one such case: there are only a few studies, and their results are inconclusive. Now, a study in India led by the Barcelona Institute for Global Health (ISGlobal), an institution supported by "la Caixa," has found an association between exposure to air pollution and poor bone health.
Osteoporosis is a disease in which the density and quality of bone are reduced. Globally, it is responsible for a substantial burden of disease, and its prevalence is expected to increase as populations age.
The new study, performed by the CHAI Project, led by ISGlobal and published in JAMA Network Open, analysed the association between air pollution and bone health in over 3,700 people from 28 villages outside the city of Hyderabad, in southern India.
The authors used a locally developed model to estimate residential outdoor exposure to air pollution from fine particulate matter (suspended particles with a diameter of 2.5 µm or less) and black carbon. The participants also filled in a questionnaire on the type of fuel used for cooking. The authors linked this information with bone health, assessed using dual-energy x-ray absorptiometry, a type of radiography that measures bone density; bone mass was measured at the lumbar spine and the left hip.
The results showed that exposure to ambient air pollution, particularly to fine particles, was associated with lower levels of bone mass. No correlation was found with use of biomass fuel for cooking.
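To make this type of analysis concrete, here is a minimal sketch of an adjusted linear regression testing such an association. All numbers, variable names and covariates below are invented for illustration; this is not the study's actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3700  # roughly the study's sample size
df = pd.DataFrame({
    "pm25": rng.normal(32.8, 5.0, n),  # annual PM2.5 exposure, ug/m3 (simulated)
    "age": rng.uniform(20, 70, n),
    "female": rng.integers(0, 2, n),
})
# Simulated outcome: bone mineral density (g/cm2), declining slightly with PM2.5
df["bmd"] = (1.05 - 0.002 * df["pm25"] - 0.003 * df["age"]
             - 0.05 * df["female"] + rng.normal(0, 0.08, n))

# Linear model of bone density on exposure, adjusted for covariates
model = smf.ols("bmd ~ pm25 + age + female", data=df).fit()
print(model.params["pm25"], model.conf_int().loc["pm25"].tolist())
```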
"This study contributes to the limited and inconclusive literature on air pollution and bone health," explains Otavio T. Ranzani, ISGlobal researcher and first author of the study. Regarding the possible mechanisms underlying this association, he says "inhalation of polluting particles could lead to bone mass loss through the oxidative stress and inflammation caused by air pollution."
Annual average exposure to ambient PM2.5 was 32.8 µg/m³, far above the maximum levels recommended by the World Health Organisation (10 µg/m³). 58% of participants used biomass fuel for cooking.
Read more at Science Daily
Jan 4, 2020
Breakthrough study on molecular interactions could improve development of new medicines
A first-of-its-kind study on molecular interactions by biomedical engineers in the University of Minnesota's College of Science and Engineering will make it easier and more efficient for scientists to develop new medicines and other therapies for diseases such as cancer, HIV and autoimmune diseases.
The study resulted in a mathematical framework that simulates the effects of the key parameters that control interactions between molecules that have multiple binding sites, as is the case for many medicines. Researchers plan to use this computational model to develop a web-based app that other researchers can use to speed the development of new therapies for diseases.
The research is published in the Proceedings of the National Academy of Sciences (PNAS).
"The big advance with this study is that usually researchers use a trial-and-error experimental method in the lab for studying these kinds of molecular interactions, but here we developed a mathematical model where we know the parameters so we can make accurate predictions using a computer," said Casim Sarkar, a University of Minnesota biomedical engineering associate professor and senior author of the study. "This computational model will make research much more efficient and could accelerate the creation of new therapies for many kinds of diseases."
The research team studied three main parameters of molecular interactions -- binding strength of each site, rigidity of the linkages between the sites, and the size of the linkage arrays. They looked at how these three parameters can be "dialed up" or "dialed down" to control how molecule chains with two or three binding sites interact with one another. The team then confirmed their model predictions in lab experiments.
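A toy calculation shows how such dials interact. The sketch below is not the published framework; it uses a standard avidity approximation in which the first contact pays the full intermolecular binding cost and each additional tethered site binds with an effective local concentration set by the linker. Every parameter value is an assumption.

```python
# Toy multivalent-binding model (illustrative only).
# kd_site: intrinsic dissociation constant of one site (M).
# c_eff:   effective local concentration (M) a tethered site sees once the
#          first site is bound; stiffer/shorter linkers raise this "dial".

def effective_kd(kd_site: float, c_eff: float, n_sites: int) -> float:
    """Crude overall Kd for an n-valent interaction: the first contact is
    intermolecular, each later contact is intramolecular and enhanced by
    a factor of c_eff / kd_site."""
    enhancement = (c_eff / kd_site) ** (n_sites - 1)
    return kd_site / enhancement

for n_sites in (1, 2, 3):
    print(n_sites, effective_kd(kd_site=1e-6, c_eff=1e-3, n_sites=n_sites))
# 1 site: 1e-06 M; 2 sites: 1e-09 M; 3 sites: 1e-12 M -- dialing up valence
# or linker rigidity (via c_eff) sharply strengthens the overall interaction.
```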
"At a fundamental level, many diseases can be traced to a molecule not binding correctly," said Wesley Errington, a University of Minnesota biomedical engineering postdoctoral researcher and lead author of the study. "By understanding how we can manipulate these 'dials' that control molecular behavior, we have developed a new programming language that can be used to predict how molecules will bind."
The need for a mathematical framework to decode this programming language is highlighted by the researchers' finding that, even when the interacting molecule chains have just three binding sites each, there are a total of 78 unique binding configurations, most of which cannot be experimentally observed. By dialing the parameters in this new mathematical model, researchers can quickly understand how these different binding configurations are affected, and tune them for a wide range of biological and medical applications.
Read more at Science Daily
Jan 3, 2020
Climate signals detected in global weather
In October 2019, weather researchers in Utah measured the lowest temperature ever recorded in the month of October in the US (excluding Alaska): -37.1°C. The previous low-temperature record for October was -35°C, and people wondered what had happened to climate change.
Until now, climate researchers have responded that climate is not the same thing as weather. Climate is what we expect in the long term, whereas weather is what we get in the short term -- and since local weather conditions are highly variable, it can be very cold in one location for a short time despite long-term global warming. In short, the variability of local weather masks long-term trends in global climate.
A paradigm shift
Now, however, a group led by ETH professor Reto Knutti has conducted a new analysis of temperature measurements and models. The scientists concluded that the weather-is-not-climate paradigm is no longer applicable in that form. According to the researchers, the climate signal -- that is, the long-term warming trend -- can actually be discerned in daily weather data, such as surface air temperature and humidity, provided that global spatial patterns are taken into account.
In plain English, this means that -- despite global warming -- there may well be a record low temperature in October in the US. If it is simultaneously warmer than average in other regions, however, this deviation is almost completely eliminated. "Uncovering the climate change signal in daily weather conditions calls for a global perspective, not a regional one," says Sebastian Sippel, a postdoc working in Knutti's research group and lead author of a study recently published in Nature Climate Change.
Statistical learning techniques extract climate change signature
In order to detect the climate signal in daily weather records, Sippel and his colleagues used statistical learning techniques to combine simulations with climate models and data from measuring stations. Statistical learning techniques can extract a "fingerprint" of climate change from the combination of temperatures of various regions and the ratio of expected warming and variability. By systematically evaluating the model simulations, they can identify the climate fingerprint in the global measurement data on any single day since spring 2012.
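The idea can be reproduced in miniature. In the sketch below (all data simulated; this is not the study's models or code), a regularized regression learns to recover a slow forced signal from noisy daily "temperature maps," which is conceptually what the fingerprint extraction does.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_days, n_cells = 2000, 500

pattern = rng.normal(0, 1, n_cells)          # fixed spatial warming pattern
signal = np.linspace(0.0, 1.0, n_days)       # slow global warming trend
noise = rng.normal(0, 5, (n_days, n_cells))  # large day-to-day weather noise

X = signal[:, None] * pattern[None, :] + noise  # daily "temperature maps"
reg = Ridge(alpha=10.0).fit(X, signal)          # learn the fingerprint

# Despite noise five times larger than the signal, the spatial pattern lets
# the regression track the forced trend in single days:
print(np.corrcoef(reg.predict(X), signal)[0, 1])
```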
A comparison of the variability of local and global daily mean temperatures shows why the global perspective is important. Whereas locally measured daily mean temperatures can fluctuate widely (even after the seasonal cycle is removed), global daily mean values show a very narrow range.
If the distributions of global daily mean values from 1951 to 1980 are then compared with those from 2009 to 2018, the two distributions (bell curves) barely overlap. The climate signal is thus prominent in the global values but obscured in the local values, where the distributions for the two periods overlap considerably.
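A few lines of simulation show why spatial averaging separates the two bell curves: the day-to-day spread of a mean over many locations shrinks with the square root of the number of locations. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
days, stations = 3650, 1000
warming = 1.0  # assumed mean warming (deg C) between the two periods

early = rng.normal(0.0, 5.0, (days, stations))     # 1951-1980-like anomalies
late = rng.normal(warming, 5.0, (days, stations))  # 2009-2018-like anomalies

for label, a, b in [("local (one station)", early[:, 0], late[:, 0]),
                    ("global (station mean)", early.mean(1), late.mean(1))]:
    sep = (b.mean() - a.mean()) / np.sqrt(0.5 * (a.var() + b.var()))
    print(label, round(sep, 1))  # standardized distance between the two bells
# Locally the distributions sit ~0.2 standard deviations apart (heavy overlap);
# globally they sit several standard deviations apart (almost no overlap).
```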
Application to the hydrological cycle
The findings could have broad implications for climate science. "Weather at the global level carries important information about climate," says Knutti. "This information could, for example, be used for further studies that quantify changes in the probability of extreme weather events, such as regional cold spells. These studies are based on model calculations, and our approach could then provide a global context of the climate change fingerprint in observations made during regional cold spells of this kind. This gives rise to new opportunities for the communication of regional weather events against the backdrop of global warming."
Read more at Science Daily
How fish fins evolved just before the transition to land
Research on fossilized fish from the late Devonian period, roughly 375 million years ago, details the evolution of fins as they began to transition into limbs fit for walking on land.
The new study by paleontologists from the University of Chicago, published this week in the Proceedings of the National Academy of Sciences, uses CT scanning to examine the shape and structure of fin rays while still encased in surrounding rock. The imaging tools allowed the researchers to construct digital 3D models of the entire fin of the fishapod Tiktaalik roseae and its relatives in the fossil record for the first time. They could then use these models to infer how the fins worked and changed as they evolved into limbs.
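For readers curious about the imaging step, the sketch below shows the generic technique: thresholding a CT volume to separate dense fossil bone from the surrounding rock matrix and extracting a surface mesh. The volume is synthetic and the numbers are assumptions; this is not the team's actual pipeline.

```python
import numpy as np
from skimage import measure

# Synthetic CT volume: a dense cylindrical "fin ray" embedded in rock
rng = np.random.default_rng(3)
vol = rng.normal(0.2, 0.05, (64, 64, 64))  # low-density rock matrix
zz, yy, xx = np.mgrid[:64, :64, :64]
ray = ((xx - 32) ** 2 + (yy - 32) ** 2 < 25) & (zz > 8) & (zz < 56)
vol[ray] += 0.8                            # fossil bone shows up denser in CT

# Threshold between matrix and bone densities, then extract a triangle mesh
verts, faces, normals, values = measure.marching_cubes(vol, level=0.6)
print(verts.shape, faces.shape)  # vertices and triangles of the 3D model
```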
Much of the research on fins during this key transitional stage focuses on the large, distinct bones and pieces of cartilage that correspond to those of our upper arm, forearm, wrist, and digits. Known as the "endoskeleton," researchers trace how these bones changed to become recognizable arms, legs and fingers in tetrapods, or four-legged creatures.
The delicate rays and spines of a fish's fins form a second, no less important "dermal" skeleton, which was also undergoing evolutionary changes in this period. These pieces are often overlooked because they can fall apart when the animals are fossilized or because they are removed intentionally by fossil preparators to reveal the larger bones of the endoskeleton. Dermal rays form most of the surface area of many fish fins but were completely lost in the earliest creatures with limbs.
"We're trying to understand the general trends and evolution of the dermal skeleton before all those other changes happened and fully-fledged limbs evolved," said Thomas Stewart, PhD, a postdoctoral researcher who led the new study. "If you want to understand how animals were evolving to use their fins in this part of history, this is an important data set."
Seeing ancient fins in 3D
Stewart and his colleagues worked with three late Devonian fishes with primitive features of tetrapods: Sauripterus taylori, Eusthenopteron foordi and Tiktaalik roseae, which was discovered in 2006 by a team led by UChicago paleontologist Neil Shubin, PhD, the senior author of the new study. Sauripterus and Eusthenopteron were believed to have been fully aquatic and used their pectoral fins for swimming, although they may have been able to prop themselves up on the bottom of lakes and streams. Tiktaalik may have been able to support most of its weight with its fins and perhaps even used them to venture out of the water for short trips across shallows and mudflats.
"By seeing the entire fin of Tiktaalik we gain a clearer picture of how it propped itself up and moved about. The fin had a kind of palm that could lie flush against the muddy bottoms of rivers and streams," Shubin said.
Stewart and Shubin worked with undergraduate student Ihna Yoo and Justin Lemberg, PhD, another researcher in Shubin's lab, to scan specimens of these fossils while they were still encased in rock. Using imaging software, they then reconstructed 3D models that allowed them to move, rotate and visualize the dermal skeleton as if it were completely extracted from the surrounding material.
The models showed that the fin rays of these animals were simplified, and the overall size of the fin web was smaller than that of their fishier predecessors. Surprisingly, they also saw that the top and bottom of the fins were becoming asymmetric. Fin rays are actually formed by pairs of bones. In Eusthenopteron, for example, the dorsal, or top, fin ray was slightly larger and longer than the ventral, or bottom one. Tiktaalik's dorsal rays were several times larger than its ventral rays, suggesting that it had muscles that extended on the underside of its fins, like the fleshy base of the palm, to help support its weight.
"This provides further information that allows us to understand how an animal like Tiktaalik was using its fins in this transition," Stewart said. "Animals went from swimming freely and using their fins to control the flow of water around them, to becoming adapted to pushing off against the surface at the bottom of the water."
Stewart and his colleagues also compared the dermal skeletons of living fish like sturgeon and lungfish to understand the patterns they were seeing in the fossils. They saw some of the same asymmetrical differences between the top and bottom of the fins, suggesting that those changes played a larger role in the evolution of fishes.
Read more at Science Daily
A close look at thin ice
On frigid days, water vapor in the air can transform directly into solid ice, depositing a thin layer on surfaces such as a windowpane or car windshield. Though commonplace, this process is one that has kept physicists and chemists busy figuring out the details for decades.
In a new Nature paper, an international team of scientists describe the first-ever visualization of the atomic structure of two-dimensional ice as it formed. Insights from the findings, which were driven by computer simulations that inspired experimental work, may one day inform the design of materials that make ice removal a simpler and less costly process.
"One of the things that I find very exciting is that this challenges the traditional view of how ice grows," says Joseph S. Francisco, an atmospheric chemist at the University of Pennsylvania and an author on the paper.
"Knowing the structure is very important," adds coauthor Chongqin Zhu, a postdoctoral fellow in Francisco's group who led much of the computational work for the study. "Low-dimensional water is ubiquitous in nature and plays a critical role in an incredibly broad spectrum of sciences, including materials science, chemistry, biology, and atmospheric science.
"It also has practical significance. For example, removing ice is critical when it comes to things like wind turbines, which cannot function when they are covered in ice. If we understand the interaction between water and surfaces, then we might be able to develop new materials to make this ice removal easier."
In recent years, Francisco's lab has devoted considerable attention to studying the behavior of water, and specifically ice, at the interface of solid surfaces. What they've learned about ice's growth mechanisms and structures in this context helps them understand how ice behaves in more complex scenarios, like when interacting with other chemicals and water vapor in the atmosphere.
"We're interested in the chemistry of ice at the transition with the gas phase, as that's relevant to the reactions that are happening in our atmosphere," Francisco explains.
To understand basic principles of ice growth, researchers have entered this area of study by investigating two-dimensional structures: layers of ice that are only several water molecules thick.
In previous studies of two-dimensional ice, using computational methods and simulations, Francisco, Zhu, and colleagues showed that ice grows differently depending on whether a surface repels or attracts water, and the structure of that surface.
In the current work, they sought real-world verification of their simulations, reaching out to a team at Peking University to see if they could obtain images of two-dimensional ice.
The Peking team employed super-powerful atomic force microscopy, which uses a mechanical probe to "feel" the material being studied, translating the feedback into nanoscale-resolution images. Atomic force microscopy is capable of capturing structural information with a minimum of disruption to the material itself, allowing the scientists to identify even unstable intermediate structures that arose during the process of ice formation.
Virtually all naturally occurring ice on Earth is known as hexagonal ice for its six-sided structure. This is why snowflakes all have six-fold symmetry. One plane of hexagonal ice has a similar structure to that of two-dimensional ice and can terminate in two types of edges -- "zigzag" or "armchair." Usually this plane of natural ice terminates with zigzag edges.
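The zigzag/armchair distinction is a property of the hexagonal (honeycomb) network itself, familiar from materials like graphene. As a purely geometric illustration (not a model of ice chemistry), the sketch below builds a small honeycomb patch and flags its under-coordinated edge sites; cutting the lattice along one direction exposes zigzag edges, along the perpendicular direction armchair edges.

```python
import numpy as np

# Honeycomb lattice: two-site basis plus two lattice vectors
a1, a2 = np.array([1.5, np.sqrt(3) / 2]), np.array([1.5, -np.sqrt(3) / 2])
basis = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]

pts = np.array([i * a1 + j * a2 + b
                for i in range(6) for j in range(6) for b in basis])

# Nearest neighbors sit at distance 1; interior sites have three of them,
# edge sites fewer. The arrangement of 2-coordinated sites along a cut is
# what makes an edge "zigzag" or "armchair".
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coord = ((d > 1e-9) & (d < 1.01)).sum(axis=1)
print((coord == 3).sum(), "interior sites;", (coord < 3).sum(), "edge sites")
```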
However, when ice is grown in two dimensions, researchers find that the pattern of growth is different. The current work, for the first time, shows that the armchair edges can be stabilized and that their growth follows a novel reaction pathway.
"This is a totally different mechanism from what was known," Zhu says.
Although the zigzag growth patterns were previously believed to only have six-membered rings of water molecules, both Zhu's calculations and the atomic force microscopy revealed an intermediate stage where five-membered rings were present.
This result, the researchers say, may help explain the experimental observations reported in their 2017 PNAS paper, which found that ice could grow in two different ways on a surface, depending on the properties of that surface.
In addition to lending insight into future design of materials conducive to ice removal, the techniques used in the work are also applicable to probe the growth of a large family of two-dimensional materials beyond two-dimensional ices, thus opening a new avenue of visualizing the structure and dynamics of low-dimensional matter.
For chemist Jeffrey Saven, a professor in Penn Arts & Sciences who was not directly involved in the current work, the collaboration between the theorists in Francisco's group and their colleagues in China called to mind a parable he learned from a mentor during his training.
"An experimentalist is talking with theorists about data collected in the lab. The mediocre theorist says, 'I can't really explain your data.' The good theorist says, 'I have a theory that fits your data.' The great theorist says, 'That's interesting, but here is the experiment you should be doing and why.'"
To build on this successful partnership, Zhu, Francisco, and their colleagues are embarking on theoretical and experimental work to begin to fill in the gaps related to how two-dimensional ice builds into three dimensions.
"The two-dimensional work is fundamental to laying the background," says Francisco. "And having the calculations verified by experiments is so good, because that allows us to go back to the calculations and take the next bold step toward three dimensions."
Read more at Science Daily
How the brain can create sound information via lip-reading
Brain activity synchronizes with sound waves, even without audible sound, through lip-reading, according to new research published in JNeurosci.
Listening to speech activates our auditory cortex to synchronize with the rhythm of incoming sound waves. Lip-reading is a useful aid to comprehending unintelligible speech, but we still don't know how it helps the brain process sound.
Bourguignon et al. used magnetoencephalography to measure brain activity in healthy adults while they listened to a story or watched a silent video of a woman speaking. The participants' auditory cortices synchronized with sound waves produced by the woman in the video, even though they could not hear it.
The synchronization resembled that seen in participants who actually listened to the story, indicating that the brain can glean auditory information from the visual cues available through lip-reading. The researchers suggest this ability arises from activity in the visual cortex synchronizing with lip movement. This signal is sent to other brain areas that translate the movement information into sound information, creating the sound wave synchronization.
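The synchronization measure behind findings like this is spectral coherence between the speech envelope and the recorded brain signal. Here is a self-contained sketch with simulated signals; every parameter is invented for illustration.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs, seconds = 1000, 60
t = np.arange(fs * seconds) / fs

# Simulated speech envelope: a slow rhythm near the syllable rate (~4 Hz)
envelope = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.normal(size=t.size)

# Simulated cortical signal that partially tracks the envelope, plus noise
brain = 0.3 * envelope + rng.normal(size=t.size)

f, cxy = coherence(envelope, brain, fs=fs, nperseg=4096)
print(f[np.argmax(cxy)], cxy.max())  # coherence peaks near 4 Hz
```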
From Science Daily
Half the amount of chemo prevents testicular cancer from coming back, new trial shows
Testicular cancer can be prevented from coming back using half the amount of chemotherapy that is currently used, a new clinical trial has shown.
In many men who have had surgery for an aggressive form of testicular cancer, the disease can come back elsewhere in their bodies and need intensive treatment, often within two years after initial diagnosis.
The new trial showed that giving men one cycle of chemotherapy was as effective at preventing their testicular cancer from coming back as the standard two cycles.
Crucially, lowering the overall exposure to chemotherapy reduced the debilitating side effects which can have a lifelong impact on patients' health.
The 111 trial has already begun to change clinical practice, reducing the number of hospital admissions, and lowering the costs of treatment.
The trial, led by The Institute of Cancer Research, London, and University Hospitals Birmingham NHS Foundation Trust, involved nearly 250 men with early-stage testicular cancer at high risk of their cancer returning after surgery.
The research was published in the journal European Urology today (Thursday), and was funded by Cancer Research UK and the Queen Elizabeth Hospital Birmingham Charity.
Testicular cancer is the most common cancer affecting young men, with many patients being diagnosed in their twenties or thirties.
After surgery, patients are currently offered two cycles of chemotherapy to destroy any cancer cells that may have already spread, or a watch-and-wait approach -- where they receive no treatment unless their cancer comes back, at which point they are given three cycles of chemo.
Survival rates are very high, but as men are diagnosed young, if they choose to have chemotherapy they may have to live with long-term side effects for many decades.
In the new study, patients were given one three-week cycle of a chemotherapy known as BEP -- a combination of the drugs bleomycin, etoposide and the platinum agent cisplatin.
The researchers looked at the percentage of men whose testicular cancer returned within two years of being treated with one cycle of chemotherapy, and compared these relapse rates with established data from previous studies in patients who were given two cycles.
The researchers found that only three men -- 1.3 per cent -- saw their testicular cancer return after finishing treatment -- a nearly identical rate to previous studies using two cycles of BEP chemotherapy.
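As a plausibility check on a rate that low, one can attach an exact binomial confidence interval to it. The patient count below is back-calculated from the reported 1.3 per cent and is therefore an assumption, not a figure from the paper.

```python
from scipy.stats import beta

k = 3    # relapses observed
n = 231  # evaluable patients implied by 3 / 0.013 (assumption)

# Clopper-Pearson exact 95% confidence interval for the relapse rate
lower = beta.ppf(0.025, k, n - k + 1)
upper = beta.ppf(0.975, k + 1, n - k)
print(f"{k / n:.3f} (95% CI {lower:.3f}-{upper:.3f})")
```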
In the new study, 41 per cent of men receiving one cycle of chemotherapy experienced one or more serious side effects while receiving treatment, such as an increased risk of infection, sepsis or vomiting. But only a small number -- six patients, or 2.6 per cent -- experienced long-term side effects such as damage to their hearing.
It is well established that lower chemotherapy doses are related to reduced rates of side effects, and the researchers are confident that the rates found in this study are substantially lower than those currently seen in the clinic.
Professor Robert Huddart, Professor of Urological Cancer at The Institute of Cancer Research, London, and Consultant in Urological Oncology at The Royal Marsden NHS Foundation Trust, said:
"Men with testicular cancer who are at high risk of recurrence have generally been treated with two cycles of chemotherapy -- but our new study found that one cycle was enough to stop their tumour from coming back.
"Reducing the overall dose of chemotherapy could spare young men who have their whole lives ahead of them from long-term side effects, and also means they will need fewer hospital visits for their treatment.
"This new trial is already changing clinical practice on a global scale, and is set to improve patients' quality of life as well as reducing the cost of testicular cancer treatment.
"Reducing the number of cycles and the dosage of chemotherapy for testicular cancer could save the NHS money, and free up valuable hospital time and resources."
Kris Taylor, 35, from the West Midlands, was treated as part of the 111 trial at the Queen Elizabeth Hospital Birmingham after having surgery for his testicular cancer. He said:
"I was playing football semi-professionally at the time I was diagnosed. Even though my prognosis was good, knowing that you have cancer is really scary, and the key thing for me was to get back to normality as soon as possible. I'd already had to have time off for surgery, so, when I was offered the chance to have less chemo but with no greater risk the cancer would return, I jumped at it.
"The side effects of the treatment were really difficult, but I was straight back on the pitch as soon as it finished -- five years on, and I'm still fighting fit. It's great to know that others may now be able to benefit from the trial's findings. Being able to reduce the amount of chemotherapy a person receives can make such a big difference to their quality of life in both the short-term and the long-term."
Professor Emma Hall, Deputy Director of the Clinical Trials and Statistics Unit at The Institute of Cancer Research, London, said:
"We tend to be focused on whether we can cure a cancer or not, but for a disease like testicular cancer which affects young people, it is also crucial to ensure treatment does not leave patients with a lifetime of adverse effects.
"There is an important balance to be struck in giving men enough chemotherapy to stop their testicular cancer from coming back, without giving them so much that they suffer unnecessary side effects.
"Our study has found strong evidence to suggest that testicular cancer chemotherapy can be safely reduced from two cycles to just one -- making their treatment shorter, kinder and cheaper."
Martin Ledwick, Cancer Research UK's head information nurse, said:
"Thanks to advances in treatments, survival for testicular cancer is very high, but the chemotherapy can cause unpleasant, sometimes lasting side effects. That's why it's such good news to see that we can cut down the amount of treatment we give.
Read more at Science Daily
Jan 2, 2020
Life could have emerged from lakes with high phosphorus
Life as we know it requires phosphorus. It's one of the six main chemical elements of life, it forms the backbone of DNA and RNA molecules, acts as the main currency for energy in all cells and anchors the lipids that separate cells from their surrounding environment.
But how did a lifeless environment on the early Earth supply this key ingredient?
"For 50 years, what's called 'the phosphate problem,' has plagued studies on the origin of life," said first author Jonathan Toner, a University of Washington research assistant professor of Earth and space sciences.
The problem is that chemical reactions that make the building blocks of living things need a lot of phosphorus, but phosphorus is scarce. A new UW study, published Dec. 30 in the Proceedings of the National Academy of Sciences, finds an answer to this problem in certain types of lakes.
The study focuses on carbonate-rich lakes, which form in dry environments within depressions that funnel water draining from the surrounding landscape. Because of high evaporation rates, the lake waters concentrate into salty and alkaline, or high-pH, solutions. Such lakes, also known as alkaline or soda lakes, are found on all seven continents.
The researchers first looked at phosphorus measurements in existing carbonate-rich lakes, including Mono Lake in California, Lake Magadi in Kenya and Lonar Lake in India.
While the exact concentration depends on where the samples were taken and during what season, the researchers found that carbonate-rich lakes have up to 50,000 times the phosphorus levels found in seawater, rivers and other types of lakes. Such high concentrations point to the existence of some common, natural mechanism that accumulates phosphorus in these lakes.
Today these carbonate-rich lakes are biologically rich and support life ranging from microbes to Lake Magadi's famous flocks of flamingoes. These living things affect the lake chemistry. So researchers did lab experiments with bottles of carbonate-rich water at different chemical compositions to understand how the lakes accumulate phosphorus, and how high phosphorus concentrations could get in a lifeless environment.
The reason these waters have high phosphorus is their carbonate content. In most lakes, calcium, which is much more abundant on Earth, binds to phosphorus to make solid calcium phosphate minerals, which life can't access. But in carbonate-rich waters, the carbonate outcompetes phosphate to bind with calcium, leaving some of the phosphate unattached. Lab tests that combined ingredients at different concentrations show that calcium binds to carbonate and leaves the phosphate freely available in the water.
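A back-of-the-envelope solubility calculation captures the mechanism. The equilibrium constants below are rough literature values and the chemistry is deliberately simplified (real lakes involve pH-dependent speciation and apatite phases rather than simple Ca3(PO4)2), so treat this as an illustration rather than the paper's model.

```python
import numpy as np

KSP_CALCITE = 3.3e-9   # CaCO3 solubility product (rough value, 25 C)
KSP_CA3PO42 = 2.0e-33  # Ca3(PO4)2 solubility product (rough proxy value)

def max_phosphate(carbonate_molar: float) -> float:
    """Phosphate level at which calcium phosphate starts to precipitate,
    with free Ca2+ pinned by equilibrium with calcite."""
    ca = KSP_CALCITE / carbonate_molar     # [Ca2+] set by carbonate content
    return np.sqrt(KSP_CA3PO42 / ca ** 3)  # from [Ca]^3 [PO4]^2 = Ksp

for co3 in (1e-4, 1e-1):  # a dilute lake vs a carbonate-rich soda lake
    print(f"[CO3] = {co3:.0e} M -> max [PO4] ~ {max_phosphate(co3):.1e} M")
# More carbonate -> less free calcium -> orders of magnitude more phosphate
# can stay dissolved in the water.
```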
"It's a straightforward idea, which is its appeal," Toner said. "It solves the phosphate problem in an elegant and plausible way."
Phosphate levels could climb even higher, to a million times levels in seawater, when lake waters evaporate during dry seasons, along shorelines, or in pools separated from the main body of the lake.
"The extremely high phosphate levels in these lakes and ponds would have driven reactions that put phosphorus into the molecular building blocks of RNA, proteins, and fats, all of which were needed to get life going," said co-author David Catling, a UW professor of Earth & space sciences.
The carbon dioxide-rich air on the early Earth, some four billion years ago, would have been ideal for creating such lakes and allowing them to reach maximum levels of phosphorus. Carbonate-rich lakes tend to form in atmospheres with high carbon dioxide. Plus, carbon dioxide dissolves in water to create acidic conditions that efficiently release phosphorus from rocks.
"The early Earth was a volcanically active place, so you would have had lots of fresh volcanic rock reacting with carbon dioxide and supplying carbonate and phosphorus to lakes," Toner said. "The early Earth could have hosted many carbonate-rich lakes, which would have had high enough phosphorus concentrations to get life started."
Read more at Science Daily
Scientists link La Niña climate cycle to increased diarrhea
A study in Botswana by Columbia University Mailman School of Public Health scientists finds that spikes in cases of life-threatening diarrhea in young children are associated with La Niña climate conditions. The findings published in the journal Nature Communications could provide the basis for an early-warning system that would allow public health officials to prepare for periods of increased diarrhea cases as long as seven months ahead of time.
In low- and middle-income countries, diarrhea is the second leading cause of death in children younger than five years of age, with 72 percent of deaths occurring in the first two years of life. Rates of under-5 diarrhea in Africa are particularly high, with an estimated incidence of 3.3 episodes of diarrhea per child each year and one-quarter of all child deaths caused by diarrhea.
The El Niño-Southern Oscillation (ENSO) is a coupled ocean-atmosphere system spanning the equatorial Pacific Ocean that oscillates in a 3-to-7-year cycle between two extremes, El Niño (warmer ocean temperatures) and La Niña (cooler ocean temperatures). The ENSO cycle affects local weather patterns around the world, including temperatures, winds, and precipitation.
Researchers analyzed associations between ENSO and climate conditions and cases of under-5 diarrhea in the Chobe region in northeastern Botswana. They found that La Niña is associated with cooler temperatures, increased rainfall, and higher flooding during the rainy season. In turn, La Niña conditions lagged by 0-7 months are associated with about a 30-percent increase in the incidence of under-5 diarrhea in the early rainy season, from December through February.
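As a rough illustration of this kind of lagged analysis, the sketch below builds a synthetic monthly case series that responds to an ENSO index five months later, then scans lags of 0-7 months for the strongest correlation. The data and the simple Pearson correlation are stand-ins, not the study's Chobe time series or statistical model.

```python
# Synthetic stand-in for a lagged ENSO/diarrhea association check.
import numpy as np

rng = np.random.default_rng(0)
months = 120
enso = rng.normal(size=months)          # stand-in for monthly ENSO anomalies
true_lag = 5
# Cooler (La Niña) conditions -> more cases, hence the negative sign.
cases = 100 - 10 * np.roll(enso, true_lag) + rng.normal(scale=5, size=months)

for k in range(8):                      # scan lags of 0-7 months
    r = np.corrcoef(enso[:months - k], cases[k:])[0, 1]
    print(f"lag {k} months: r = {r:+.2f}")   # most negative near lag 5
```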
"These findings demonstrate the potential use of the El Niño-Southern Oscillation as a long-lead prediction tool for childhood diarrhea in southern Africa," says first author Alexandra K. Heaney, a former doctoral student in environmental health sciences at Columbia Mailman and now a postdoc at University of California, Berkeley. "Advanced stockpiling of medical supplies, preparation of hospital beds, and organization of healthcare workers could dramatically improve the ability of health facilities to manage high diarrheal disease incidence."
Previously, El Niño events have been linked to diarrhea outbreaks in Peru, Bangladesh, China, and Japan, but until now studies of the effects of ENSO on diarrheal disease in Africa have been limited to cholera -- a pathogen responsible for only a small fraction of diarrheal cases in Africa.
Infectious diarrhea is caused by many different pathogens (viruses, bacteria, and protozoa), and meteorological conditions can have a critical influence on pathogen exposures, in particular those associated with waterborne transmission. For example, extreme rainfall events may contaminate drinking water by flushing diarrhea-causing pathogens from pastures and dwellings into drinking water supplies, and drought conditions can concentrate animal activity, increasing the movement of diarrhea-causing pathogens into surface water resources.
Water Treatment Systems Appear To Be Strained
The researchers speculate that centralized water disinfection processes currently used in the Chobe region may be insufficient to deal with changes in water quality brought on by extremes of wet and dry weather, although they caution that further confirmatory studies are needed.
Earlier research by Columbia Mailman researchers in the Chobe region found that cases of diarrhea in young children spiked during extreme climate conditions, in both the wet and dry seasons. A second study reported on a method to forecast childhood diarrheal disease there. Because climate conditions vary from region to region, forecasts for infectious diseases must be region-specific. In other studies, the scientists have created forecasts for influenza, Ebola, and West Nile Virus. During the influenza season in the United States, they publish weekly regional forecasts with predictions on whether cases are expected to rise or fall and by how much.
Read more at Science Daily
Learning from the bears
Grizzly bears spend many months in hibernation, but their muscles do not suffer from the lack of movement. In the journal Scientific Reports, a team led by Michael Gotthardt reports on how they manage to do this. The grizzly bears' strategy could help prevent muscle atrophy in humans as well.
A grizzly bear only knows three seasons during the year. Its time of activity starts between March and May. Around September the bear begins to eat large quantities of food. And sometime between November and January, it falls into hibernation. From a physiological point of view, this is the strangest time of all. The bear's metabolism and heart rate drop rapidly. It excretes neither urine nor feces. The amount of nitrogen in the blood increases drastically and the bear becomes resistant to the hormone insulin.
A person could hardly survive this four-month phase in a healthy state. Afterwards, he or she would most likely have to cope with thromboses or psychological changes. Above all, the muscles would suffer from this prolonged period of disuse. Anyone who has ever had an arm or leg in a cast for a few weeks or has had to lie in bed for a long time due to an illness has probably experienced this.
A little sluggish, but otherwise fine
Not so the grizzly bear. In the spring, the bear wakes up from hibernation, perhaps still a bit sluggish at first, but otherwise well. Many scientists have long been interested in the bear's strategies for adapting to its three seasons.
A team led by Professor Michael Gotthardt, head of the Neuromuscular and Cardiovascular Cell Biology group at the Max Delbrueck Center for Molecular Medicine (MDC) in Berlin, has now investigated how the bear's muscles manage to survive hibernation virtually unharmed. The scientists from Berlin, Greifswald and the United States were particularly interested in the question of which genes in the bear's muscle cells are transcribed and converted into proteins, and what effect this has on the cells.
Understanding and copying the tricks of nature
"Muscle atrophy is a real human problem that occurs in many circumstances. We are still not very good at preventing it," says the lead author of the study, Dr. Douaa Mugahid, once a member of Gotthardt's research group and now a postdoctoral researcher in the laboratory of Professor Marc Kirschner of the Department of Systems Biology at Harvard Medical School in Boston.
"For me, the beauty of our work was to learn how nature has perfected a way to maintain muscle functions under the difficult conditions of hibernation," says Mugahid. "If we can better understand these strategies, we will be able to develop novel and non-intuitive methods to better prevent and treat muscle atrophy in patients."
Gene sequencing and mass spectrometry
To understand the bears' tricks, the team led by Mugahid and Gotthardt examined muscle samples from grizzly bears both during and between the times of hibernation, which they had received from Washington State University. "By combining cutting-edge sequencing techniques with mass spectrometry, we wanted to determine which genes and proteins are upregulated or shut down both during and between the times of hibernation," explains Gotthardt.
"This task proved to be tricky -- because neither the full genome nor the proteome, i.e., the totality of all proteins of the grizzly bear, were known," says the MDC scientist. In a further step, he and his team compared the findings with observations of humans, mice and nematode worms.
Non-essential amino acids allowed muscle cells to grow
As the researchers reported in the journal "Scientific Reports," they found proteins in their experiments that strongly influence a bear's amino acid metabolism during hibernation. As a result, its muscle cells contain higher amounts of certain non-essential amino acids (NEAAs).
"In experiments with isolated muscle cells of humans and mice that exhibit muscle atrophy, cell growth could also be stimulated by NEAAs," says Gotthardt, adding that "it is known, however, from earlier clinical studies that the administration of amino acids in the form of pills or powders is not enough to prevent muscle atrophy in elderly or bedridden people."
"Obviously, it is important for the muscle to produce these amino acids itself -- otherwise the amino acids might not reach the places where they are needed," speculates the MDC scientist. A therapeutic starting point, he says, could be the attempt to induce the human muscle to produce NEAAs itself by activating corresponding metabolic pathways with suitable agents during longer rest periods.
Tissue samples from bedridden patients
In order to find out which signaling pathways need to be activated in the muscle, Gotthardt and his team compared the activity of genes in grizzly bears, humans and mice. The required data came from elderly or bedridden patients and from mice suffering from muscle atrophy -- for example, as a result of reduced movement after the application of a plaster cast. "We wanted to find out which genes are regulated differently between animals that hibernate and those that do not," explains Gotthardt.
However, the scientists came across a whole series of such genes. To narrow down the possible candidates that could prove to be a starting point for muscle atrophy therapy, the team subsequently carried out experiments with nematode worms. "In worms, individual genes can be deactivated relatively easily and one can quickly see what effects this has on muscle growth," explains Gotthardt.
Read more at Science Daily
Many younger patients with stomach cancer have a distinct disease
Many people under 60 who develop stomach cancer have a "genetically and clinically distinct" disease, new Mayo Clinic research has discovered. Compared to stomach cancer in older adults, this new, early onset form often grows and spreads more quickly, has a worse prognosis, and is more resistant to traditional chemotherapy treatments, the study finds. The research was published recently in the journal Surgery.
While rates of stomach cancer in older patients have been declining for decades, this early onset cancer is increasing and now makes up more than 30% of stomach cancer diagnoses.
"I think this is an alarming trend, as stomach cancer is a devastating disease," says senior author Travis Grotz, M.D., a Mayo Clinic surgical oncologist. "There is little awareness in the U.S. of the signs and symptoms of stomach cancer, and many younger patients may be diagnosed late -- when treatment is less effective."
The research team studied 75,225 cases using several cancer databases to review stomach cancer statistics from 1973 to 2015. Today, the average age of someone diagnosed with stomach cancer is 68, but people in their 30s, 40s and 50s are more at risk than they used to be.
Although there's no clear cutoff age for the definition of early onset and late-onset stomach cancer, the researchers found the distinctions held true whether they used an age cutoff of 60, 50 or 40 years. The researchers found that the incidence of late-onset stomach cancer decreased by 1.8% annually during the study period, while the early onset disease decreased by 1.9% annually from 1973 to 1995 and then increased by 1.5% through 2013. The proportion of early onset gastric cancer has doubled from 18% of all cases in 1995 to now more than 30% of all gastric cancer cases.
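Figures like the 1.8% or 1.5% annual changes quoted above are typically annual percent changes (APCs), derived from a log-linear fit of incidence on calendar year. Here is a minimal sketch of that calculation on made-up rates; registry analyses usually apply joinpoint-style methods to find where trends change, which this toy version does not attempt.

```python
# Annual percent change (APC) from a log-linear fit of rate on year.
# The rates are synthetic, generated to rise 1.5% per year.
import numpy as np

years = np.arange(1995, 2014)
rates = 5.0 * 1.015 ** (years - 1995)     # cases per 100,000, made up

slope, _ = np.polyfit(years, np.log(rates), 1)
apc = (np.exp(slope) - 1) * 100
print(f"APC = {apc:+.1f}% per year")      # recovers roughly +1.5
```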
"Typically, we see stomach cancer being diagnosed in patients in their 70s, but increasingly we are seeing 30- to 50-year-old patients being diagnosed," Dr. Grotz says.
The increased rate of the early onset disease is not from earlier detection or screening, Dr. Grotz adds. "There is no universal screening for stomach cancer, and the younger patients actually presented with later-stage disease than the older patients," he says.
In addition to being more deadly, early onset stomach cancer is also genetically and molecularly distinct, researchers found. Furthermore, traditional risk factors for developing stomach cancer among older Americans, such as smoking tobacco, did not appear to correlate with its early onset counterpart.
"Hopefully, studies like this will raise awareness and increase physician suspicion of stomach cancer, particularly in younger patients," Dr. Grotz says. Younger patients who feel full before finishing a meal, or have reflux, abdominal pain, unintentional weight loss and difficulty eating should see their health care provider, he adds.
Stomach cancer is the 16th most common cancer in the U.S., according to the American Cancer Society. It has a five-year survival rate of 31.5%, and there will be an estimated 27,510 new cases in 2019, according to the National Cancer Institute. The World Health Organization reports that cancer was the second leading cause of death globally in 2018 and that stomach cancer was the third most common cause of cancer death that year.
Read more at Science Daily
Dec 31, 2019
Happy New Year
Dec 30, 2019
Mosquitoes can sense toxins through their legs
Researchers at LSTM have identified a completely new mechanism by which mosquitoes that carry malaria are becoming resistant to insecticide.
After studying both Anopheles gambiae and Anopheles coluzzii, two major malaria vectors in West Africa, they found that a particular family of binding proteins situated in the insect's legs was highly expressed in resistant populations.
First author on a paper published today in the journal Nature, Dr Victoria Ingham, explains: "We have found a completely new insecticide resistance mechanism that we think is contributing to the lower than expected efficacy of bed nets. The protein, which is based in the legs, comes into direct contact with the insecticide as the insect lands on the net, making it an excellent potential target for future additives to nets to overcome this potent resistance mechanism."
Examining the Anopheline mosquitoes, the team demonstrated that the binding protein SAP2 was elevated in resistant populations and further elevated following contact with pyrethroids, the insecticide class used on all bed nets. They found that when levels of this protein were reduced by partially silencing the gene, susceptibility to pyrethroids was restored; conversely, when the protein was expressed at elevated levels, previously susceptible mosquitoes became resistant to pyrethroids.
The increase in insecticide resistance across mosquito populations has led to the introduction of new insecticide-treated bed nets containing the synergist piperonyl butoxide (PBO) as well as pyrethroid insecticides. The synergist targets one of the most widespread and previously most potent resistance mechanisms, caused by the cytochrome P450s. However, mosquitoes are continually evolving new resistance mechanisms, and the discovery of this one provides an excellent opportunity to identify additional synergists that could be used to restore susceptibility.
Professor Hilary Ranson is senior author on the paper. She said: "Long-lasting insecticide treated bed nets remain one of the key interventions in malaria control. It is vital that we understand and mitigate for resistance within mosquito populations in order to ensure that the dramatic reductions in disease rates in previous decades are not reversed. This newly discovered resistance mechanism could provide us with an important target for both the monitoring of insecticide resistance and the development of novel compounds able to block pyrethroid resistance and prevent the spread of malaria."
From Science Daily
Using deep learning to predict disease-associated mutations
In recent years, artificial intelligence (AI) -- the capability of a machine to mimic human behavior -- has become a key player in high-tech fields such as drug development. AI tools help scientists uncover patterns hidden in big biological data using optimized computational algorithms. AI methods such as deep neural networks improve decision-making in biological and chemical applications, e.g., the prediction of disease-associated proteins, the discovery of novel biomarkers, and the de novo design of small-molecule drug leads. These state-of-the-art approaches help scientists develop potential drugs more efficiently and economically.
A research team led by Professor Hongzhe Sun from the Department of Chemistry at the University of Hong Kong (HKU), in collaboration with Professor Junwen Wang from Mayo Clinic, Arizona in the United States (a former HKU colleague), implemented a robust deep learning approach to predict disease-associated mutations of the metal-binding sites in proteins. This is the first deep learning approach for the prediction of disease-associated metal-relevant site mutations in metalloproteins, providing a new platform to tackle human diseases. The research findings were recently published in the journal Nature Machine Intelligence.
Metal ions play pivotal roles, either structural or functional, in the (patho)physiology of human biological systems. Metals such as zinc, iron and copper are essential for all life, and their concentrations in cells must be strictly regulated. A deficiency or an excess of these physiological metal ions can cause severe disease in humans. Mutations in the human genome are strongly associated with different diseases. If these mutations occur in the protein-coding regions of DNA, they can disrupt the metal-binding sites of proteins and consequently initiate severe diseases in humans. Understanding disease-associated mutations at the metal-binding sites of proteins will facilitate the discovery of new drugs.
The team first integrated omics data from different databases to build a comprehensive training dataset. By looking at the statistics of the collected data, the team found that different metals have different disease associations. Mutations in zinc-binding sites play a major role in breast, liver, kidney, immune system and prostate diseases. By contrast, mutations in calcium- and magnesium-binding sites are associated with muscular and immune system diseases, respectively. For iron-binding sites, mutations are more associated with metabolic diseases. Furthermore, mutations of manganese- and copper-binding sites are associated with cardiovascular diseases, with the latter being associated with nervous system disease as well.
The researchers used a novel approach to extract spatial features from the metal-binding sites using an energy-based affinity grid map. These spatial features were merged with physicochemical sequence features to train the model. The final results show that using the spatial features enhanced the performance of the prediction, with an area under the curve (AUC) of 0.90 and an accuracy of 0.82. Given the limited advanced techniques and platforms in the field of metallomics and metalloproteins, the proposed deep learning approach offers a method to integrate experimental data with bioinformatics analysis. The approach will help scientists predict DNA mutations associated with diseases such as cancer, cardiovascular diseases and genetic disorders.
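To make the general recipe concrete, here is a minimal sketch of the workflow described above: concatenate grid-derived spatial features with sequence-derived physicochemical features, train a small classifier, and report AUC and accuracy. The random data and the tiny scikit-learn network are placeholders, not the authors' dataset or architecture.

```python
# Placeholder pipeline: merge spatial grid features with sequence features,
# fit a small neural network, and score by AUC/accuracy. Synthetic data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(42)
n = 1000
spatial = rng.normal(size=(n, 64))    # e.g. flattened affinity-grid values
sequence = rng.normal(size=(n, 20))   # e.g. physicochemical sequence features
X = np.hstack([spatial, sequence])
y = (spatial[:, 0] + sequence[:, 0] > 0).astype(int)  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))
print("accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 2))
```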
Read more at Science Daily
Evolution: Revelatory relationship
A new study of the ecology of an enigmatic group of novel unicellular organisms by scientists from Ludwig-Maximilians-Universitaet (LMU) in Munich supports the idea that hydrogen played an important role in the evolution of Eukaryota, the first nucleated cells.
One of the most consequential developments in the history of biological evolution occurred approximately 2 billion years ago with the appearance of the first eukaryotes -- unicellular organisms that contain a distinct nucleus. This first eukaryotic lineage would subsequently give rise to all higher organisms including plants and animals, but its origins remain obscure. Some years ago, microbiologists analyzed DNA sequences from marine sediments, which shed new light on the problem. These sediments were recovered from a hydrothermal vent at a site known as Loki's Castle (named for the Norse god of fire) on the Mid-Atlantic Ridge in the Arctic Ocean. Sequencing of the DNA molecules they contained revealed that they were derived from a previously unknown group of microorganisms.
Although the cells from which the DNA originated could not be isolated and characterized directly, the sequence data showed them to be closely related to the Archaea. The researchers therefore named the new group Lokiarchaeota.
Archaea, together with Bacteria, are the oldest known lineages of single-celled organisms. Strikingly, the genomes of the Lokiarchaeota indicated that they might exhibit structural and biochemical features that are otherwise specific to eukaryotes. This suggests that the Lokiarchaeota might be related to the last common ancestor of eukaryotes. Indeed, phylogenomic analysis of the Lokiarchaeota DNA from Loki's Castle strongly suggested that they were derived from descendants of the last common ancestor of Eukaryota and Archaea. Professor William Orsi of the Department of Earth and Environmental Sciences at LMU, in cooperation with scientists at Oldenburg University and the Max Planck Institute for Marine Microbiology, has now been able to examine the activity and metabolism of the Lokiarchaeota directly. The results support the suggested relationship between Lokiarchaeota and eukaryotes, and provide hints as to the nature of the environment in which the first eukaryotes evolved. The new findings appear in the journal Nature Microbiology.
The most likely scenario for the emergence of eukaryotes is that they arose from a symbiosis in which the host was an archaeal cell and the symbiont was a bacterium. According to this theory, the bacterial symbiont subsequently gave rise to the mitochondria -- the intracellular organelles that are responsible for energy production in eukaryotic cells. One hypothesis proposes that the archaeal host was dependent on hydrogen for its metabolism, and that the precursor of the mitochondria produced it. This "hydrogen hypothesis" posits that the two partner cells presumably lived in an anoxic environment that was rich in hydrogen; if they were separated from the hydrogen source, they would have become more dependent on one another for survival, potentially leading to an endosymbiotic event. "If the Lokiarchaeota, as the descendants of this putative ur-archaeon, are also dependent on hydrogen, this would support the hydrogen hypothesis," says Orsi. "However, up to now, the ecology of these Archaea in their natural habitat was a matter of speculation."
Orsi and his team have now, for the first time, characterized the cellular metabolism of Lokiarchaeota recovered from sediment cores obtained from the seafloor in an extensive oxygen-depleted region off the coast of Namibia. They did so by analyzing the RNA present in these samples. RNA molecules are copied from the genomic DNA and serve as blueprints for the synthesis of proteins. Their sequences therefore reflect patterns and levels of gene activity. The sequence analyses revealed that Lokiarchaeota in these samples outnumbered bacteria by 100- to 1,000-fold. "That strongly indicates that these sediments are a favorable habitat for them, promoting their activity," says Orsi.
Read more at Science Daily
How cells learn to 'count'
One of the wonders of cell biology is its symmetry. Mammalian cells have one nucleus and one cell membrane, and most humans have 23 pairs of chromosomes. Trillions of mammalian cells achieve this uniformity -- but some consistently break this mold to fulfill unique functions. Now, a team of Johns Hopkins Medicine researchers has found how these outliers take shape.
In experiments with genetically engineered mice, a research team has ruled out a mechanism that scientists have long believed controls the number of hairlike structures, called cilia, protruding on the outside of each mammalian cell. They concluded that control of the cilia count might rely instead on a process more commonly seen in non-mammalian species.
The experiments, described Dec. 2 in Nature Cell Biology and led by Andrew Holland, Ph.D., associate professor of molecular biology and genetics at the Johns Hopkins University School of Medicine, may eventually help scientists learn more about human diseases related to cilia function, such as respiratory infections, infertility and hydrocephaly.
Cilia are ancient structures that first appeared on single-celled organisms as small hairlike "fingers" that act as motors to move the cell or antennae to sense the environment. Nearly all human cells have at least one cilium that senses physical or chemical cues. However, some specialized cell types in humans, such as those lining the respiratory and reproductive tracts, have hundreds of cilia on their surface that beat in waves to move fluids through the system.
"Our main question was how these multicilliated cells become so dramatically different than the rest of the cells in our body," says Holland. "Most cells make exactly one cilium per cell, but these highly specialized cells give up on this tight numerical control and make hundreds of cilia."
In an effort to answer the question, Holland and his team took a closer look at the base of cilia, the place where the organelles attach and grow from the surface of the cell. This base is a microscopic, cylinder-shaped structure called a centriole.
In single-ciliated cells, Holland says, centrioles are created before a cell divides. A cell contains two parent centrioles, each of which duplicates so that each new cell gets one pair of centrioles -- the oldest of these two centrioles then goes on to form the base of the cilium. However, multiciliated cells create unique structures, called deuterosomes, that act as a copy machine to enable the production of tens to hundreds of centrioles, allowing these cells to create many cilia.
"Deuterosomes are only present in multicilliated cells, and scientists have long thought they are central for determining how many centrioles and cilia are formed," says Holland.
To test this, Holland and his team developed a mouse model that lacked the gene that creates deuterosomes. Then, they analyzed the tissues that carry multiciliated cells and counted their cilia.
The researchers were surprised to find that the genetically engineered mice had the same number of cilia on cells as the mice with deuterosomes, ruling out a central role for deuterosomes in controlling the number of cilia. For example, the multiciliated cells lining the trachea all had 200-300 cilia per cell. The researchers also found that cells without deuterosomes could make new centrioles just as quickly as cells with them.
With this surprising result in hand, the researchers engineered mouse cells that lacked both deuterosomes and parent centrioles, and then counted the number of cilia formed in multiciliated cells.
"We figured that with no parent centrioles and no deuterosomes, the multicilliated cells would be unable to create the proper number of new cilia," says Holland.
Remarkably, Holland says, even the lack of parent centrioles had no effect on the final cilia number. Most cells in both normal and genetically engineered groups created between 50 and 90 cilia.
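A conclusion of "no effect on final cilia number" typically rests on comparing count distributions between genotypes. The sketch below runs a rank-sum test on simulated cilia counts in the 50-90 range quoted above; the numbers and the choice of test are illustrative assumptions, not the paper's analysis.

```python
# Simulated comparison of cilia counts per cell: control vs. engineered cells.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
control = rng.integers(50, 91, size=40)      # cilia per cell, normal cells
engineered = rng.integers(50, 91, size=40)   # no deuterosomes or parent centrioles

stat, p = mannwhitneyu(control, engineered)
print(f"U = {stat}, p = {p:.2f}")  # a high p-value is consistent with "no difference"
```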
"This finding changes the dogma of what we believed to be the driving force behind centriole assembly," explains Holland. "Instead of needing a platform to grow on, centrioles can be created spontaneously."
While uncommon in mammals, the so-called de novo generation of centrioles is not new to the animal kingdom. Some species, such as the small flatworm planaria, lack parent centrioles entirely, and rely on de novo centriole generation to create the cilia they use to move.
In further experiments on genetically engineered mice, Holland found that all the spontaneously created centrioles were assembled within a region of the cell rich with fibrogranular material -- the protein components necessary to build a centriole.
He says he suspects that proteins found in that little-understood area of the cell contain the essential elements necessary to construct centrioles and ultimately control the number of cilia that are formed. Everything else, the deuterosomes and even the parent centrioles, are "not strictly necessary," he says.
"We think that the deuterosomes function to relieve pressure on the parent centrioles from the demands of making many new centrioles, freeing up parent centrioles to fulfill other functions," says Holland.
Read more at Science Daily
Dec 29, 2019
'Lost crops' could have fed as many as maize
Make some room in the garden, you storied three sisters: the winter squash, climbing beans and the vegetable we know as corn. Grown together, newly examined "lost crops" could have produced enough seed to feed as many indigenous people as traditionally grown maize, according to new research from Washington University in St. Louis.
But there are no written or oral histories to describe them. The domesticated forms of the lost crops are thought to be extinct.
Writing in the Journal of Ethnobiology, Natalie Mueller, assistant professor of archaeology in Arts & Sciences, describes how she painstakingly grew and calculated yield estimates for two annual plants that were cultivated in eastern North America for thousands of years -- and then abandoned.
Growing goosefoot (Chenopodium sp.) and erect knotweed (Polygonum erectum) together is more productive than growing either one alone, Mueller discovered. Planted in tandem, along with the other known lost crops, they could have fed thousands.
Archaeologists found the first evidence of the lost crops in rock shelters in Kentucky and Arkansas in the 1930s. Seed caches and dried leaves were their only clues. Over the past 25 years, pioneering research by Gayle Fritz, professor emerita of archaeology at Washington University, helped to establish the fact that a previously unknown crop complex had supported local societies for millennia before maize -- a.k.a. corn -- was adopted as a staple crop.
But how, exactly, to grow them?
The lost crops include a small but diverse group of native grasses, seed plants, squashes and sunflowers -- of which only the squashes and sunflowers are still cultivated. For the rest, there is plenty of evidence that the lost crops were purposefully tended -- not just harvested from free-living stands in the wild -- but there are no instructions left.
"There are many Native American practitioners of ethnobotanical knowledge: farmers and people who know about medicinal plants, and people who know about wild foods. Their knowledge is really important," Mueller said. "But as far as we know, there aren't any people who hold knowledge about the lost crops and how they were grown.
"It's possible that there are communities or individuals who have knowledge about these plants, and it just isn't published or known by the academic community," she said. "But the way that I look at it, we can't talk to the people who grew these crops.
"So our group of people who are working with the living plants is trying to participate in the same kind of ecosystem that they participated in -- and trying to reconstruct their experience that way."
That means no greenhouse, no pesticides and no special fertilizers.
"You have not just the plants but also everything else that comes along with them, like the bugs that are pollinating them and the pests that are eating them. The diseases that affect them. The animals that they attract, and the seed dispersers," Mueller said. "There are all of these different kinds of ecological elements to the system, and we can interact with all of them."
Her new paper reported on two experiments designed to investigate germination requirements and yields for the lost crops.
Mueller discovered that a polyculture of goosefoot and erect knotweed is more productive than either grown separately as a monoculture. Grown together, the two plants have higher yields than global averages for closely related domesticated crops (think: quinoa and buckwheat), and they are within the range of those for traditionally grown maize.
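Comparisons like this are often summarized with a land equivalent ratio (LER): the sum of each crop's yield in the mixed planting divided by its yield grown alone, where a value above 1.0 means the polyculture out-produces the same land split into monocultures. Below is a minimal sketch of that calculation in Python; the yield figures are made-up placeholders for illustration, not Mueller's measurements.

    # Land equivalent ratio (LER) for a two-crop polyculture.
    # An LER above 1.0 means the mixed planting out-produces growing
    # the same crops separately on the same total area.
    def land_equivalent_ratio(intercrop_yields, monoculture_yields):
        return sum(
            intercrop_yields[crop] / monoculture_yields[crop]
            for crop in intercrop_yields
        )

    # Hypothetical seed yields in kg/ha -- illustrative numbers only,
    # not data from the Journal of Ethnobiology study.
    mono = {"goosefoot": 1000.0, "erect_knotweed": 800.0}
    inter = {"goosefoot": 700.0, "erect_knotweed": 550.0}

    print(f"LER = {land_equivalent_ratio(inter, mono):.2f}")
    # Prints "LER = 1.39": each crop yields less per plant in the mix,
    # but together they out-produce the split monocultures.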
"The main reason that I'm really interested in yield is because there's a debate within archeology about why these plants were abandoned," Mueller said. "We haven't had a lot of evidence about it one way or the other. But a lot of people have just kind of assumed that maize would be a lot more productive because we grow maize now, and it's known to be one of the most productive crops in the world per unit area."
Mueller wanted to quantify yield in this experiment so that she could directly compare yield for these plants to maize for the first time.
But it didn't work out perfectly. She was only able to obtain yield estimates for two of the five lost crops that she tried to grow -- but not for the plants known as maygrass, little barley and sumpweed.
Read more at Science Daily
Powder, not gas: A safer, more effective way to create a star on Earth
A major issue with operating ring-shaped fusion facilities known as tokamaks is keeping the plasma that fuels fusion reactions free of impurities that could reduce the efficiency of the reactions. Now, scientists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) have found that sprinkling a type of powder into the plasma could aid in harnessing the ultra-hot gas within a tokamak facility to produce heat to create electricity without producing greenhouse gases or long-term radioactive waste.
Fusion, the power that drives the sun and stars, combines light elements in the form of plasma -- the hot, charged state of matter composed of free electrons and atomic nuclei -- that generates massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
"The main goal of the experiment was to see if we could lay down a layer of boron using a powder injector," said PPPL physicist Robert Lunsford, lead author of the paper reporting the results in Nuclear Fusion. "So far, the experiment appears to have been successful."
The boron prevents an element known as tungsten from leaching out of the tokamak walls into the plasma, where it can cool the plasma particles and make fusion reactions less efficient. A layer of boron is applied to plasma-facing surfaces in a process known as "boronization." Scientists want to keep the plasma as hot as possible -- at least ten times hotter than the surface of the sun -- to maximize the fusion reactions and therefore the heat to create electricity.
Using powder to provide boronization is also far safer than using a boron gas called diborane, the method used today. "Diborane gas is explosive, so everybody has to leave the building housing the tokamak during the process," Lunsford said. "On the other hand, if you could just drop some boron powder into the plasma, that would be a lot easier to manage. While diborane gas is explosive and toxic, boron powder is inert," he added. "This new technique would be less intrusive and definitely less dangerous."
Another advantage is that while physicists must halt tokamak operations during the boron gas process, boron powder can be added to the plasma while the machine is running. This feature is important because to provide a constant source of electricity, future fusion facilities will have to run for long, uninterrupted periods of time. "This is one way to get to a steady-state fusion machine," Lunsford said. "You can add more boron without having to completely shut down the machine."
There are other reasons to use a powder dropper to coat the inner surfaces of a tokamak. For example, the researchers discovered that injecting boron powder has the same benefit as puffing nitrogen gas into the plasma -- both techniques increase the heat at the plasma edge, which increases how well the plasma stays confined within the magnetic fields.
The powder dropper technique also gives scientists an easy way to create low-density fusion plasmas, important because low density allows plasma instabilities to be suppressed by magnetic pulses, a relatively simple way to improve fusion reactions. Scientists could use powder to create low-density plasmas at any time, rather than waiting for a gaseous boronization. Being able to create a wide range of plasma conditions easily in this way would enable physicists to explore the behavior of plasma more thoroughly.
Read more at Science Daily