A new blood test in development has shown the ability to screen for numerous types of cancer with a high degree of accuracy, according to a trial of the test. Dana-Farber Cancer Institute investigators will present the results of the multi-center trial during a session today at the European Society for Medical Oncology (ESMO) 2019 Congress.
The test, developed by GRAIL, Inc., uses next-generation sequencing technology to probe DNA for tiny chemical tags (methylation) that influence whether genes are active or inactive. When applied to nearly 3,600 blood samples -- some from patients with cancer, some from people who had not been diagnosed with cancer at the time of the blood draw -- the test successfully picked up a cancer signal from the cancer patient samples, and correctly identified the tissue where the cancer began (the tissue of origin). The test's specificity -- its ability to return a positive result only when cancer is actually present -- was high, as was its ability to pinpoint the organ or tissue of origin, researchers found.
The new test looks for DNA, which cancer cells shed into the bloodstream when they die. In contrast to "liquid biopsies," which detect genetic mutations or other cancer-related alterations in DNA, the technology focuses on modifications to DNA known as methyl groups. Methyl groups are chemical units that can be attached to DNA, in a process called methylation, to control which genes are "on" and which are "off." Abnormal patterns of methylation turn out to be, in many cases, more indicative of cancer -- and cancer type -- than mutations are. The new test zeroes in on portions of the genome where abnormal methylation patterns are found in cancer cells.
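To make the methylation idea concrete, here is a toy illustration -- not GRAIL's actual classifier, which the article does not describe and which is far more complex. Sequencing reads at a marker region can be summarized as the fraction carrying methyl tags, and tumor-derived DNA shifts that fraction away from the normal baseline. All counts below are hypothetical.

```python
# Toy sketch: summarizing methylation at marker regions as a read fraction.
# Illustrative only; the real assay's scoring is not described in the article.

def methylation_fraction(methylated_reads: int, total_reads: int) -> float:
    """Fraction of sequencing reads at a region that carry methyl tags."""
    return methylated_reads / total_reads if total_reads else 0.0

# Hypothetical (methylated, total) read counts at three marker regions.
sample = [(45, 50), (40, 50), (38, 50)]
score = sum(methylation_fraction(m, t) for m, t in sample) / len(sample)
print(f"mean methylation fraction: {score:.2f}")  # 0.82
```

A classifier would compare such per-region fractions against patterns learned from known cancer and non-cancer samples.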
"Our previous work indicated that methylation-based assays outperform traditional DNA-sequencing approaches to detecting multiple forms of cancer in blood samples," said the study's lead author, Geoffrey Oxnard, MD, of Dana-Farber. "The results of the new study demonstrate that such assays are a feasible way of screening people for cancer."
In the study, investigators analyzed cell-free DNA (DNA that had once been confined to cells but had entered the bloodstream upon the cells' death) in 3,583 blood samples, including 1,530 from patients diagnosed with cancer and 2,053 from people without cancer. The patient samples comprised more than 20 types of cancer, including hormone receptor-negative breast, colorectal, esophageal, gallbladder, gastric, head and neck, lung, lymphoid leukemia, multiple myeloma, ovarian, and pancreatic cancer.
The overall specificity was 99.4%, meaning only 0.6% of the results incorrectly indicated that cancer was present. The sensitivity of the assay for detecting a pre-specified group of high-mortality cancers (the percentage of blood samples from these patients that tested positive for cancer) was 76%. Within this group, the sensitivity was 32% for patients with stage I cancer; 76% for those with stage II; 85% for stage III; and 93% for stage IV. Sensitivity across all cancer types was 55%, with similar increases in detection by stage. For the 97% of samples that returned a tissue of origin result, the test correctly identified the organ or tissue of origin in 89% of cases.
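The reported metrics follow the standard definitions: sensitivity is the fraction of cancer samples that test positive, and specificity is the fraction of cancer-free samples that test negative. A small sketch shows how the 99.4% figure relates to raw counts; the confusion-matrix counts below are inferred for illustration, not taken from the trial.

```python
# Sketch of the screening metrics reported above. Counts are illustrative.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of cancer samples that test positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of cancer-free samples that test negative."""
    return true_neg / (true_neg + false_pos)

# A 99.4% specificity on the 2,053 cancer-free samples implies roughly
# 0.6% of them, about 12, returned a false positive.
false_pos = round((1 - 0.994) * 2053)
true_neg = 2053 - false_pos

print(f"false positives: {false_pos}")                    # 12
print(f"specificity: {specificity(true_neg, false_pos):.3f}")  # 0.994
```

The same arithmetic applied to the 76% sensitivity figure would mean roughly a quarter of the high-mortality cancer samples went undetected.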
Read more at Science Daily
Sep 28, 2019
Your energy-efficient washing machine could be harboring pathogens
For the first time ever, investigators have identified a washing machine as a reservoir of multidrug-resistant pathogens. The pathogens, a single clone of Klebsiella oxytoca, were transmitted repeatedly to newborns in a neonatal intensive care unit at a German children's hospital. The transmission was stopped only when the washing machine was removed from the hospital. The research is published this week in Applied and Environmental Microbiology, a journal of the American Society for Microbiology.
"This is a highly unusual case for a hospital, in that it involved a household type washing machine," said first author Ricarda M. Schmithausen, PhD. Hospitals normally use special washing machines and laundry processes that wash at high temperatures and with disinfectants, according to the German hospital hygiene guidelines, or they use designated external laundries.
The research has implications for household use of washers, said Dr. Schmithausen, Senior Physician, Institute for Hygiene and Public Health, WHO Collaboration Center, University Hospital, University of Bonn, Germany. Water temperatures used in home washers have been declining, to save energy, to well below 60°C (140°F), rendering them less lethal to pathogens. Resistance genes, as well as different microorganisms, can persist in domestic washing machines at those reduced temperatures, according to the report.
"If elderly people requiring nursing care with open wounds or bladder catheters, or younger people with suppurating injuries or infections live in the household, laundry should be washed at higher temperatures, or with efficient disinfectants, to avoid transmission of dangerous pathogens," said Martin Exner, MD, Chairman and Director of the Institute for Hygiene and Public Health, WHO Collaboration Center, University Hospital/University of Bonn. "This is a growing challenge for hygienists, as the number of people receiving nursing care from family members is constantly increasing."
At the hospital where the washing machine transmitted K. oxytoca, standard screening procedures revealed the presence of the pathogens on infants in the ICU. The researchers ultimately traced the source of the pathogens to the washing machine, after they had failed to find contamination in the incubators or to find carriers among healthcare workers who came into contact with the infants.
The newborns were in the ICU due mostly to premature birth or unrelated infection. The clothes that transmitted K. oxytoca from the washer to the infants were knitted caps and socks used to help keep them warm, as newborns can quickly become cold, even in incubators, said Dr. Exner.
The investigators assume that the pathogens "were disseminated to the clothing after the washing process, via residual water on the rubber mantle [of the washer] and/or via the final rinsing process, which ran unheated and detergent-free water through the detergent compartment," implicating the design of the washers, as well as the low heat, according to the report. The study implies that changes in washing machine design and processing are required to prevent the accumulation of residual water where microbial growth can occur and contaminate clothes.
However, it remains unclear how, and from what source, the pathogens got into the washing machine.
The infants in the intensive care units (ICU) were colonized, but not infected by K. oxytoca. Colonization means that pathogens are harmlessly present, either because they have not yet invaded tissues where they can cause disease, or because the immune system is effectively repelling them.
Read more at Science Daily
Sep 27, 2019
Many gas giant exoplanets waiting to be discovered
There is an as-yet-unseen population of Jupiter-like planets orbiting nearby Sun-like stars, awaiting discovery by future missions like NASA's WFIRST space telescope, according to new models of gas giant planet formation by Carnegie's Alan Boss, described in an upcoming publication in The Astrophysical Journal. His models are supported by a new Science paper on the surprising discovery of a gas giant planet orbiting a low-mass star.
"Astronomers have struck a bonanza in searching for and detecting exoplanets of every size and stripe since the first confirmed exoplanet, a hot Jupiter, was discovered in 1995," Boss explained. "Literally thousands upon thousands have been found to date, with masses ranging from less than that of Earth, to many times the mass of Jupiter."
But there are still gaping holes in scientists' knowledge about exoplanets that orbit their stars at distances similar to those at which our Solar System's gas giants orbit the Sun. In terms of mass and orbital period, planets like Jupiter represent a particularly small population of the known exoplanets, but it's not yet clear if this is due to biases in the observational techniques used to find them -- which favor planets with short-period orbits over those with long-period orbits -- or if this represents an actual deficit in exoplanet demographics.
All the recent exoplanet discoveries have led to a renewed focus on theoretical planet formation models. Two primary mechanisms exist for predicting how gas giant planets form from the rotating disk of gas and dust that surrounds a young star -- bottom-up, called core accretion, and top-down, called disk instability.
The former refers to slowly building a planet through the collisions of increasingly larger material -- solid dust grains, pebbles, boulders, and eventually planetesimals. The latter refers to a rapidly triggered process that occurs when the disk is massive and cool enough to form spiral arms and then dense clumps of self-gravitating gas and dust contract and coalesce into a baby planet.
While core accretion is considered the consensus planet-formation mechanism, Boss has long been a proponent of the competing disk instability mechanism, dating back to a seminal 1997 Science paper.
The just-published discovery, by a team led by the Institute for Space Studies of Catalonia, of a star that's a tenth the mass of our Sun yet hosts at least one gas giant planet is challenging the core-accretion mechanism.
The mass of a disk should be proportional to the mass of the young star around which it rotates. The fact that at least one gas giant -- possibly two -- was found around a star that's so much smaller than our Sun indicates that either the original disk was enormous, or that core accretion does not work in this system. Orbital periods for lower mass stars are longer, which prevents core accretion from forming gas giants before the disk gas disappears, as core accretion is a much slower process than disk instability, according to Boss.
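The point about longer orbital periods follows from Kepler's third law: in solar units, P = sqrt(a^3 / M), so at a fixed orbital distance the period grows as the stellar mass shrinks. A quick sketch with an illustrative Jupiter-like orbit of 5.2 AU:

```python
# Kepler's third law in solar units: P [years] = sqrt(a[AU]^3 / M[solar masses]).
# Shows why orbits around low-mass stars are slower, squeezing the time
# available for core accretion before the disk gas disappears.
import math

def orbital_period_years(a_au: float, stellar_mass_msun: float) -> float:
    """Orbital period from Kepler's third law (solar units)."""
    return math.sqrt(a_au**3 / stellar_mass_msun)

# A Jupiter-like orbit (5.2 AU) around the Sun vs. a 0.1-solar-mass star.
p_sun = orbital_period_years(5.2, 1.0)    # ~11.9 years
p_small = orbital_period_years(5.2, 0.1)  # ~37.5 years, about 3.2x longer
print(f"{p_sun:.1f} yr vs {p_small:.1f} yr")
```

Since accretion proceeds over many orbits, a tripling of the period at the same distance substantially slows the buildup of a core relative to the disk's lifetime.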
"It's a great vindication for the disk instability method and a demonstration of how one unusual discovery can swing the pendulum on our understanding of how planets form," said one of the IEEC research team's members, Guillem Anglada-Escudé, himself a former Carnegie postdoc.
Boss' latest simulations follow the three-dimensional evolution of hot disks that start out in a stable configuration. On a variety of time scales, these disks cool down and form spiral arms, eventually resulting in dense clumps representing newborn protoplanets. Their masses and distances from the host star are similar to those of Jupiter and Saturn.
Read more at Science Daily
Anxiety disorders linked to disturbances in the cells' powerhouses
The mitochondria, the powerhouses of the cell, provide energy for cellular functions. But those activities can become disturbed when chronic stress leads to anxiety symptoms in mice and humans. Iiris Hovatta of the University of Helsinki and colleagues report these findings in a new study published 26th September in PLOS Genetics.
Chronic stress due to stressful life events, such as divorce, unemployment, loss of a loved one, and war, is a major risk factor for developing panic attacks and anxiety disorders. Not all people who experience stressful life events go on to develop a disorder, however, and scientists are trying to identify the pathways that lead some people to be resilient to stress while others become vulnerable to anxiety.

In the current study, researchers studied mice that developed symptoms of anxiety and depression, such as avoiding social interactions, after being exposed to high levels of stress. Using a multi-pronged approach, they tracked changes in gene activity and protein production in a key brain region for stress response and anxiety. The analysis pointed to a number of changes in the mitochondria of the brain cells of mice exposed to frequent stress, compared to non-stressed mice. Furthermore, testing of blood samples collected from patients with panic disorder after a panic attack also showed differences in mitochondrial pathways, suggesting that changes to cellular energy metabolism may be a common way that animals respond to stress.
The discovery that high levels of stress may substantially impact the functioning of the powerhouses of the cell opens up new avenues of research into stress-related diseases. "Very little is known about how chronic stress may affect cellular energy metabolism and thereby influence anxiety symptoms," said author Iiris Hovatta. "The underlying mechanisms may offer a key to new targets for therapeutic interventions of stress-related diseases."
Further studies of what causes these changes to the mitochondria may provide much needed insight into the molecular basis of panic disorder and other anxiety disorders. This is a critical step in developing better therapies to treat anxiety.
From Science Daily
Sleep varies by age, geographical location and gender
In an exceptionally extensive worldwide study on sleep, nearly a quarter of a million nights of sleep were measured among sleepers between 16 and 30 years of age.
The findings indicate that there are differences in the duration and timing of sleep by age, geographical region and gender. The timing of sleep was delayed among 16-24-year-old subjects, but in older subjects sleep was again timed earlier.
"It was interesting to find that the circadian rhythm shifts later even in people over 20 years of age. It was already previously known that sleep timing is delayed in adolescence. What was clearly highlighted in this study is how long into adulthood this actually carries on," says Liisa Kuula, a postdoctoral researcher at the University of Helsinki.
People in Europe and North America slept the longest, while the shortest sleep was observed in Asian countries. Sleep was timed the latest in the Middle East, while the earliest sleep rhythm was found in Oceania.
Young women slept more than young men, and the former also went to sleep earlier.
"Geographical differences were relatively small but similar to those seen in prior, smaller-scale studies. The need for sleep does not vary greatly between cultures, but differences arise in terms of the time reserved for sleeping," Kuula notes.
In the study, published in the Sleep Medicine journal, the sleeping habits of more than 17,000 adolescents and young adults were monitored for two weeks. The monitoring was carried out with the help of Polar Electro devices worn by the study subjects, measuring sleep with accelerometers, among other technologies. The subjects gave consent for using their personal data for research purposes, with the data being processed in anonymised form.
"We gained an exceptionally diverse and extensive dataset which provides important basic knowledge on sleep among different age groups across the globe. Validated consumer devices may hold the potential for investigations more comprehensive than those conducted with conventional data collection methods," Kuula says.
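As a rough illustration of how such device-derived records can be summarized by group, the sketch below averages sleep duration by region. The records, field layout, and numbers are hypothetical; the study's actual pipeline is not described in the article.

```python
# Illustrative aggregation of accelerometer-derived sleep records by region.
# Records and values are hypothetical, not the study's data.
from statistics import mean

# Hypothetical records: (region, age, sleep_hours, sleep_onset_hour)
records = [
    ("Europe", 18, 8.1, 23.5),
    ("Europe", 27, 7.6, 23.0),
    ("Asia", 19, 7.0, 24.2),
    ("Asia", 28, 6.8, 23.8),
]

def mean_sleep_by_region(rows):
    """Group records by region and average the sleep-duration field."""
    by_region = {}
    for region, _age, hours, _onset in rows:
        by_region.setdefault(region, []).append(hours)
    return {region: mean(hrs) for region, hrs in by_region.items()}

print(mean_sleep_by_region(records))
```

The same grouping pattern would apply to sleep timing (onset hour) by age band or by gender.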
From Science Daily
Molecule links weight gain to gut bacteria
UT Southwestern researchers have found a key driver of the crosstalk that helps synchronize the absorption of nutrients in the gut with the rhythms of the Earth's day-night light cycle.
Their findings could have far-ranging implications for obesity in affluent countries and malnutrition in impoverished countries.
In the study, published this week by Science, Dr. Lora Hooper and her research team found that the commensal, or good, bacteria that live in the guts of mammals program the metabolic rhythms that govern the body's absorption of dietary fat. Dr. Hooper, Chair of Immunology and a Howard Hughes Medical Institute Investigator, is senior author of the study.
The study also found that microbes program these so-called circadian rhythms by activating a protein named histone deacetylase 3 (HDAC3), which is made by cells that line the gut. Those cells act as intermediaries between bacteria that aid in digestion of food and proteins that enable absorption of nutrients.
The study, done in mice, revealed that HDAC3 turns on genes involved in the absorption of fat. They found that HDAC3 interacts with the biological clock machinery within the gut to refine the rhythmic ebb and flow of proteins that enhance absorption of fat. This regulation occurs in the daytime in humans, who eat during the day, and at night in mice, which eat at night.
"The microbiome actually communicates with our metabolic machinery to make fat absorption more efficient. But when fat is overabundant, this communication can result in obesity. Whether the same thing is going on in other mammals, including humans, is the subject of future studies," added lead author Dr. Zheng Kuang, a postdoctoral fellow in the Hooper laboratory.
The story really starts with a few mice and crosstalk between two laboratories at UT Southwestern.
Dr. Hooper, who runs the University's colony of germ-free mice, which are raised in environments that have no microbes, is also a Professor of Immunology and Microbiology and a member of the Center for the Genetics of Host Defense. She holds the Jonathan W. Uhr, M.D. Distinguished Chair in Immunology, and is a Nancy Cain and Jeffrey A. Marcus Scholar in Medical Research, in Honor of Bill S. Vowell.
Histone modifications -- which are made by enzymes like HDAC3 -- control the expression of genes that in turn make proteins that carry out the work of the cell. Not long ago, the Hooper laboratory decided to do a mouse study of histone modifications that seemed to rise and fall along with circadian rhythms.
In comparing normal, bacteria-laden mice with germ-free ones, researchers discovered some histone modifications -- including those made by HDAC3 -- were circadian in normal mice, but held steady at a flat level in germ-free mice.
That's when Dr. Hooper contacted Dr. Eric Olson, Chair of Molecular Biology and Director of the Hamon Center for Regenerative Science and Medicine, who had done studies on HDAC3 in a different tissue, the heart. The two laboratories collaborated to develop a mouse that lacked HDAC3 only in the gut lining.
The mice they generated seemed unremarkable while eating a normal chow diet. However, when the researchers fed the mice a high-fat, high-sugar diet similar to one commonly consumed in the United States, they found something very different.
"We call it the junk food diet. I describe it as like driving through a fast food restaurant for a burger and fries and then stopping off at the donut shop," she said. "Most mice on that diet become obese. To our surprise, those that had no HDAC3 in their intestinal lining were able to eat a high fat, high sugar diet and stay lean."
Next, they compared the HDAC3-deficient mice to the germ-free mice. The researchers found that both groups of mice showed the same flat, nonrhythmic histone modifications, confirming HDAC3's importance in circadian rhythms.
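One common way circadian biologists distinguish a rhythmic signal from a flat one is a least-squares cosine fit ("cosinor" analysis): a 24-hour rhythm yields a large fitted amplitude, while a flat signal yields an amplitude near zero. The sketch below shows the idea on synthetic data; the study's actual statistics may differ.

```python
# Cosinor-style rhythmicity check: amplitude of the best-fit 24 h cosine.
# Synthetic data only; illustrates the rhythmic-vs-flat comparison above.
import math

def cosinor_amplitude(times_h, values, period_h=24.0):
    """Amplitude of the least-squares cosine fit at the given period
    (assumes even sampling over whole periods)."""
    n = len(times_h)
    a = sum(v * math.cos(2 * math.pi * t / period_h)
            for t, v in zip(times_h, values)) * 2 / n
    b = sum(v * math.sin(2 * math.pi * t / period_h)
            for t, v in zip(times_h, values)) * 2 / n
    return math.hypot(a, b)

times = [0, 4, 8, 12, 16, 20]  # hours, evenly sampled over one day
rhythmic = [math.cos(2 * math.pi * t / 24) for t in times]  # oscillating
flat = [1.0] * 6                                            # constant

print(cosinor_amplitude(times, rhythmic))  # ~1.0: rhythmic
print(cosinor_amplitude(times, flat))      # ~0.0: flat
```

Applied to histone-modification levels sampled around the clock, this kind of fit separates the rhythmic profile of normal mice from the flat profile of germ-free and HDAC3-deficient mice.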
Every cell in the body has a molecular clock that governs bodily processes. The mouse study revealed that HDAC3 attaches to that cellular clock machinery to ensure absorption of fat is highest when mammals are awake and eating.
"Our results suggest that the microbiome and the circadian clock have evolved to work together to regulate metabolism," she said.
Why would a system evolve to make us fat? Dr. Hooper believes it could have evolved to enable mammals to use energy efficiently in order to boost immunity in an environment with food scarcity.
"This regulatory interaction probably didn't evolve to make us obese, but when combined with today's calorie-rich diets, obesity arises," she said, adding that this is speculation and the team is still working to understand all the components of the pathway.
Read more at Science Daily
Their findings could have far-ranging implications for obesity in affluent countries and malnutrition in impoverished countries.
In the study, published this week by Science, Dr. Lora Hooper and her research team found that the commensal, or good, bacteria that live in the guts of mammals program the metabolic rhythms that govern the body's absorption of dietary fat. Dr. Hooper, Chair of Immunology and a Howard Hughes Medical Institute Investigator, is senior author of the study.
The study also found that microbes program these so-called circadian rhythms by activating a protein named histone deacetylase 3 (HDAC3), which is made by cells that line the gut. Those cells act as intermediaries between bacteria that aid in digestion of food and proteins that enable absorption of nutrients.
The study, done in mice, revealed that HDAC3 turns on genes involved in the absorption of fat. They found that HDAC3 interacts with the biological clock machinery within the gut to refine the rhythmic ebb and flow of proteins that enhance absorption of fat. This regulation occurs in the daytime in humans, who eat during the day, and at night in mice, which eat at night.
"The microbiome actually communicates with our metabolic machinery to make fat absorption more efficient. But when fat is overabundant, this communication can result in obesity. Whether the same thing is going on in other mammals, including humans, is the subject of future studies," added lead author Dr. Zheng Kuang, a postdoctoral fellow in the Hooper laboratory.
To go back in time, the story really starts with a few mice and crosstalk between two laboratories at UT Southwestern.
Dr. Hooper, who runs the University's colony of germ-free mice, which are raised in environments that have no microbes, is also a Professor of Immunology and Microbiology and a member of the Center for the Genetics of Host Defense. She holds the Jonathan W. Uhr, M.D. Distinguished Chair in Immunology, and is a Nancy Cain and Jeffrey A. Marcus Scholar in Medical Research, in Honor of Bill S. Vowell.
Histone modifications -- which are made by enzymes like HDAC3 -- control the expression of genes that in turn make proteins that carry out the work of the cell. Not long ago, the Hooper laboratory decided to do a mouse study of histone modifications that seemed to rise and fall along with circadian rhythms.
In comparing normal, bacteria-laden mice with germ-free ones, researchers discovered some histone modifications -- including those made by HDAC3 -- were circadian in normal mice, but held steady at a flat level in germ-free mice.
That's when Dr. Hooper contacted Dr. Eric Olson, Chair of Molecular Biology and Director of the Hamon Center for Regenerative Science and Medicine, who had done studies on HDAC3 in a different tissue, the heart. The two laboratories collaborated to develop a mouse that lacked HDAC3 only in the gut lining.
The mice they generated seemed unremarkable while eating a normal chow diet. However, when the researchers fed the mice a high-fat, high-sugar diet similar to one commonly consumed in the United States, they found something very different.
"We call it the junk food diet. I describe it as like driving through a fast food restaurant for a burger and fries and then stopping off at the donut shop," she said. "Most mice on that diet become obese. To our surprise, those that had no HDAC3 in their intestinal lining were able to eat a high fat, high sugar diet and stay lean."
Next, they compared the HDAC3-deficient mice to the germ-free mice. The researchers found that both groups of mice showed the same flat, nonrhythmic histone modifications, confirming HDAC3's importance in circadian rhythms.
Every cell in the body has a molecular clock that governs bodily processes. The mouse study revealed that HDAC3 attaches to that cellular clock machinery to ensure absorption of fat is highest when mammals are awake and eating.
"Our results suggest that the microbiome and the circadian clock have evolved to work together to regulate metabolism," she said.
Why would a system evolve to make us fat? Dr. Hooper believes it could have evolved to enable mammals to use energy efficiently in order to boost immunity in an environment with food scarcity.
"This regulatory interaction probably didn't evolve to make us obese, but when combined with today's calorie-rich diets, obesity arises," she said, adding that this is speculation and the team is still working to understand all the components of the pathway.
Read more at Science Daily
Otherworldly worms with three sexes discovered in Mono Lake
Mono Lake, Tufa State Natural Reserve, near Lee Vining, California.
Mono Lake, located in the Eastern Sierra of California, is three times as salty as the ocean and has an alkaline pH of 10. Before this study, only two animal species (aside from bacteria and algae) were known to live in the lake -- brine shrimp and diving flies. In this new work, the team discovered eight more species, all belonging to a class of microscopic worms called nematodes, thriving in and around Mono Lake.
The work was done primarily in the laboratory of Paul Sternberg, Bren Professor of Biology. A paper describing the research appears online on September 26 in the journal Current Biology.
The Sternberg laboratory has had a long interest in nematodes, particularly Caenorhabditis elegans, which uses only 300 neurons to exhibit complex behaviors, such as sleeping, learning, smelling, and moving. That simplicity makes it a useful model organism with which to study fundamental neuroscience questions. Importantly, C. elegans can easily thrive in the laboratory under normal room temperatures and pressures.
As nematodes are considered the most abundant type of animal on the planet, former Sternberg lab graduate students Pei-Yin Shih (PhD '19) and James Siho Lee (PhD '19) thought they might find them in the harsh environment of Mono Lake. The eight species they found are diverse, ranging from benign microbe-grazers to parasites and predators. Importantly, all are resilient to the arsenic-laden conditions in the lake and are thus considered extremophiles -- organisms that thrive in conditions unsuitable for most life forms.
When comparing one of the newly found species, a member of the genus Auanema, to sister species in the same genus, the researchers found that those similar species also demonstrated high arsenic resistance, even though they do not live in environments with high arsenic levels. In another surprising discovery, the new Auanema species itself proved able to thrive in the laboratory under normal, non-extreme conditions; only a few known extremophiles in the world can be studied in a laboratory setting.
This suggests that nematodes may have a genetic predisposition for resiliency and flexibility in adapting to harsh and benign environments alike.
"Extremophiles can teach us so much about innovative strategies for dealing with stress," says Shih. "Our study shows we still have much to learn about how these 1000-celled animals have mastered survival in extreme environments."
The researchers plan to determine if there are particular biochemical and genetic factors that enable nematodes' success and to sequence the genome of Auanema sp. to look for genes that may enable arsenic resistance. Arsenic-contaminated drinking water is a major global health concern; understanding how eukaryotes like nematodes deal with arsenic will help answer questions about how the toxin moves through and affects cells and bodies.
But beyond human health, studying extreme species like the nematodes of Mono Lake contributes to a bigger, global picture of the planet, says Lee.
"It's tremendously important that we appreciate and develop a curiosity for biodiversity," he adds, noting that the team had to receive special permits for their field work at the lake. "The next innovation for biotechnology could be out there in the wild. A new biodegradable sunscreen, for example, was discovered from extremophilic bacteria and algae. We have to protect and responsibly utilize wildlife."
Read more at Science Daily
Sep 26, 2019
Scientists watch a black hole shredding a star
A NASA satellite searching space for new planets gave astronomers an unexpected glimpse at a black hole ripping a star to shreds.
It is one of the most detailed looks yet at the phenomenon, called a tidal disruption event (or TDE), and the first for NASA's Transiting Exoplanet Survey Satellite (more commonly called TESS).
The milestone was reached with the help of a worldwide network of robotic telescopes headquartered at The Ohio State University called ASAS-SN (All-Sky Automated Survey for Supernovae). Astronomers from the Carnegie Observatories, Ohio State and others published their findings today in The Astrophysical Journal.
"We've been closely monitoring the regions of the sky where TESS is observing with our ASAS-SN telescopes, but we were very lucky with this event in that the patch of the sky where TESS is continuously observing is small, and in that this happened to be one of the brightest TDEs we've seen," said Patrick Vallely, a co-author of the study and National Science Foundation Graduate Research Fellow at Ohio State. "Due to the quick ASAS-SN discovery and the incredible TESS data, we were able to see this TDE much earlier than we've seen others -- it gives us some new insight into how TDEs form."
Tidal disruption events happen when a star gets too close to a black hole. Depending on a number of factors, including the size of the star, the size of the black hole and how close the star is to the black hole, the black hole can either absorb the star or tear it apart into a long, spaghetti-like strand.
"TESS data let us see exactly when this destructive event, named ASASSN-19bt, started to get brighter, which we've never been able to do before," said Thomas Holoien, a Carnegie Fellow at the Carnegie Observatories in Pasadena, California, who earned his PhD at Ohio State. "Because we discovered the tidal disruption quickly with the ground-based ASAS-SN, we were able to trigger multiwavelength follow-up observations in the first few days. The early data will be incredibly helpful for modeling the physics of these outbursts."
ASAS-SN was the first system to detect this event. Holoien was working at the Las Campanas Observatory in Chile on Jan. 29, 2019, when he got an alert from one of ASAS-SN's robotic telescopes in South Africa. He trained two Las Campanas telescopes on the tidal disruption event and then requested follow-up observations by other telescopes around the world.
TESS already happened to be monitoring the exact part of the sky where the ASAS-SN telescope discovered the tidal disruption event. It was not just good luck that the telescopes and satellite aligned -- after TESS launched in July 2018, the team behind ASAS-SN devoted more of the ASAS-SN telescopes' time to the parts of the sky that TESS was observing.
But it was fortunate that the tidal disruption event happened in the systems' lines of sight, said Chris Kochanek, professor of astronomy at Ohio State.
Tidal disruptions are rare, occurring once every 10,000 to 100,000 years in a galaxy the size of the Milky Way. Supernovae, by comparison, happen every 100 years or so. Scientists have observed about 40 tidal disruption events throughout history (ASAS-SN sees a few per year). The events are rare, Kochanek said, mostly because stars need to be very close to a black hole -- about the distance Earth is from our own sun -- in order to create one.
"Imagine that you are standing on top of a skyscraper downtown, and you drop a marble off the top, and you are trying to get it to go down a hole in a manhole cover," he said. "It's harder than that."
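A quick back-of-the-envelope sketch shows the survey scale those per-galaxy rates imply; taking "a few per year" as roughly three is our assumption, not a figure from the study:

```python
# How many Milky-Way-sized galaxies must be monitored to catch a few
# tidal disruption events per year, given one event per galaxy every
# 10,000 to 100,000 years?
rate_high = 1 / 10_000    # TDEs per galaxy per year (optimistic end)
rate_low = 1 / 100_000    # TDEs per galaxy per year (pessimistic end)
target = 3                # "a few per year" (assumed value)

n_min = target / rate_high   # ~30,000 galaxies at the optimistic rate
n_max = target / rate_low    # ~300,000 galaxies at the pessimistic rate
print(f"{n_min:,.0f} to {n_max:,.0f} galaxies")
```

Tens to hundreds of thousands of galaxies under continuous watch -- which is why all-sky robotic surveys like ASAS-SN, rather than targeted observations, find these events.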
And because ASAS-SN caught the tidal disruption event early, Holoien was able to train additional telescopes on it, capturing a more detailed look than might have been possible before. Astronomers could then examine the TESS data -- which, because they came from a satellite in space, were not available until a few weeks after the event -- to see whether they could spot the event in the lead-up. The TESS data showed signs of the tidal disruption starting about 10 days before it was discovered from the ground.
"The early TESS data allow us to see light very close to the black hole, much closer than we've been able to see before," Vallely said. "They also show us that ASASSN-19bt's rise in brightness was very smooth, which helps us tell that the event was a tidal disruption and not another type of outburst, like from the center of a galaxy or a supernova."
Holoien's team used UV data from NASA's Neil Gehrels Swift Observatory -- the earliest yet seen from a tidal disruption -- to determine that the temperature dropped by about 50%, from around 71,500 to 35,500 degrees Fahrenheit (about 40,000 to 20,000 Kelvin), over a few days. It's the first time such an early temperature decrease has been seen in a tidal disruption, although a few theories have predicted it, Holoien said.
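The quoted Fahrenheit figures line up with a conversion from Kelvin rather than Celsius (40,000 °C would instead be about 72,000 °F), as a quick sketch confirms:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature in Kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

print(round(kelvin_to_fahrenheit(40_000)))  # 71540, reported as "around 71,500"
print(round(kelvin_to_fahrenheit(20_000)))  # 35540, reported as "around 35,500"
```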
More typical for these kinds of events was the low level of X-ray emission seen by Swift. Scientists don't fully understand why tidal disruptions produce so much UV emission and so few X-rays.
Read more at Science Daily
Galaxy found to float in a tranquil sea of halo gas
Using one cosmic mystery to probe another, astronomers have analyzed the signal from a fast radio burst, an enigmatic blast of cosmic radio waves lasting less than a millisecond, to characterize the diffuse gas in the halo of a massive galaxy.
A vast halo of low-density gas extends far beyond the luminous part of a galaxy where the stars are concentrated. Although this hot, diffuse gas makes up more of a galaxy's mass than stars do, it is nearly impossible to see. In November 2018, astronomers detected a fast radio burst that passed through the halo of a massive galaxy on its way toward Earth, allowing them for the first time to get clues to the nature of the halo gas from an elusive radio signal.
"The signal from the fast radio burst exposed the nature of the magnetic field around the galaxy and the structure of the halo gas. The study proves a new and transformative technique for exploring the nature of galaxy halos," said J. Xavier Prochaska, professor of astronomy and astrophysics at UC Santa Cruz and lead author of a paper on the new findings published online September 26 in Science.
Astronomers still don't know what produces fast radio bursts, and only recently have they been able to trace some of these very short, very bright radio signals back to the galaxies in which they originated. The November 2018 burst (named FRB 181112) was detected and localized by the instrument that pioneered this technique, CSIRO's Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope. Follow-up observations with other telescopes identified not only its host galaxy but also a bright galaxy in front of it.
"When we overlaid the radio and optical images, we could see straight away that the fast radio burst pierced the halo of this coincident foreground galaxy and, for the first time, we had a direct way of investigating this otherwise invisible matter surrounding this galaxy," said coauthor Cherie Day at Swinburne University of Technology, Australia.
A galactic halo contains both dark matter and ordinary ("baryonic") matter, which is expected to be mostly hot ionized gas. While the luminous part of a massive galaxy might be around 30,000 light-years across, its roughly spherical halo is ten times larger. Halo gas fuels star formation as it falls in toward the center of the galaxy, while other processes (such as supernova explosions) can eject material out of the star-forming regions and into the galactic halo. One reason astronomers want to study the halo gas is to better understand these ejection processes, which can shut down star formation.
"The halo gas is a fossil record of these ejection processes, so our observations can inform theories about how matter is ejected and how magnetic fields are threaded through galaxies," Prochaska said.
Contrary to expectations, the results of the new study indicate a very low density and a feeble magnetic field in the halo of this intervening galaxy.
"This galaxy's halo is surprisingly tranquil," Prochaska said. "The radio signal was largely unperturbed by the galaxy, which is in stark contrast to what previous models predict would have happened to the burst."
The signal of FRB 181112 consisted of several pulses, each lasting less than 40 microseconds (ten thousand times shorter than the blink of an eye). The short duration of the pulses puts an upper limit on the density of the halo gas, because passage through a denser medium would lengthen the radio signals. The researchers calculated that the density of the halo gas must be less than a tenth of an atom per cubic centimeter (equivalent to several hundred atoms in a volume the size of a child's balloon).
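The balloon comparison checks out under a rough assumption about the balloon's size; the 10 cm radius below is our guess, not a figure from the study:

```python
import math

density_limit = 0.1   # upper limit on halo gas: atoms per cubic centimeter
radius_cm = 10.0      # assumed radius of a child's balloon

volume_cm3 = 4 / 3 * math.pi * radius_cm ** 3   # ~4,189 cm^3
atoms = density_limit * volume_cm3
print(round(atoms))   # ~419 atoms, i.e. "several hundred"
```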
"Like the shimmering air on a hot summer's day, the tenuous atmosphere in this massive galaxy should warp the signal of the fast radio burst. Instead we received a pulse so pristine and sharp that there is no signature of this gas at all," said coauthor Jean-Pierre Macquart, an astronomer at the International Center for Radio Astronomy Research at Curtin University, Australia.
The density constraints also limit the possibility of turbulence or clouds of cool gas within the halo ("cool" being a relative term, referring here to temperatures around 10,000 Kelvin, versus the hot halo gas at around 1 million Kelvin). "One favored model is that halos are pervaded by clouds of clumpy gas. We find no evidence for these clouds whatsoever," Prochaska said.
The FRB signal also yields information about the magnetic field in the halo, which affects the polarization of the radio waves. Analyzing the polarization as a function of frequency gives a "rotation measure" for the halo, which the researchers found to be very low. "The weak magnetic field in the halo is a billion times weaker than that of a refrigerator magnet," Prochaska said.
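To put the comparison in units: taking a refrigerator magnet at roughly 5 millitesla (a typical figure, assumed here), a field a billion times weaker is on the order of picotesla:

```python
fridge_magnet_T = 5e-3          # typical refrigerator magnet, ~5 mT (assumption)
halo_field_T = fridge_magnet_T / 1e9

print(f"{halo_field_T:.0e} T")  # ~5e-12 T, a few picotesla
```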
At this point, with results from only one galactic halo, the researchers cannot say whether the unexpectedly low density and magnetic field strength are unusual or if previous studies of galactic halos have overestimated these properties. ASKAP and other radio telescopes will use fast radio bursts to study many more galactic halos and resolve their properties.
Read more at Science Daily
How neural circuits drive hungry individuals to peak performance
Success is no accident: To reach your goal you need perseverance. But where does the motivation come from? An international team of researchers led by scientists from the Technical University of Munich (TUM) has now identified the neural circuit in the brain of fruit flies which makes them perform at their best when searching for food.
The odor of vinegar or fruit makes fruit flies walk faster. To reach the food, they run to exhaustion, yet despite their efforts they get no closer to their goal: in the set-up at the laboratory of the TUM School of Life Sciences Weihenstephan, the upper bodies of the tiny flies are fixed in place, so the flies run without getting anywhere.
With the movement of their legs, they turn a ball floating on an air cushion. The turning speed shows neurobiologist Professor Ilona C. Grunwald Kadow how much effort the fruit fly is putting into finding food.
"Our experiments show that hungry individuals keep increasing their performance -- they run up to nine meters per minute. Fruit flies which are full give up much faster," the researcher reports. "This proves that even simple organisms show stamina and perseverance -- up to now, these qualities were thought to be reserved for humans and other higher organisms."
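Scaled to body size, that pace is striking. Assuming a typical Drosophila body length of about 2.5 mm (our figure, not the study's), nine meters per minute works out to roughly 60 body lengths per second:

```python
speed_m_per_min = 9.0
body_length_m = 0.0025           # assumed fly body length, ~2.5 mm

speed_m_per_s = speed_m_per_min / 60         # 0.15 m/s
print(round(speed_m_per_s / body_length_m))  # ~60 body lengths per second
```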
A neural circuit controls perseverance
Together with Julijana Gjorgjieva, Professor for Computational Neuroscience at TUM and group leader at the Max Planck Institute for Brain Research in Frankfurt, as well as an international and interdisciplinary team of researchers, Grunwald Kadow has now identified a neural circuit in the brain of the small flies, which controls this kind of perseverance.
It is not a coincidence that the researchers investigated the motivation of fruit flies. "The brains of these flies have a million times fewer nerve cells than human brains. This makes it a lot easier to find out what an individual neuron does and how," the professor explains. "In this way, we are able to understand the principles of neural circuits which also form the basis for the function of complex brains."
The power of neurons
To identify the neural circuit which is responsible for motivation, the team used various techniques: First, a mathematical model was created which simulates the interaction of external and internal stimuli -- for example the odor of vinegar and hunger.
In the next step, the neuroscientists of TUM identified the network of interest in the brain of the fruit fly in cooperation with colleagues in the USA and Great Britain. This was achieved with the help of electron microscopy as well as in-vivo imaging and behavioral experiments.
The result: the neural circuit of interest is located in the learning and memory center of the fly brain. It is controlled by two neurotransmitters, dopamine and octopamine, the latter related to human noradrenaline. Dopamine increases the activity of the circuit, i.e., increases motivation; octopamine reduces the willingness to make an effort.
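As a toy sketch only (not the authors' published model), the described push-pull control can be caricatured as a drive term -- external stimulus times internal state -- scaled by a gain that dopamine raises and octopamine lowers:

```python
def motivation(odor, hunger, dopamine=0.0, octopamine=0.0):
    """Toy push-pull model of the circuit described above: drive is the
    product of an external stimulus (odor) and an internal state (hunger),
    scaled by a dopamine/octopamine-modulated gain."""
    drive = odor * hunger
    gain = max(0.0, 1.0 + dopamine - octopamine)
    return drive * gain

hungry = motivation(odor=0.8, hunger=1.0, dopamine=1.0)   # high drive
sated = motivation(odor=0.8, hunger=0.1, octopamine=1.0)  # drive collapses
print(hungry, sated)
```

All parameter values here are illustrative; the point is only the opposing signs of the two neuromodulators.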
Read more at Science Daily
Can excessive athletic training make your brain tired? New study says yes
You'd expect excessive athletic training to make the body tired, but can it make the brain tired too? A new study reported in the journal Current Biology on September 26 suggests that the answer is "yes."
When researchers imposed an excessive training load on triathletes, they showed a form of mental fatigue. This fatigue included reduced activity in a portion of the brain important for making decisions. The athletes also acted more impulsively, opting for immediate rewards instead of bigger ones that would take longer to achieve.
"The lateral prefrontal region that was affected by sport-training overload was exactly the same that had been shown vulnerable to excessive cognitive work in our previous studies," says corresponding author Mathias Pessiglione of Hôpital de la Pitié-Salpêtrière in Paris. "This brain region therefore appeared as the weak spot of the brain network responsible for cognitive control."
Together, the studies suggest a connection between mental and physical effort: both require cognitive control. Such control is essential in demanding athletic training, they suggest, because maintaining physical effort toward a distant goal depends on it.
"You need to control the automatic process that makes you stop when muscles or joints hurt," Pessiglione says.
The researchers, including Pessiglione and first author Bastien Blain, explain that the initial idea for the study came from the National Institute of Sport, Expertise, and Performance (INSEP) in France, which trains athletes for the Olympic games. Some athletes had suffered from "overtraining syndrome," in which their performance plummeted as they experienced an overwhelming sense of fatigue. The question was: Did this overtraining syndrome arise in part from neural fatigue in the brain -- the same kind of fatigue that also can be caused by excessive intellectual work?
To find out, Pessiglione and colleagues recruited 37 competitive male endurance athletes with an average age of 35. Participants were assigned to either continue their normal training or to increase that training by 40% per session over a three-week period. The researchers monitored their physical performance during cycling exercises performed on rest days and assessed their subjective experience of fatigue using questionnaires every two days. They also conducted behavioral testing and functional magnetic resonance imaging (fMRI) scanning experiments.
The evidence showed that physical training overload led the athletes to feel more fatigued. They also acted more impulsively in standard tests used to evaluate how they'd make economic choices. This tendency was shown as a bias in favoring immediate over delayed rewards. The brains of athletes who'd been overloaded physically also showed diminished activation of the lateral prefrontal cortex, a key region of the executive control system, as they made those economic choices.
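The impulsivity measure described here -- favoring immediate over delayed rewards -- is commonly modeled with temporal discounting. The sketch below uses a standard hyperbolic discounting form as an illustration (an assumption; the study's exact task and model are not specified here), where a larger discount rate k stands in for the fatigue-induced bias toward immediate rewards.

```python
# Illustrative sketch: hyperbolic temporal discounting. A higher discount
# rate k means a stronger bias toward immediate rewards, the kind of
# impulsivity the overtrained athletes showed. All numbers are hypothetical.

def discounted_value(amount, delay_days, k):
    """Subjective value of a delayed reward: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

def choose(immediate, delayed, delay_days, k):
    """Pick the option with the higher subjective value."""
    if immediate >= discounted_value(delayed, delay_days, k):
        return "immediate"
    return "delayed"

# A patient decision-maker (low k) waits for the bigger reward;
# a fatigued, impulsive one (high k) takes the smaller-sooner option.
print(choose(immediate=10, delayed=20, delay_days=30, k=0.01))  # delayed
print(choose(immediate=10, delayed=20, delay_days=30, k=0.10))  # immediate
```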
The findings show that, while endurance sport is generally good for your health, overdoing it can have adverse effects on your brain, the researchers say.
"Our findings draw attention to the fact that neural states matter: you don't make the same decisions when your brain is in a fatigue state," Pessiglione say.
These findings may be important not just for producing the best athletes but also for economic choice theory, which typically ignores such fluctuations in the neural machinery responsible for decision-making, the researchers say. The work suggests it may also be important to monitor fatigue levels in order to prevent bad decisions from being made in the political, judicial, or economic domains.
Sep 25, 2019
Naming of new interstellar visitor: 2I/Borisov
On 30 August 2019 the amateur astronomer Gennady Borisov, from MARGO observatory, Crimea, discovered an object with a comet-like appearance. The object has a condensed coma, and more recently a short tail has been observed. Mr. Borisov made this discovery with a 0.65-metre telescope he built himself.
After a week of observations by amateur and professional astronomers all over the world, the IAU Minor Planet Center was able to compute a preliminary orbit, which suggested this object was interstellar -- only the second such object known to have passed through the Solar System.
The orbit is now sufficiently well known, and the object is unambiguously interstellar in origin; it has received its final designation as the second interstellar object, 2I. In this case, the IAU has decided to follow the tradition of naming cometary objects after their discoverers, so the object has been named 2I/Borisov.
Of the thousands of comets discovered so far, none has an orbit as hyperbolic as that of 2I/Borisov. This conclusion is independently supported by the NASA JPL Solar System Dynamics Group. Coming just two years after the discovery of the first interstellar object 1I/'Oumuamua, this new finding suggests that such objects may be sufficiently numerous to provide a new way of investigating processes in planetary systems beyond our own.
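For background, whether an orbit is "hyperbolic" follows directly from its eccentricity e: bound elliptical orbits have e < 1, while e > 1 means the object is unbound and will escape the Solar System. A minimal sketch (the 2I/Borisov eccentricity used below, about 3.36, is the widely reported estimate and should be treated as approximate):

```python
# Classify an orbit from its eccentricity e:
# e < 1 -> bound ellipse; e = 1 -> parabola; e > 1 -> hyperbola (unbound),
# which is why 2I/Borisov's strongly hyperbolic path marks it as interstellar.

def orbit_type(e, tol=1e-9):
    if e < 1 - tol:
        return "elliptical (bound)"
    if e > 1 + tol:
        return "hyperbolic (unbound)"
    return "parabolic (marginal)"

print(orbit_type(0.967))  # Halley's Comet: elliptical (bound)
print(orbit_type(3.36))   # 2I/Borisov (approx. estimate): hyperbolic (unbound)
```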
2I/Borisov will make its closest approach to the Sun (reach its perihelion) on 7 December 2019, when it will be 2 astronomical units (AU) from the Sun and also 2 AU from Earth. It is expected to be at its brightest in the southern sky in December and January. It will then begin its outbound journey, eventually leaving the Solar System forever.
Astronomers are eagerly observing this object, which will be continuously observable for many months, a period longer than that of its predecessor, 1I/'Oumuamua. Astronomers are optimistic about their chances of studying this rare guest in great detail.
Estimates of the sizes of comets are difficult because the small cometary nucleus is embedded in the coma, but, from the observed brightness, 2I/Borisov appears to be around a few kilometres in diameter. One of the largest telescopes in the world, the 10.4m Gran Telescopio Canarias in the Canary Islands, has already obtained a spectrum of 2I/Borisov and has found it to resemble those of typical cometary nuclei.
Some parents pass on more mutations to their children than others
Everyone is a mutant but some are prone to diverge more than others, report scientists at University of Utah Health.
At birth, children typically have 70 new genetic mutations compared to their parents (out of the 6 billion letters that make up both parental copies of the DNA sequence). A new study published in eLife shows that the number varies dramatically, with some people born with twice as many mutations as others, and that the characteristic runs in families.
That difference is based largely on two influences. One is the age of a child's parents. A child born to a father who is 35 years old will likely have more mutations than a sibling born to the same father at 25.
"The number of mutations we pass on to the next generation increases with parental age," said Thomas Sasani, lead author of the study and a graduate student in human genetics at U of U Health. Previous studies have demonstrated the phenomenon, also confirmed by this study.
Another difference is that the effects of parental age on mutation rates differ considerably among families -- much more than had been previously appreciated. In one family, a child may have two additional mutations compared to a sibling born when their parents were ten years younger. Two siblings born ten years apart to a different set of parents may vary by more than 30 mutations.
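The family-to-family variation described above can be pictured as different slopes in a simple linear model of mutation count versus paternal age. All numbers below are hypothetical placeholders, chosen only to reproduce the 2-versus-30 contrast in the text:

```python
# Hypothetical illustration: expected de novo mutation count as a linear
# function of paternal age, with a family-specific slope. The study's point
# is that this slope varies widely between families.

def expected_mutations(paternal_age, baseline=20.0, slope_per_year=1.5):
    """Expected new mutations in a child; slope differs by family."""
    return baseline + slope_per_year * paternal_age

# Two siblings born ten years apart, in a steep-slope vs. a shallow-slope family:
steep = expected_mutations(35, slope_per_year=3.0) - expected_mutations(25, slope_per_year=3.0)
shallow = expected_mutations(35, slope_per_year=0.2) - expected_mutations(25, slope_per_year=0.2)
print(steep, shallow)  # ~30 extra mutations vs. ~2 over the same ten years
```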
"This shows that we as parents are not all equal in this regard," said Aaron Quinlan, PhD, senior author of the study. He is also a professor of human genetics at U of U health and associate director of the Utah Center for Genetic Discovery. "Some of us pass on more mutations than others and this is an important source of genetic novelty and genetic disease."
Impacts of new mutations depend on where they land in our DNA, and on the passage of time. On occasion the genetic changes cause serious disease, but the majority occur in parts of our genetic code that don't have obvious effects on human health.
And even though new changes make up a small fraction of the overall DNA sequence, they add up with each subsequent generation. Increasing the so-called mutation load could potentially make individuals more susceptible to illness, said Sasani. It remains to be determined whether factors that impact the mutation rate increase the likelihood for certain diseases.
Although the majority of new mutations originally arise in fathers' sperm, not all do. One in five mutations comes from mothers' eggs, and increasing age does not drive as many new mutations in moms as it does in dads. Further, it's estimated that one in ten new mutations seen in children comes from neither parent. Instead, they arise anew in the embryo soon after fertilization.
The new insights were found by performing whole genome sequencing and genetic analysis on 603 individuals from 33 three-generation families from Utah, the largest study of its kind. The families were part of the Centre d'Etude du Polymorphisme Humain (CEPH) consortium, which was central to many key investigations that shaped the modern understanding of human genetics. The large size of the Utah CEPH families, which had as many as 16 children over a span of 27 years, made them well-suited for this new investigation.
It's surprising that the Utah CEPH families have a large range in the number of mutations they accumulate, says Quinlan. That's because the families are similar in many ways. They are all of European ancestry, live within the same geographic region, and likely have similar lifestyles and environmental exposures.
True lies: How letter patterns color perceptions of truth
People today constantly encounter claims such as "Advil kills pain," "coffee prevents depression," or "Hilary promises amnesty" as brands, news outlets and social media sites vie for our attention -- yet few people take the time to investigate whether these statements are true. Researchers have now uncovered one of the subtle psychological variables that influences whether people deem a claim to be true or false: the sequence of the letters.
Based on previous literature, the researchers knew that the brain attempts to organize information in ways that follow familiar patterns and sequences. One of the most universal, well-known patterns is the alphabet, and the investigators suspected that claims with first letters conforming to the arbitrary "ABCD" sequence -- such as Andrenogel Increases Testosterone -- would be perceived as more truthful. The study is available online in the Journal of Consumer Psychology.
"We go about our lives looking for natural sequences, and when we find a match to one of these patterns, it feels right," says study author Dan King, PhD, an assistant professor at the University of Texas Rio Grande Valley. "An embedded alphabetic sequence, even if unconsciously perceived, feels like a safe haven, and our brains can make unconscious judgments that cause-and-effect statements following this pattern are true."
To test this "symbolic sequence effect," the researchers conducted an experiment in which one group of participants read 10 claims that followed the natural alphabetic sequence, such as "Befferil Eases Pain" or "Aspen Moisturizes Skin," and the control group read statements that did not conform to alphabetical order, such as "Vufferil Eases Pain" or "Vaspen Moisturizes Skin." Then both groups rated their estimation of the truthfulness of the claims. The truthfulness ratings were significantly higher for the claims that followed an alphabetical order, even if participants could not attribute the source of the feeling of truthfulness.
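The stimulus property being manipulated -- whether a claim's word initials ascend alphabetically -- is easy to state precisely. A minimal sketch of that check:

```python
# Check whether the first letters of a claim's words follow ascending
# alphabetical order, as in the study's "symbolic sequence" stimuli.

def is_alphabetical(claim):
    initials = [word[0].lower() for word in claim.split()]
    return all(a <= b for a, b in zip(initials, initials[1:]))

print(is_alphabetical("Befferil Eases Pain"))     # True  (B < E < P)
print(is_alphabetical("Vufferil Eases Pain"))     # False (V > E)
print(is_alphabetical("Aspen Moisturizes Skin"))  # True  (A < M < S)
```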
Then the researchers tested whether they could temporarily alter the brain's pattern recognition process and consequently influence an individual's perception of a claim's truthfulness. In this experiment, one group of participants watched a short video clip of the alphabet sung normally while another group saw the clip with the ABC song sung in reverse order. Later, the groups rated the truthfulness of 10 claims.
The truthfulness ratings for claims following the reversed alphabetical sequence -- such as "Uccuprin Strengthens Heart" -- were higher for participants who had heard the alphabet sung in reverse.
The finding suggests that companies may be more likely to convince consumers that a slogan or claim is true if the causal statement follows an alphabetical order, King says. The more frightening implication, though, relates to fake news. Headlines with cause-effect statements that are in alphabetical order may feel more true, even if they are not.
Humankind did not live with a high-carbon dioxide atmosphere until 1965
Humans have never before lived with the high carbon dioxide atmospheric conditions that have become the norm on Earth in the last 60 years, according to a new study that includes a Texas A&M University researcher.
Titled "Low CO2 levels of the entire Pleistocene Epoch" and published in Nature Communications today, the study shows that for the entire 2.5 million years of the Pleistocene Epoch, carbon dioxide concentrations averaged 250 parts per million. Today's levels, by comparison, are more than 410 parts per million. In 1965, Earth's atmospheric carbon dioxide concentration exceeded 320 parts per million, a high point never reached in the past 2.5 million years, the study shows.
"According to this research, from the first Homo erectus, which is currently dated to 2.1 to1.8 million years ago, until 1965, we have lived in a low-carbon dioxide environment -- concentrations were less than 320 parts per million," said Yige Zhang, a co-author of the research study and an assistant professor in the Department of Oceanography in the College of Geosciences. "So this current high-carbon dioxide environment is not only an experiment for the climate and the environment -- it's also an experiment for us, for ourselves."
Carbon dioxide is a greenhouse gas that contributes to the warming of Earth's atmosphere, and is considered a driver of global climate change, Zhang said.
"It's important to study atmospheric CO2 (carbon dioxide) concentrations in the geological past, because we know that there are already climate consequences and are going to be more climate consequences, and one way to learn about those consequences is to look into Earth's history," Zhang said. "Then we can see what kind of CO2 levels did we have, what did the climate look like, and what was the relationship between them."
Jiawei Da, Xianqiang Meng and Junfeng Ji, all of Nanjing University in China, and Gen Li of the California Institute of Technology co-authored the research.
The scientists analyzed soil carbonates from the Loess Plateau in central China to quantify ancient atmospheric carbon dioxide levels as far back as 2.5 million years ago. Climate scientists often use ice cores as the "gold standard" in physical climate records, Zhang said, but ice cores only cover the past 800,000 years.
Analyzing Paleogenic carbonates found in the ancient soil, or paleosols, from the Loess Plateau, the scientists reconstructed the Earth's carbon dioxide levels.
"The Loess Plateau is an incredible place to look at aeolian, or wind, accumulation of dust and soil," Zhang said. "The earliest identified dust on that plateau is from 22 million years ago. So, it has extremely long records. The layers of loess and paleosol there contain soil carbonates that record atmospheric carbon dioxide, if we have very careful eyes to look at them."
"Specifically, carbonates formed during soil formation generally reach carbon isotopic equilibrium with ambient soil CO2, which is a mixture of atmospheric CO2 and CO2 produced by soil respiration," said Nanjing University's Jiawei Da. "Through the application of a two-component mixing model, we can reconstruct paleo-CO2 levels using carbonates in fossil soils."
Using those materials and techniques, the researchers constructed a carbon dioxide history of the Pleistocene.
"Our reconstructions show that for the entire Pleistocene period, carbon dioxide averaged around 250 parts per million, which is the same as the last 800,000 years' values," Zhang said.
Sep 24, 2019
Tale of two climate crises gives clues to the present
Figuring out what lies ahead for our species and our planet is one of the most pressing and challenging tasks for climate scientists. While models are very useful, there is nothing quite like Earth's history to reveal details about how oceans, animals, and plants respond to and recover from a warming world.
The two most recent major global warming events are especially instructive -- and worrisome, say scientists presenting new research Wednesday at the Annual Meeting of the Geological Society of America.
Ancient analogs
The two past climate crises that are comparable to today's happened 56 and 66 million years ago. The earlier one, the Cretaceous-Paleogene boundary (KPB) mass extinction, is infamous for ending the reign of the dinosaurs. The later event, called the Paleocene-Eocene Thermal Maximum (PETM), was relatively less severe and provides clues to how the world can recover from such difficult times.
"We chose these two because they are the most recent examples of rapid climate warming and have been widely studied so we have more information about them," said Paula Mateo, a geologist at Caltech, who will be presenting the study on Wednesday.
Both ancient global warming events were, like today's, caused by the release of greenhouse gases -- a.k.a. carbon emissions -- into the atmosphere. The sources in the past were not fossil fuel burning, however, but very large, long-lived volcanic eruptions -- unlike any that have occurred during the time humans have existed.
The geologic evidence suggests that the carbon emissions that preceded the dinosaurs' demise came at an average rate of about 0.2 to 3 gigatons per year. The PETM recorded carbon emissions of less than 1.1 gigatons per year, Mateo said. Those numbers are dwarfed by humanity's emission rate of 10 gigatons per year, she added.
Dino killer?
The KPB mass extinction event is often attributed solely to the Chicxulub meteor impact in Mexico, but a growing body of evidence suggests that the massive eruption of the Deccan Traps in India also played a role. That mega-eruption flowed across India in pulse after pulse, lasting about 750,000 years. A full 280,000 years before the extinction event, the oceans had warmed 3 to 4 degrees Celsius, while warming on land reached 6 to 8 degrees C because of the eruptions. Volcanic activity accelerated during the last 25,000 years before the mass extinction, Mateo said, steadily releasing more carbon dioxide into the atmosphere. Those pulses added another 2.5 degrees C to the global temperature.
"This series of mega-pulses didn't let the ecosystems adapt or even survive," Mateo said. Fossil evidence suggests that the warming and ocean acidification stressed life on land and oceans, eventually contributing to one of the five mass extinction events in the history of the planet. Microfossils of the oceans' foraminifers, which are part of the base of the marine food chain, show signs that they were struggling leading up the end of the Cretaceous period and then 66% went extinct at the KPB, 33% survived but rapidly disappeared during the first 100,000 years after the KPB, and only one species survived in the long term. On land warming during the last 280,000 years of the Cretaceous appears to have started a decline in dinosaurs as well in early mammals, insects, and amphibians well prior to the last mega-eruptions ending with the KPB mass extinction.
Ocean-building event
The more recent PETM, for its part, was caused by the expansion of the North Atlantic Ocean basin, which involved enormous volumes of magma rising from below to become new ocean crust. That magma released large amounts of carbon dioxide, which appears to have caused moderate warming that, in turn, triggered the melting of clathrates -- frozen methane hydrate deposits in the ocean floor. The methane emissions supercharged the greenhouse effect and led to a 5-degree-C spike of warming.
That warming was hard on living things on land and at sea, but it wasn't a series of blows like those that led to the KPB. Many animals were able to adapt, or to migrate and avoid the harshest conditions. It was a single blow whose environmental consequences lasted about 200,000 years, but there wasn't a mass extinction event.
The best analog
Listed side-by-side, it's sobering to see how many of the same ecosystem effects of the KPB and PETM are now being played out in the oceans and on land in real time as a result of anthropogenic warming.
"The difference with today is that even though it's a very short pulse, the rate of change is very, very rapid," said Mateo. "It's happening so fast that the ecosystems are unable to catch up. There is no time for adaptation."
So while today's greenhouse warming is a single pulse, as in the PETM, it is happening orders of magnitude faster, which could be creating effects more like those of the KPB.
Read more at Science Daily
Evolution experiment: Specific immune response of beetles adapts to bacteria
When the immune system fends off pathogens, this can happen in a wide variety of ways. For example, the immune system's memory is able to distinguish a foreign protein with which the organism has already come into contact from an unfamiliar one, and to react with a corresponding antibody. Researchers have now investigated experimentally whether this ability of the immune system to specifically fend off pathogens can adapt in the course of evolution. To this end, they studied many successive generations of flour beetles -- because insects, too, can repel pathogens specifically to a certain degree.
After the researchers repeatedly confronted the insects and their progeny with bacteria, they observed that the beetles' immune system reacted more strongly after just a few generations. "Our study helps us to understand whether an immune system's capacity for specificity can adapt quickly to conditions of repeated confrontation with pathogens," says Prof. Joachim Kurtz from Münster University, who is heading the study.
The results might be able to help provide a better understanding of molecular processes that play a role in the innate immune memory in humans and that could perhaps be used for medical purposes. As insects are well suited for experimental evolution, the information thus acquired could usefully complement existing knowledge on the immune system of mammals. The study has been published in the journal "PNAS" (Proceedings of the National Academy of Sciences).
Background and method:
The immune system in human beings consists of two main parts -- the innate immune system and the adaptive one. The latter is the part that primarily "remembers" pathogens and can react specifically. Insects have a different immune system, but researchers have already shown that insects, too, can mount a stronger response to infections as a result of previous experience with pathogens. However, it had not yet been investigated whether this immunological specificity can adapt evolutionarily to the respective bacterial environment.
For their experiment, the evolutionary biologists collected data from more than 48,000 red flour beetles over a period of three years. They divided the beetles into different groups and exposed them, in the larval stage, to different combinations of six bacterial species -- first killed bacteria (as a "vaccination") and then living ones. In some of the groups, the researchers used the same bacterium for both exposures within one generation; in the other groups they confronted the beetles with a variety of different bacteria. Fourteen generations and three years later, the results of the experiment were in: beetles that had been exposed to the same type of bacteria for "vaccination" and infection had developed a higher specificity over the generations. This helped the beetles especially whenever they had to defend themselves against Bacillus thuringiensis, a natural insect pathogen.
The increased specificity showed up as greater activation, after "vaccination" with this natural pathogen, of certain genes that play a role in the immune system and metabolism. At the same time, the beetles' chances of surviving infection with the bacterium rose -- in contrast to beetles that had evolved towards a low specificity. "This means that for certain bacteria a high specificity can develop quickly during evolution -- probably caused by changes in the immune genes," say the lead authors, Dr. Kevin Ferro and Dr. Robert Peuß, who carried out the experiments as part of their PhDs at the Institute of Evolution and Biodiversity at Münster University. Noticeably, however, this change did not occur with all the bacteria used in the experiment. One possible explanation might be the limited capacity of insects to recognize and combat various antigens.
Relevance and prospects:
The molecular mechanisms identified in this experiment could be relevant for humans -- in so-called "trained immunity," an approach being discussed in medicine for training the memory not only of the acquired but also of the innate part of the immune system. Based on the newly acquired genetic data, the researchers want to take a more precise look at the immune memory of insects and "deactivate" the relevant genes using molecular-biological methods. In the future, the researchers also want to examine the bacteria to see whether, for example, they evolve faster when their host is prepared for them. As flour beetles are seen as a pest in food production, among other industries, the researchers' results could also help in finding new strategies to combat them.
Read more at Science Daily
Machu Picchu: Ancient Incan sanctuary intentionally built on faults
Machu Picchu, Peru
On Monday, 23 Sept. 2019, at the GSA Annual meeting in Phoenix, Rualdo Menegat, a geologist at Brazil's Federal University of Rio Grande do Sul, will present the results of a detailed geoarchaeological analysis that suggests the Incas intentionally built Machu Picchu -- as well as some of their cities -- in locations where tectonic faults meet. "Machu Picchu's location is not a coincidence," says Menegat. "It would be impossible to build such a site in the high mountains if the substrate was not fractured."
Using a combination of satellite imagery and field measurements, Menegat mapped a dense web of intersecting fractures and faults beneath the UNESCO World Heritage Site. His analysis indicates these features vary widely in scale, from tiny fractures visible in individual stones to major, 175-kilometer-long lineaments that control the orientation of some of the region's river valleys.
Menegat found that these faults and fractures occur in several sets, some of which correspond to the major fault zones responsible for uplifting the Central Andes Mountains during the past eight million years. Because some of these faults are oriented northeast-southwest and others trend northwest-southeast, they collectively create an "X" shape where they intersect beneath Machu Picchu.
Menegat's mapping suggests that the sanctuary's urban sectors and the surrounding agricultural fields, as well as individual buildings and stairs, are all oriented along the trends of these major faults. "The layout clearly reflects the fracture matrix underlying the site," says Menegat. Other ancient Incan cities, including Ollantaytambo, Pisac, and Cusco, are also located at the intersection of faults, says Menegat. "Each is precisely the expression of the main directions of the site's geological faults."
Menegat's results indicate the underlying fault-and-fracture network is as integral to Machu Picchu's construction as its legendary stonework. This mortar-free masonry features stones so perfectly fitted together that it's impossible to slide a credit card between them. As master stoneworkers, the Incas took advantage of the abundant building materials in the fault zone, says Menegat. "The intense fracturing there predisposed the rocks to breaking along these same planes of weakness, which greatly reduced the energy needed to carve them."
In addition to helping shape individual stones, the fault network at Machu Picchu likely offered the Incas other advantages, according to Menegat. Chief among these was a ready source of water. "The area's tectonic faults channeled meltwater and rainwater straight to the site," he says. Construction of the sanctuary in such a high perch also had the benefit of isolating the site from avalanches and landslides, all-too-common hazards in this alpine environment, Menegat explains.
Read more at Science Daily
Did mosasaurs do the breast stroke?
Illustration of an extinct mosasaur
Now, new research suggests that mosasaurs had yet another potent advantage: a muscular breast stroke that may have added ambush-worthy bursts of speed.
"We know that mosasaurs most likely used their tails for locomotion. Now we think that they also used their forelimbs, or their tail and forelimbs together," explains lead author Kiersten Formoso, a Ph.D. student in vertebrate paleontology at the University of Southern California. That dual swimming style, she says, could make mosasaurs unique among tetrapods (four limbed creatures), living or extinct.
Previous studies noted that mosasaurs had an unusually large pectoral girdle -- the suite of bones that support the forelimbs. But most assumed the creature's swimming was mainly driven by their long tails, something like alligators or whales. That smooth, long distance-adapted swimming style is called "cruising," as opposed to "burst" motion. "Like anything that swims or flies, the laws of fluid dynamics mean that burst versus cruising is a tradeoff," explains co-author Mike Habib, Assistant Professor of Anatomical Sciences at USC. "Not many animals are good at both."
To dive more deeply into whether mosasaurs were burst-adapted, cruise-adapted, or an unusual balance of both, Formoso and co-authors focused on the oversized pectoral girdle. They studied a fossil Plotosaurus, a type of mosasaur, at the Natural History Museum of Los Angeles County, and supplemented this with measurements of mosasaur pectoral girdles published in other studies.
They determined that the mosasaurs' unusually large and low-placed pectoral girdle supported large muscle attachments. In addition, says Habib, asymmetry in the bone structure is a telltale sign of the strong, inward pull-down motion called adduction. These analyses suggest that mosasaurs used their forelimbs to swim, breast-stroke style, adding powerful bursts of propulsion to their ability to cruise.
The team continues to model bone structure, morphology, measurements, and fluid dynamics such as drag to learn exactly how, and how fast, these sea monsters swam. Along with applications to biomechanics, and even robotics, say Formoso and Habib, the study also sheds light on how evolution and ecosystems are affected by fluid dynamics.
Read more at Science Daily
Sep 23, 2019
Is theory on Earth's climate in the last 15 million years wrong?
A key theory that attributes the climate evolution of the Earth to the breakdown of Himalayan rocks may not explain the cooling over the past 15 million years, according to a Rutgers-led study.
The study in the journal Nature Geoscience could shed more light on the causes of long-term climate change. It centers on the long-term cooling that occurred before the recent global warming tied to greenhouse gas emissions from humanity.
"The findings of our study, if substantiated, raise more questions than they answered," said senior author Yair Rosenthal, a distinguished professor in the Department of Marine and Coastal Sciences in the School of Environmental and Biological Sciences at Rutgers University-New Brunswick. "If the cooling is not due to enhanced Himalayan rock weathering, then what processes have been overlooked?"
For decades, the leading hypothesis has been that the collision of the Indian and Asian continents and uplifting of the Himalayas brought fresh rocks to the Earth's surface, making them more vulnerable to weathering that captured and stored carbon dioxide -- a key greenhouse gas. But that hypothesis remains unconfirmed.
Lead author Weimin Si, a former Rutgers doctoral student now at Brown University, and Rosenthal challenged the hypothesis by examining deep-sea sediments rich in calcium carbonate.
Over millions of years, the weathering of rocks captured carbon dioxide, and rivers carried it to the ocean as dissolved inorganic carbon, which algae use to build their calcium carbonate shells. When the algae die, their skeletons fall to the seafloor and are buried, locking carbon from the atmosphere into deep-sea sediments.
If weathering increases, the accumulation of calcium carbonate in the deep sea should increase. But after studying dozens of deep-sea sediment cores through an international ocean drilling program, Si found that calcium carbonate in shells decreased significantly over 15 million years, which suggests that rock weathering may not be responsible for the long-term cooling.
Meanwhile, the scientists -- surprisingly -- also found that algae called coccolithophores adapted to the carbon dioxide decline over 15 million years by reducing their production of calcium carbonate. This reduction apparently was not taken into account in previous studies.
Many scientists believe that ocean acidification from high carbon dioxide levels will reduce the calcium carbonate in algae, especially in the near future. The data, however, suggest the opposite occurred over the 15 million years before the current global warming spell.
Read more at Science Daily
Green tea could hold the key to reducing antibiotic resistance
Scientists at the University of Surrey have discovered that a natural antioxidant commonly found in green tea can help eliminate antibiotic resistant bacteria.
The study, published in the Journal of Medical Microbiology, found that epigallocatechin gallate (EGCG) can restore the activity of aztreonam, an antibiotic commonly used to treat infections caused by the bacterial pathogen Pseudomonas aeruginosa.
P. aeruginosa is associated with serious respiratory tract and bloodstream infections and in recent years has become resistant to many major classes of antibiotics. Currently a combination of antibiotics is used to fight P. aeruginosa.
However, these infections are becoming increasingly difficult to treat, as resistance to last-line antibiotics is being observed.
To assess the synergy of EGCG and aztreonam, researchers conducted in vitro tests to analyse how the two agents interacted with P. aeruginosa, individually and in combination. The Surrey team found that the combination of aztreonam and EGCG was significantly more effective at reducing P. aeruginosa numbers than either agent alone.
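The article doesn't say how synergy was quantified, but a standard metric for this kind of in vitro combination test is the fractional inhibitory concentration index (FICI) from a checkerboard assay. The sketch below shows the conventional calculation; all MIC values are made up for illustration and are not from the study.

```python
# Fractional inhibitory concentration index (FICI), the conventional
# measure of synergy between two antimicrobials in a checkerboard assay.
# All MIC (minimum inhibitory concentration) values below are hypothetical.

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FICI = FIC_A + FIC_B; a value <= 0.5 is conventionally read as synergy."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical MICs (mg/L): aztreonam (A) and EGCG (B), alone vs. combined.
index = fici(mic_a_alone=32.0, mic_b_alone=1024.0,
             mic_a_combo=4.0, mic_b_combo=128.0)
print(index)         # 4/32 + 128/1024 = 0.25
print(index <= 0.5)  # meets the usual synergy cutoff
```

With these invented numbers, each drug's effective concentration drops eightfold in combination, giving a FICI of 0.25, well under the 0.5 synergy threshold.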
This synergistic activity was also confirmed in vivo using Galleria mellonella (Greater Wax Moth larvae), with survival rates significantly higher in larvae treated with the combination than in those treated with EGCG or aztreonam alone. Furthermore, minimal to no toxicity was observed in human skin cells and in Galleria mellonella larvae.
Researchers believe that in P. aeruginosa, EGCG may facilitate increased uptake of aztreonam by increasing permeability in the bacteria. Another potential mechanism is EGCG's interference with a biochemical pathway linked to antibiotic susceptibility.
Lead author Dr Jonathan Betts, Senior Research Fellow in the School of Veterinary Medicine at the University of Surrey, said:
"Antimicrobial resistance (AMR) is a serious threat to global public health. Without effective antibiotics, the success of medical treatments will be compromised. We urgently need to develop novel antibiotics in the fight against AMR. Natural products such as EGCG, used in combination with currently licenced antibiotics, may be a way of improving their effectiveness and clinically useful lifespan."
Professor Roberto La Ragione, Head of the Department of Pathology and Infectious Diseases in the School of Veterinary Medicine at the University of Surrey, said:
"The World Health Organisation has listed antibiotic resistant Pseudomonas aeruginosa as a critical threat to human health. We have shown that we can successfully eliminate such threats with the use of natural products, in combination with antibiotics already in use. Further development of these alternatives to antibiotics may allow them to be used in clinical settings in the future."
From Science Daily
Soap from straw: Scientists develop eco-friendly ingredient from agricultural waste
A scientist has discovered a way of using one of the world's most abundant natural resources as a replacement for human-made chemicals in soaps and thousands of other household products.
An innovative research project, published this month and led by the University of Portsmouth, has demonstrated that bales of rice straw could be used to create a 'biosurfactant', providing an alternative non-toxic ingredient in the production of a vast variety of products that normally include synthetic, often petroleum-based, materials.
The biotechnology project set out to solve one of the planet's most pressing environmental problems, looking for a way of reducing the amount of human-made chemicals in everyday life. It has been co-supervised by the University of Portsmouth's Centre for Enzyme Innovation, working in conjunction with Amity University in India and the Indian Institute of Technology.
The study was looking for a natural replacement for chemical surfactants, a main active ingredient in the production of cleaning products, medicine, suncream, make-up and insecticides. The surfactant holds oil and water together, helping to lower the surface tension of a liquid, aiding the cleaning power and penetration of the product.
Dr Pattanathu Rahman, microbial biotechnologist from the University of Portsmouth and Director of TeeGene, worked with academics and PhD scholar Mr Sam Joy from 2015 to create a biosurfactant by brewing rice straw with enzymes. The scientists believe this environmentally friendly method results in a high-quality ingredient that manufacturing industries are crying out for.
Dr Rahman said: "Surfactants are everywhere, including detergent, fabric softener, glue, insecticides, shampoo, toothpaste, paint, laxatives and make-up. Imagine if we could make and manufacture biosurfactants in sufficient quantities to use instead of synthetic surfactants, taking the human-made chemical bonds out of these products. This research shows that with the use of agricultural waste such as rice straw, which is in plentiful supply, we are a step closer."
Scientists behind the research believe the use of biosurfactants created from rice straw or other agricultural waste could have a positive ecological effect in a number of ways:
- There is significant concern about the impact of the chemical surfactants used in household products, most of which end up in the oceans.
- Rice straw is a natural by-product of the rice harvest, with millions of tonnes produced worldwide every year.
- Farmers often burn the waste, producing harmful environmental emissions. Using it to create another product could be an efficient and beneficial recycling process.
- There could also be an economic advantage to using biosurfactants produced from agricultural waste.
Dr Rahman explains: "The levels of purity needed for biosurfactants in the industries in which they're used are extremely high. Because of this, they can be very expensive. However, the methods we have of producing them make the process much more economical and cost-efficient. It's a very exciting technology with tremendous potential for applications in a range of industries."
The study shows that biosurfactants could be a viable alternative to synthetic surfactant molecules, which are projected to reach a market value of US$2.8 billion in 2023. The considerable interest in biosurfactants in recent years is also due to their low toxicity, biodegradable nature and specificity, which would help them meet the European Surfactant Directive.
Dr Rahman says the process of producing biosurfactants calls for new attitudes to soap and cleaning products.
Read more at Science Daily
Cats are securely bonded to their people, too
Cats have a reputation for being aloof and independent. But a study of the way domestic cats respond to their caregivers suggests that their socio-cognitive abilities and the depth of their human attachments have been underestimated.
The findings reported in the journal Current Biology on September 23 show that, much like children and dogs, pet cats form secure and insecure bonds with their human caretakers. The findings suggest that this bonding ability across species must be explained by traits that aren't specific to canines, the researchers say.
"Like dogs, cats display social flexibility in regard to their attachments with humans," said Kristyn Vitale of Oregon State University. "The majority of cats are securely attached to their owner and use them as a source of security in a novel environment."
One revealing way to study human attachment behavior is to observe an infant's response to a reunion with their caregiver following a brief absence in a novel environment. When a caregiver returns, secure infants quickly return to relaxed exploration while insecure individuals engage in excessive clinging or avoidance behavior.
Similar tests had been run before with primates and dogs, so Vitale and her colleagues decided to run the same test, only this time with cats.
During the test, an adult cat or kitten spent two minutes in a novel room with their caregiver followed by two minutes alone. Then, they had a two-minute reunion. The cats' responses to seeing their owners again were classified into attachment styles.
The results show that cats bond in a way that's surprisingly similar to infants. In humans, 65 percent of infants are securely attached to their caregiver.
"Domestic cats mirrored this very closely," Vitale says. In fact, the researchers classified about 65 percent of both cats and kittens as securely bonded to their people.
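The 65 percent figure is a sample proportion, and the uncertainty around it depends on the number of animals tested, which this summary does not give. A minimal sketch of how such an estimate is usually bounded, using a Wilson score interval and an assumed, hypothetical sample of 100 animals:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical sample: 65 of 100 animals classified as securely attached.
lo, hi = wilson_ci(65, 100)
print(f"65% secure, 95% CI roughly ({lo:.2f}, {hi:.2f})")
```

With a smaller real sample the interval would be correspondingly wider.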
The findings show that cats' human attachments are stable and present in adulthood. This social flexibility may have helped facilitate the success of the species in human homes, Vitale says.
The researchers are now exploring the importance of this work in relation to the thousands of kittens and cats that wind up in animal shelters. "We're currently looking at several aspects of cat attachment behavior, including whether socialization and fostering opportunities impact attachment security in shelter cats," Vitale said.
From Science Daily
Sep 22, 2019
Atlantic Ocean may get a jump-start from the other side of the world
Think of it as ocean-to-ocean altruism in the age of climate change.
A new study, from Shineng Hu of the Scripps Institution of Oceanography at the University of California-San Diego and Alexey Fedorov of Yale University, appears Sept. 16 in the journal Nature Climate Change. It is the latest in a growing body of research that explores how global warming may alter global climate components such as the Atlantic meridional overturning circulation (AMOC).
AMOC is one of the planet's largest water circulation systems. It operates like a liquid escalator, delivering warm water to the North Atlantic via an upper limb and sending colder water south via a deeper limb.
Although AMOC has been stable for thousands of years, data from the past 15 years, as well as computer model projections, have given some scientists cause for concern. AMOC has shown signs of slowing during that period, but whether this is a result of global warming or merely a short-term anomaly related to natural ocean variability is not known.
"There is no consensus yet," Fedorov said, "but I think the issue of AMOC stability should not be ignored. The mere possibility that the AMOC could collapse should be a strong reason for concern in an era when human activity is forcing significant changes to the Earth's systems.
"We know that the last time AMOC weakened substantially was 15,000 to 17,000 years ago, and it had global impacts," Fedorov added. "We would be talking about harsh winters in Europe, with more storms or a drier Sahel in Africa due to the downward shift of the tropical rain belt, for example."
Much of Fedorov and Hu's work focuses on specific climate mechanisms and features that may be shifting due to global warming. Using a combination of observational data and sophisticated computer modeling, they plot out what effects such shifts might have over time. For example, Fedorov has looked previously at the role melting Arctic sea ice might have on AMOC.
For the new study, they looked at warming in the Indian Ocean.
"The Indian Ocean is one of the fingerprints of global warming," said Hu, who is first author of the new work. "Warming of the Indian Ocean is considered one of the most robust aspects of global warming."
The researchers said their modeling indicates a series of cascading effects that stretch from the Indian Ocean all the way to the Atlantic: as the Indian Ocean warms faster and faster, it generates additional precipitation. This, in turn, draws more air from other parts of the world, including the Atlantic, to the Indian Ocean.
With so much precipitation in the Indian Ocean, there will be less precipitation in the Atlantic Ocean, the researchers said. Less precipitation will lead to higher salinity in the waters of the tropical portion of the Atlantic -- because there won't be as much rainwater to dilute it. This saltier water in the Atlantic, as it comes north via AMOC, will get cold much quicker than usual and sink faster.
"This would act as a jump-start for AMOC, intensifying the circulation," Fedorov said. "On the other hand, we don't know how long this enhanced Indian Ocean warming will continue. If other tropical oceans' warming, especially the Pacific, catches up with the Indian Ocean, the advantage for AMOC will stop."
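The causal chain described above ends with saltier water sinking faster because salinity raises seawater density. A minimal sketch of that relationship, using a linearized equation of state with textbook coefficient values rather than parameters from the study:

```python
# Linearized equation of state for seawater density:
#   rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
# alpha: thermal expansion coefficient, beta: haline contraction
# coefficient (approximate textbook values, for illustration).
RHO0 = 1027.0    # kg/m^3, reference density
ALPHA = 2.0e-4   # 1/K
BETA = 7.6e-4    # 1/(g/kg)
T0, S0 = 10.0, 35.0  # reference temperature (C) and salinity (g/kg)

def density(temp_c, salinity):
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

# At the same temperature, 1 g/kg saltier water is measurably denser,
# so it sinks more readily -- the "jump-start" effect described above.
fresh = density(10.0, 35.0)
salty = density(10.0, 36.0)
print(f"density gain from +1 g/kg salinity: {salty - fresh:.3f} kg/m^3")
```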
Read more at Science Daily
US and Canada have lost more than 1 in 4 birds in the past 50 years
"Multiple, independent lines of evidence show a massive reduction in the abundance of birds," said Ken Rosenberg, the study's lead author and a senior scientist at the Cornell Lab of Ornithology and American Bird Conservancy. "We expected to see continuing declines of threatened species. But for the first time, the results also showed pervasive losses among common birds across all habitats, including backyard birds."
The study notes that birds are indicators of environmental health, signaling that natural systems across the U.S. and Canada are now being so severely impacted by human activities that they no longer support the same robust wildlife populations.
The findings showed that of nearly 3 billion birds lost, 90 percent belong to 12 bird families, including sparrows, warblers, finches, and swallows -- common, widespread species that play influential roles in food webs and ecosystem functioning, from seed dispersal to pest control.
Among the steep declines noted:
- Grassland birds are especially hard hit, with a 53 percent reduction in population -- more than 720 million birds -- since 1970.
- Shorebirds, most of which frequent sensitive coastal habitats, were already at dangerously low numbers and have lost more than one-third of their population.
- The volume of spring migration, measured by radar in the night skies, has dropped by 14 percent in just the past decade.
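The bulleted figures can be cross-checked with simple arithmetic: the 53 percent/720 million pair implies the 1970 baseline population of grassland birds, and the decade-long radar drop converts to a compound annual rate. A quick sketch using only the numbers quoted above:

```python
# Grassland birds: a 53% reduction corresponds to ~720 million birds
# lost, so the implied 1970 baseline is loss / fraction.
loss, fraction = 720e6, 0.53
baseline = loss / fraction
print(f"implied 1970 grassland population: ~{baseline / 1e9:.2f} billion")

# Radar-measured migration volume: a 14% drop over 10 years corresponds
# to a compound annual decline of 1 - (1 - 0.14)**(1/10).
annual = 1 - (1 - 0.14) ** (1 / 10)
print(f"equivalent annual decline in migration volume: {annual * 100:.2f}%")
```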
"These data are consistent with what we're seeing elsewhere with other taxa showing massive declines, including insects and amphibians," said coauthor Peter Marra, senior scientist emeritus and former head of the Smithsonian Migratory Bird Center and now director of the Georgetown Environment Initiative at Georgetown University. "It's imperative to address immediate and ongoing threats, both because the domino effects can lead to the decay of ecosystems that humans depend on for our own health and livelihoods -- and because people all over the world cherish birds in their own right. Can you imagine a world without birdsong?"
Evidence for the declines emerged from detection of migratory birds in the air by 143 NEXRAD weather radar stations across the continent over a period spanning more than 10 years, as well as from nearly 50 years of data collected through multiple monitoring efforts on the ground.
"Citizen-science participants contributed critical scientific data to show the international scale of losses of birds," said coauthor John Sauer of the U.S. Geological Survey (USGS). "Our results also provide insights into actions we can take to reverse the declines." The analysis included citizen-science data from the North American Breeding Bird Survey coordinated by the USGS and the Canadian Wildlife Service -- the main sources of long-term, large-scale population data for North American birds -- the Audubon Christmas Bird Count, and Manomet's International Shorebird Survey.
Although the study did not analyze the causes of declines, it noted that the steep drop in North American birds parallels the losses of birds elsewhere in the world, suggesting multiple interacting causes that reduce breeding success and increase mortality. It noted that the largest factor driving these declines is likely the widespread loss and degradation of habitat, especially due to agricultural intensification and urbanization.
Other studies have documented mortality from predation by free-roaming domestic cats; collisions with glass, buildings, and other structures; and pervasive use of pesticides associated with widespread declines in insects, an essential food source for birds. Climate change is expected to compound these challenges by altering habitats and threatening plant communities that birds need to survive. More research is needed to pinpoint primary causes for declines in individual species.
"The story is not over," said coauthor Michael Parr, president of American Bird Conservancy. "There are so many ways to help save birds. Some require policy decisions such as strengthening the Migratory Bird Treaty Act. We can also work to ban harmful pesticides and properly fund effective bird conservation programs. Each of us can make a difference with everyday actions that together can save the lives of millions of birds -- actions like making windows safer for birds, keeping cats indoors, and protecting habitat."
The study also documents a few promising rebounds resulting from galvanized human efforts. Waterfowl (ducks, geese, and swans) have made a remarkable recovery over the past 50 years, made possible by investments in conservation by hunters and billions of dollars of government funding for wetland protection and restoration. Raptors such as the Bald Eagle have also made spectacular comebacks since the 1970s, after the harmful pesticide DDT was banned and recovery efforts through endangered species legislation in the U.S. and Canada provided critical protection.
"It's a wake-up call that we've lost more than a quarter of our birds in the U.S. and Canada," said coauthor Adam Smith from Environment and Climate Change Canada. "But the crisis reaches far beyond our individual borders. Many of the birds that breed in Canadian backyards migrate through or spend the winter in the U.S. and places farther south -- from Mexico and the Caribbean to Central and South America. What our birds need now is an historic, hemispheric effort that unites people and organizations with one common goal: bringing our birds back."
Organizations Behind the Study
American Bird Conservancy (ABC) is a nonprofit organization dedicated to conserving birds and their habitats throughout the Americas. With an emphasis on achieving results and working in partnership, we take on the greatest problems facing birds today, innovating and building on rapid advancements in science to halt extinctions, protect habitats, eliminate threats, and build capacity for bird conservation.
Bird Conservancy of the Rockies (Bird Conservancy) is a Colorado-based nonprofit that works to conserve birds and their habitats through an integrated approach of science, education, and land stewardship. Our work extends from the Rockies to the Great Plains, Mexico, and beyond. Together, we are improving native bird populations, the land, and the lives of people. Bird Conservancy's vision is a future where birds are forever abundant, contributing to healthy landscapes and inspiring human curiosity and love of nature.
The Cornell Lab of Ornithology is a nonprofit member-supported organization dedicated to interpreting and conserving the earth's biological diversity through research, education, and citizen science focused on birds.
Environment and Climate Change Canada is Canada's lead federal department for a wide range of environmental issues. It informs Canadians about protecting and conserving our natural heritage, and ensuring a clean, safe, and sustainable environment for present and future generations.
Advancing Georgetown's commitment to the environment, sustainability, and equitability, the Georgetown Environment Initiative brings together students, faculty, and staff from across disciplines -- from the natural sciences, social sciences, humanities, public policy, law, medicine, and business -- to contribute to global efforts to deepen understanding of our world and to transform the Earth's stewardship.
Read more at Science Daily