A new UCLA study in zebrafish identified the process by which air pollution can damage brain cells, potentially contributing to Parkinson's disease.
Published in the peer-reviewed journal Toxicological Sciences, the findings show that chemicals in diesel exhaust can trigger the toxic buildup of a protein in the brain called alpha-synuclein, which is commonly seen in people with the disease.
Previous studies have revealed that people living in areas with heightened levels of traffic-related air pollution tend to have higher rates of Parkinson's. To understand what the pollutants do to the brain, Dr. Jeff Bronstein, a professor of neurology and director of the UCLA Movement Disorders Program, tested the effect of diesel exhaust on zebrafish in the lab.
"It's really important to be able to demonstrate whether air pollution is actually the thing that's causing the effect or whether it's something else in urban environments," Bronstein said.
Testing the chemicals on zebrafish, he said, lets researchers tease out whether air pollution components affect brain cells in a way that could increase the risk of Parkinson's. The freshwater fish works well for studying molecular changes in the brain because its neurons interact much as human neurons do. In addition, the fish are transparent, allowing scientists to easily observe and measure biological processes without killing the animals.
"Using zebrafish allowed us to see what was going on inside their brains at various time-points during the study," said Lisa Barnhill, a UCLA postdoctoral fellow and the study's first author.
Barnhill added certain chemicals found in diesel exhaust to the water in which the zebrafish were kept. These chemicals caused a change in the animals' behavior, and the researchers confirmed that neurons were dying off in the exposed fish.
Next, they investigated the activity in several pathways in the brain known to be related to Parkinson's disease to see precisely how the pollutant particles were contributing to cell death.
In humans, Parkinson's disease is associated with the toxic accumulation of alpha-synuclein proteins in the brain. One way these proteins can build up is through the disruption of autophagy -- the process of breaking down old or damaged proteins. A healthy brain continuously makes and disposes of the proteins it needs for communication between neurons, but when this disposal process stops working, the cells continue to make new proteins and the old ones never get cleared away.
In Parkinson's, alpha-synuclein proteins that would normally be disposed of pile up in toxic clumps in and around neurons, eventually killing them and interfering with the proper functioning of the brain. This can result in various symptoms, such as tremors and muscle rigidity.
Before exposing the zebrafish to diesel particles, the researchers examined the fish's neurons for the tell-tale pouches that carry away old proteins, including alpha-synuclein, as part of the autophagy disposal operation, and found that the process was working properly.
"We can actually watch them move along, and appear and disappear," Bronstein said of the pouches.
After diesel exposure, however, they saw far fewer of the garbage-toting pouches than normal. To confirm that this was the reason brain cells were dying, they treated the fish with a drug that boosts the garbage-disposal process and found that it did save the cells from dying after diesel exposure.
To confirm that diesel could have the same effect on human neurons, the researchers replicated the experiment using cultured human cells. Exposure to diesel exhaust had a similar effect on those cells.
Read more at Science Daily
May 23, 2020
A clue as to why it's so hard to wake up on a cold winter's morning
Winter may be behind us, but do you remember the challenge of waking up on those cold, dark days? Temperature affects the behavior of nearly all living creatures, but there is still much to learn about the link between sensory neurons and neurons controlling the sleep-wake cycle.
Northwestern University neurobiologists have uncovered a clue to what's behind this behavior. In a study of the fruit fly, the researchers have identified a "thermometer" circuit that relays information about external cold temperature from the fly antenna to the higher brain. They show how, through this circuit, seasonally cold and dark conditions can inhibit neurons within the fly brain that promote activity and wakefulness, particularly in the morning.
"This helps explains why -- for both flies and humans -- it is so hard to wake up in the morning in winter," said Marco Gallio, associate professor of neurobiology in the Weinberg College of Arts and Sciences. "By studying behaviors in a fruit fly, we can better understand how and why temperature is so critical to regulating sleep."
The study, led by Gallio and conducted in Drosophila melanogaster, was published today (May 21) in the journal Current Biology.
The paper describes for the first time "absolute cold" receptors residing in the fly antenna, which respond only to temperatures below the fly's "comfort zone" of approximately 77 degrees Fahrenheit. Having identified those neurons, the researchers followed them all the way to their targets within the brain. They found the main recipients of this information are a small group of brain neurons that are part of a larger network that controls rhythms of activity and sleep. When the cold circuit they discovered is active, the target cells, which normally are activated by morning light, are shut down.
Drosophila is a classic model system for circadian biology, the area in which researchers study the mechanisms controlling our 24-hour cycle of rest and activity. The focus of much current work is on how changes in external cues such as light and temperature impact rhythms of activity and sleep and how the cues reach the specific brain circuits that control these responses.
While detection of environmental temperature is critical for small "cold-blooded" fruit flies, humans are still creatures of comfort and are continually seeking ideal temperatures. Part of the reason humans seek optimal temperatures is that core and brain temperatures are intimately tied to the induction and maintenance of sleep. Seasonal changes in daylight and temperature are also tied to changes in sleep.
"Temperature sensing is one of the most fundamental sensory modalities," said Gallio, whose group is one of only a few in the world that is systematically studying temperature sensing in fruit flies. "The principles we are finding in the fly brain -- the logic and organization -- may be the same all the way to humans. Whether fly or human, the sensory systems have to solve the same problems, so they often do it in the same ways."
Gallio is the corresponding author of the paper. Michael H. Alpert, a postdoctoral fellow in Gallio's lab, and Dominic D. Frank, a former Ph.D. student in Gallio's lab, are the paper's co-first authors.
"The ramifications of impaired sleep are numerous -- fatigue, reduced concentration, poor learning and alteration of a myriad of health parameters -- yet we still do not fully understand how sleep is produced and regulated within the brain and how changes in external conditions may impact sleep drive and quality," Alpert said.
The study, a collaborative effort many years in the making, was performed in the Gallio lab by scientists at different stages of their careers, from undergraduate students to the principal investigator.
"It is crucial to study the brain in action," Frank said. "Our findings demonstrate the importance of functional studies for understanding how the brain governs behavior."
Read more at Science Daily
May 21, 2020
Mysterious glowing coral reefs are fighting to recover
A new study by the University of Southampton has revealed why some corals exhibit a dazzling colourful display, instead of turning white, when they suffer 'coral bleaching' -- a condition which can devastate reefs and is caused by ocean warming. The scientists behind the research think this phenomenon is a sign that corals are fighting to survive.
Many coral animals live in a fragile, mutually beneficial relationship, a 'symbiosis' with tiny algae embedded in their cells. The algae gain shelter, carbon dioxide and nutrients, while the corals receive photosynthetic products to fulfil their energy needs. If temperatures rise just 1°C above the usual summer maximum, this symbiosis breaks down; the algae are lost, the coral's white limestone skeleton shines through its transparent tissue and a damaging process known as 'coral bleaching' occurs.
This condition can be fatal to the coral. Once its live tissue is gone, the skeleton is exposed to the eroding forces of the environment. Within a few years, an entire coral reef can break down and much of the biodiversity that depends on its complex structure is lost -- a scenario which currently threatens the future of reefs around the world.
However, some bleaching corals undergo a transformation that until now has been mysterious -- emitting a range of bright neon colours. Why this happens has now been explained by a team of scientists from the University of Southampton's Coral Reef Laboratory, who have published their detailed insights in the journal Current Biology.
The researchers conducted a series of controlled laboratory experiments at the coral aquarium facility of the University of Southampton. They found that during colourful bleaching events, corals produce what is effectively a sunscreen layer of their own, showing itself as a colourful display. Furthermore, it's thought this process encourages the coral symbionts to return.
Professor Jörg Wiedenmann, head of the University of Southampton's Coral Reef Laboratory, explains: "Our research shows colourful bleaching involves a self-regulating mechanism, a so-called optical feedback loop, which involves both partners of the symbiosis. In healthy corals, much of the sunlight is taken up by the photosynthetic pigments of the algal symbionts. When corals lose their symbionts, the excess light travels back and forth inside the animal tissue -- reflected by the white coral skeleton. This increased internal light level is very stressful for the symbionts and may delay or even prevent their return after conditions return to normal.
"However, if the coral cells can still carry out at least some of their normal functions, despite the environmental stress that caused bleaching, the increased internal light levels will boost the production of colourful, photoprotective pigments. The resulting sunscreen layer will subsequently promote the return of the symbionts. As the recovering algal population starts taking up the light for their photosynthesis again, the light levels inside the coral will drop and the coral cells will lower the production of the colourful pigments to their normal level."
The researchers believe corals which undergo this process are likely to have experienced episodes of mild or brief ocean-warming or disturbances in their nutrient environment -- rather than extreme events.
Dr. Cecilia D'Angelo, Lecturer of Molecular Coral Biology at Southampton, comments: "Bleaching is not always a death sentence for corals; the coral animal can still be alive. If the stress event is mild enough, corals can re-establish the symbiosis with their algal partner. Unfortunately, recent episodes of global bleaching caused by unusually warm water have resulted in high coral mortality, leaving the world's coral reefs struggling for survival."
Dr. Elena Bollati, Researcher at the National University of Singapore, who studied this subject during her PhD training at the University of Southampton, adds: "We reconstructed the temperature history of known colourful bleaching events around the globe using satellite imagery. These data are in excellent agreement with the conclusions of our controlled laboratory experiments, suggesting that colourful bleaching occurs in association with brief or mild episodes of heat stress."
Read more at Science Daily
How cosmic rays may have shaped life
Before there were animals, bacteria or even DNA on Earth, self-replicating molecules were slowly evolving their way from simple matter to life beneath a constant shower of energetic particles from space.
In a new paper, a Stanford professor and a former post-doctoral scholar speculate that this interaction between ancient proto-organisms and cosmic rays may be responsible for a crucial structural preference, called chirality, in biological molecules. If their idea is correct, it suggests that all life throughout the universe could share the same chiral preference.
Chirality, also known as handedness, is the existence of mirror-image versions of molecules. Like the left and right hand, two chiral forms of a single molecule reflect each other in shape but don't line up if stacked. In every major biomolecule -- amino acids, DNA, RNA -- life only uses one form of molecular handedness. If the mirror version of a molecule is substituted for the regular version within a biological system, the system will often malfunction or stop functioning entirely. In the case of DNA, a single wrong-handed sugar would disrupt the stable helical structure of the molecule.
Louis Pasteur first discovered this biological homochirality in 1848. Since then, scientists have debated whether the handedness of life was driven by random chance or some unknown deterministic influence. Pasteur hypothesized that, if life is asymmetric, then it may be due to an asymmetry in the fundamental interactions of physics that exist throughout the cosmos.
"We propose that the biological handedness we witness now on Earth is due to evolution amidst magnetically polarized radiation, where a tiny difference in the mutation rate may have promoted the evolution of DNA-based life, rather than its mirror image," said Noémie Globus lead author of the paper and a former Koret Fellow at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC).
In their paper, published on May 20 in Astrophysical Journal Letters, the researchers detail their argument in favor of cosmic rays as the origin of homochirality. They also discuss potential experiments to test their hypothesis.
Magnetic polarization from space
Cosmic rays are an abundant form of high-energy radiation that originate from various sources throughout the universe, including stars and distant galaxies. After hitting the Earth's atmosphere, cosmic rays eventually degrade into fundamental particles. At ground level, most of this radiation arrives as particles known as muons.
Muons are unstable particles, existing for a mere 2 millionths of a second, but because they travel near the speed of light, they have been detected more than 700 meters below Earth's surface. They are also magnetically polarized, meaning, on average, muons all share the same magnetic orientation. When muons finally decay, they produce electrons with the same magnetic polarization. The researchers believe that the muon's penetrative ability allows it and its daughter electrons to potentially affect chiral molecules on Earth and everywhere else in the universe.
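The distances involved are easy to sanity-check: a particle living about two millionths of a second could cover only a few hundred meters even at light speed, and relativistic time dilation multiplies that range by the muon's Lorentz factor. A back-of-the-envelope calculation follows; the Lorentz factor used is an assumed, illustrative value for a cosmic-ray muon, not a figure from the paper.

```python
# Sanity check: muon range with and without relativistic time dilation.
C = 299_792_458     # speed of light, m/s
TAU = 2.2e-6        # muon mean lifetime at rest (~2 millionths of a second)

naive_range = C * TAU                 # ignoring relativity
print(f"without time dilation: {naive_range:.0f} m")          # ~660 m

gamma = 20          # ASSUMED Lorentz factor, illustrative for a cosmic-ray muon
dilated_range = gamma * C * TAU       # lifetime stretched by gamma
print(f"with gamma = {gamma}: {dilated_range / 1000:.1f} km") # ~13 km
```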
"We are irradiated all the time by cosmic rays," explained Globus, who is currently a post-doctoral researcher at New York University and the Simons Foundation's Flatiron Institute. "Their effects are small but constant in every place on the planet where life could evolve, and the magnetic polarization of the muons and electrons is always the same. And even on other planets, cosmic rays would have the same effects."
The researchers' hypothesis is that, at the beginning of life on Earth, this constant and consistent radiation affected the evolution of the two mirror life-forms in different ways, helping one ultimately prevail over the other. These tiny differences in mutation rate would have been most significant when life was beginning and the molecules involved were very simple and more fragile. Under these circumstances, the small but persistent chiral influence from cosmic rays could have, over billions of generations of evolution, produced the single biological handedness we see today.
"This is a little bit like a roulette wheel in Vegas, where you might engineer a slight preference for the red pockets, rather than the black pockets," said Roger Blandford, the Luke Blossom Professor in the School of Humanities and Sciences at Stanford and an author on the paper. "Play a few games, you would never notice. But if you play with this roulette wheel for many years, those who bet habitually on red will make money and those who bet on black will lose and go away."
Ready to be surprised
Globus and Blandford suggest experiments that could help prove or disprove their cosmic ray hypothesis. For example, they would like to test how bacteria respond to radiation with different magnetic polarization.
"Experiments like this have never been performed and I am excited to see what they teach us. Surprises inevitably come from further work on interdisciplinary topics," said Globus.
The researchers also look forward to organic samples from comets, asteroids or Mars to see if they too exhibit a chiral bias.
"This idea connects fundamental physics and the origin of life," said Blandford, who is also Stanford and SLAC professor of physics and particle physics and former director of KIPAC. "Regardless of whether or not it's correct, bridging these very different fields is exciting and a successful experiment should be interesting."
Read more at Science Daily
Adding a blend of spices to a meal may help lower inflammation
Adding an array of spices to your meal is a surefire way to make it tastier, and new Penn State research suggests it may boost the meal's health benefits as well.
In a randomized, controlled feeding study, the researchers found that when participants ate a meal high in fat and carbohydrates with six grams of a spice blend added, the participants had lower inflammation markers compared to when they ate a meal with less or no spices.
"If spices are palatable to you, they might be a way to make a high-fat or high-carb meal more healthful," said Connie Rogers, associate professor of nutritional sciences. "We can't say from this study if it was one spice in particular, but this specific blend seemed to be beneficial."
The researchers used a blend of basil, bay leaf, black pepper, cinnamon, coriander, cumin, ginger, oregano, parsley, red pepper, rosemary, thyme and turmeric for the study, which was recently published in the Journal of Nutrition.
According to Rogers, previous research has linked a variety of different spices, like ginger and turmeric, with anti-inflammatory properties. Additionally, chronic inflammation has previously been associated with poor health outcomes like cancer and cardiovascular disease, as well as overweight and obesity, which affect approximately 72 percent of the U.S. population.
In more recent years, researchers have found that inflammation can spike after a person eats a meal high in fat or sugar. While it is not clear whether these short bursts -- called acute inflammation -- can cause chronic inflammation, Rogers said it's suspected they play a role, especially in people with overweight or obesity.
"Ultimately the gold standard would be to get people eating more healthfully and to lose weight and exercise, but those behavioral changes are difficult and take time," Rogers said. "So in the interim, we wanted to explore whether a combination of spices that people are already familiar with and could fit in a single meal could have a positive effect."
For the study, the researchers recruited 12 men between the ages of 40 and 65, with overweight or obesity, and at least one risk factor for cardiovascular disease. Rogers said the sample was chosen because people in these demographics tend to be at a higher risk for developing poorer health outcomes.
In random order, each participant ate three versions of a meal high in saturated fat and carbohydrates on three separate days: one with no spices, one with two grams of the spice blend, and one with six grams of the spice blend. The researchers drew blood samples before and then after each meal hourly for four hours to measure inflammatory markers.
"Additionally, we cultured the white blood cells and stimulated them to get the cells to respond to an inflammatory stimulus, similar to what would happen while your body is fighting an infection," Rogers said. "We think that's important because it's representative of what would happen in the body. Cells would encounter a pathogen and produce inflammatory cytokines."
After analyzing the data, the researchers found that inflammatory cytokines were reduced following the meal containing six grams of spices compared to the meal containing two grams of spices or no spices. Rogers said six grams roughly translates to between one teaspoon and one tablespoon, depending on how the spices are dehydrated.
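A standard way to summarize post-meal time courses like these is the area under the concentration curve. The sketch below applies the trapezoid rule to synthetic placeholder numbers (not the study's data) for a hypothetical cytokine such as IL-6:

```python
# Toy sketch: summarizing a post-meal inflammatory time course as the area
# under the concentration curve (trapezoid rule). All concentrations are
# synthetic placeholders, not measurements from the study.
def auc(times_h, values):
    """Trapezoidal area under the curve."""
    return sum((t1 - t0) * (v0 + v1) / 2
               for t0, t1, v0, v1 in zip(times_h, times_h[1:],
                                         values, values[1:]))

times = [0, 1, 2, 3, 4]                    # hours after the meal
il6_no_spice = [2.0, 3.5, 4.0, 3.6, 3.0]   # pg/mL, illustrative
il6_six_grams = [2.0, 2.8, 3.1, 2.9, 2.5]  # pg/mL, illustrative

print(f"AUC, no spices: {auc(times, il6_no_spice):.1f} pg*h/mL")
print(f"AUC, 6 g blend: {auc(times, il6_six_grams):.1f} pg*h/mL")
```

A smaller area for the spiced meal would correspond to the blunted inflammatory response the researchers report.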
While the researchers can't be sure which spice or spices are contributing to the effect, or the precise mechanism by which the effect is created, Rogers said the results suggest that the spices have anti-inflammatory properties that help offset inflammation caused by the high-carb and high-fat meal.
Additionally, Rogers said that a second study using the same subjects, conducted by Penn State researchers Penny Kris-Etherton and Kristina Petersen, found that six grams of spices resulted in a smaller post-meal reduction of "flow-mediated dilation" in the blood vessels -- a measure of blood vessel flexibility and a marker of blood vessel health.
Read more at Science Daily
Every heart dances to a different tune
Play the same piece of music to two people, and their hearts can respond very differently. That's the conclusion of a novel study presented today on EHRA Essentials 4 You, a scientific platform of the European Society of Cardiology (ESC).
This pioneering research revealed how music triggers individual effects on the heart, a vital first step to developing personalised music prescriptions for common ailments or to help people stay alert or relaxed.
"We used precise methods to record the heart's response to music and found that what is calming for one person can be arousing for another," said Professor Elaine Chew of the French National Centre for Scientific Research (CNRS).1
Previous studies investigating physiological responses to music have measured changes in heart rate after listening to different recordings simply categorised as 'sad', 'happy', 'calm', or 'violent'.
This small study took a more precise approach, featuring several unique aspects. Three patients with mild heart failure requiring a pacemaker were invited to a live classical piano concert. Because they all had pacemakers, their heart rate could be kept constant during the performance. The researchers measured the electrical activity of the heart directly from the pacemaker leads before and after 24 points in the score (and performance) where there were stark changes in tempo, volume, or rhythm.
Specifically, they measured the time it takes the heart to recover after a heartbeat. "Heart rate affects this recovery time, so by keeping that constant we could assess electrical changes in the heart based on emotional response to the music," said Professor Chew.
"We are interested in the heart's recovery time (rather than heart rate) because it is linked to the heart's electrical stability and susceptibility to dangerous heart rhythm disorders," explained the project's medical lead Professor Pier Lambiase of University College London. "In some people, life-threatening heart rhythm disorders can be triggered by stress. Using music we can study, in a low risk way, how stress (or mild tension induced by music) alters this recovery period."
The researchers found that the change in the heart's recovery time differed significantly from person to person at the same junctures in the music. In some listeners, recovery time shortened by as much as 5 milliseconds, indicating increased stress or arousal; in others, it lengthened by as much as 5 milliseconds, indicating greater relaxation.
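In outline, this is a per-listener paired comparison: recovery times just before and just after each of the 24 transitions, averaged into one mean change per person. The toy sketch below illustrates that shape of analysis; the 300 ms baseline and all values are synthetic placeholders, not study data.

```python
# Toy sketch of the per-listener comparison: recovery times (ms) just before
# and just after each of the 24 musical transitions, averaged into one mean
# change per person. All numbers are synthetic placeholders.
import random
from statistics import mean

random.seed(1)
N_TRANSITIONS = 24

def synthetic_listener(mean_shift_ms):
    """Generate before/after recovery times around each transition."""
    before = [random.gauss(300.0, 1.0) for _ in range(N_TRANSITIONS)]
    after = [b + random.gauss(mean_shift_ms, 1.0) for b in before]
    return before, after

for label, shift in [("listener A (aroused)", -5.0),
                     ("listener B (relaxed)", +5.0)]:
    before, after = synthetic_listener(shift)
    changes = [a - b for a, b in zip(after, before)]
    print(f"{label}: mean change {mean(changes):+.1f} ms")
```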
Commenting on the individual nature of reactions, Professor Chew said: "Even though two people might have statistically significant changes across the same musical transition, their responses could go in opposite directions. So for one person the musical transition is relaxing, while for another it is arousing or stress inducing."
For example, a person not expecting a transition from soft to loud music could find it stressful, leading to a shortened heart recovery time. For another person, it could be the resolution to a long build-up in the music and hence a release, resulting in a lengthened heart recovery time.
Professor Chew said: "By understanding how an individual's heart reacts to musical changes, we plan to design tailored music interventions to elicit the desired response."
"This could be to reduce blood pressure or lower the risk of heart rhythm disorders without the side effects of medication," added Professor Lambiase.
Read more at Science Daily
May 20, 2020
NASA's Curiosity rover finds clues to chilly ancient Mars buried in rocks
By studying the chemical elements on Mars today -- including carbon and oxygen -- scientists can work backwards to piece together the history of a planet that once had the conditions necessary to support life.
Weaving this story, element by element, from roughly 140 million miles (225 million kilometers) away is a painstaking process. But scientists aren't the type to be easily deterred. Orbiters and rovers at Mars have confirmed that the planet once had liquid water, thanks to clues that include dry riverbeds, ancient shorelines, and salty surface chemistry. Using NASA's Curiosity Rover, scientists have found evidence for long-lived lakes. They've also dug up organic compounds, or life's chemical building blocks. The combination of liquid water and organic compounds compels scientists to keep searching Mars for signs of past -- or present -- life.
Despite the tantalizing evidence found so far, scientists' understanding of Martian history is still unfolding, with several major questions open for debate. For one, was the ancient Martian atmosphere thick enough to keep the planet warm, and thus wet, for the amount of time necessary to sprout and nurture life? And the organic compounds: are they signs of life -- or of chemistry that happens when Martian rocks interact with water and sunlight?
In a recent Nature Astronomy report on a multi-year experiment conducted in the chemistry lab inside Curiosity's belly, called Sample Analysis at Mars (SAM), a team of scientists offers some insights to help answer these questions. The team found that certain minerals in rocks at Gale Crater may have formed in an ice-covered lake. These minerals may have formed during a cold stage sandwiched between warmer periods, or after Mars lost most of its atmosphere and began to turn permanently cold.
Gale is a crater the size of Connecticut and Rhode Island combined. It was selected as Curiosity's 2012 landing site because it had signs of past water, including clay minerals that might help trap and preserve ancient organic molecules. Indeed, while exploring the base of a mountain in the center of the crater, called Mount Sharp, Curiosity found a layer of sediments 1,000 feet (304 meters) thick that was deposited as mud in ancient lakes. To form that much sediment, an incredible amount of water would have flowed down into those lakes for millions to tens of millions of warm and humid years, some scientists say. But some geological features in the crater also hint at a past that included cold, icy conditions.
"At some point, Mars' surface environment must have experienced a transition from being warm and humid to being cold and dry, as it is now, but exactly when and how that occurred is still a mystery," says Heather Franz, a NASA geochemist based at NASA's Goddard Space Flight Center in Greenbelt, Maryland.
Franz, who led the SAM study, notes that factors such as changes in Mars' obliquity and the amount of volcanic activity could have caused the Martian climate to alternate between warm and cold over time. This idea is supported by chemical and mineralogical changes in Martian rocks showing that some layers formed in colder environments and others formed in warmer ones.
In any case, says Franz, the array of data collected by Curiosity so far suggests that the team is seeing evidence for Martian climate change recorded in rocks.
Carbon and oxygen star in the Martian climate story
Franz's team found evidence for a cold ancient environment after the SAM lab extracted the gases carbon dioxide, or CO2, and oxygen from 13 dust and rock samples. Curiosity collected these samples over the course of five Earth years.
CO2 is a molecule of one carbon atom bonded with two oxygen atoms, with carbon serving as a key witness in the case of the mysterious Martian climate. In fact, this simple yet versatile element is as critical as water in the search for life elsewhere. On Earth, carbon flows continuously through the air, water, and surface in a well-understood cycle that hinges on life. For example, plants absorb carbon from the atmosphere in the form of CO2. In return, they produce oxygen, which humans and most other life forms use for respiration in a process that ends with the release of carbon back into the air, again via CO2, or into the Earth's crust as life forms die and are buried.
Scientists are finding there's also a carbon cycle on Mars and they're working to understand it. With little water or abundant surface life on the Red Planet for at least the past 3 billion years, the carbon cycle is much different than Earth's.
"Nevertheless, the carbon cycling is still happening and is still important because it's not only helping reveal information about Mars' ancient climate," says Paul Mahaffy, principal investigator on SAM and director of the Solar System Exploration Division at NASA Goddard. "It's also showing us that Mars is a dynamic planet that's circulating elements that are the buildings blocks of life as we know it."
The gases build a case for a chilly period
After Curiosity fed rock and dust samples into SAM, the lab heated each one to nearly 1,650 degrees Fahrenheit (900 degrees Celsius) to liberate the gases inside. By looking at the oven temperatures that released the CO2 and oxygen, scientists could tell what kind of minerals the gases were coming from. This type of information helps them understand how carbon is cycling on Mars.
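In other words, this evolved-gas approach treats the CO2 release temperature as a fingerprint of the host mineral. The sketch below illustrates that logic only; the temperature bands are rough illustrative placeholders, not SAM's calibrated ranges.

```python
# Sketch of the evolved-gas-analysis logic: the oven temperature at which
# CO2 escapes points to the mineral that held it. The temperature bands
# below are rough illustrative placeholders, not SAM's calibrated ranges.
def likely_co2_source(release_temp_c):
    if release_temp_c < 400:
        return "organics or oxalate-like salts (low-temperature release)"
    elif release_temp_c < 700:
        return "Fe/Mg carbonates (mid-temperature release)"
    return "Ca carbonates (high-temperature release)"

for temp in (250, 550, 850):
    print(f"CO2 released near {temp} C -> {likely_co2_source(temp)}")
```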
Various studies have suggested that Mars' ancient atmosphere, containing mostly CO2, may have been thicker than Earth's is today. Most of it has been lost to space, but some may be stored in rocks at the planet's surface, particularly in the form of carbonates, which are minerals made of carbon and oxygen. On Earth, carbonates are produced when CO2 from the air is absorbed in the oceans and other bodies of water and then mineralized into rocks. Scientists think the same process happened on Mars and that it could help explain what happened to some of the Martian atmosphere.
Yet missions to Mars haven't found enough carbonates in the surface to account for a thick ancient atmosphere.
Nonetheless, the few carbonates that SAM did detect revealed something interesting about the Martian climate through the isotopes of carbon and oxygen stored in them. Isotopes are versions of each element that have different masses. Because different chemical processes, from rock formation to biological activity, use these isotopes in different proportions, the ratios of heavy to light isotopes in a rock provide scientists with clues to how the rock formed.
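Isotope geochemists conventionally express such ratios in delta notation relative to a standard; for oxygen, for example (this convention is standard in the field, though not spelled out in the article):

```latex
\delta^{18}\mathrm{O} =
\left(
  \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000 \ \text{(per mil)}
```

A negative delta value means the sample is enriched in the lighter isotope relative to the standard -- the sense in which the carbonates described below are "lighter" than the atmosphere.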
In some of the carbonates SAM found, scientists noticed that the oxygen isotopes were lighter than those in the Martian atmosphere. This suggests that the carbonates did not form long ago simply from atmospheric CO2 absorbed into a lake. If they had, the oxygen isotopes in the rocks would have been slightly heavier than the ones in the air.
While it's possible that the carbonates formed very early in Mars' history, when the atmospheric composition was a bit different than it is today, Franz and her colleagues suggest that the carbonates more likely formed in a freezing lake. In this scenario, the ice could have sucked up heavy oxygen isotopes and left the lightest ones to form carbonates later. Other Curiosity scientists have also presented evidence suggesting that ice-covered lakes could have existed in Gale Crater.
So where is all the carbon?
The low abundance of carbonates on Mars is puzzling, scientists say. If there aren't many of these minerals at Gale Crater, perhaps the early atmosphere was thinner than predicted. Or maybe something else is storing the missing atmospheric carbon.
Based on their analysis, Franz and her colleagues suggest that some carbon could be sequestered in other minerals, such as oxalates, which store carbon and oxygen in a different structure than carbonates. Their hypothesis is based on the temperatures at which CO2 was released from some samples inside SAM -- too low for carbonates, but just right for oxalates -- and on the different carbon and oxygen isotope ratios than the scientists saw in the carbonates.
Oxalates are the most common type of organic mineral produced by plants on Earth. But oxalates also can be produced without biology. One way is through the interaction of atmospheric CO2 with surface minerals, water, and sunlight, in a process known as abiotic photosynthesis. This type of chemistry is hard to find on Earth because there's abundant life here, but Franz's team hopes to create abiotic photosynthesis in the lab to figure out if it actually could be responsible for the carbon chemistry they're seeing in Gale Crater.
Read more at Science Daily
ALMA discovers massive rotating disk in early universe
In our 13.8 billion-year-old universe, most galaxies like our Milky Way form gradually, reaching their large mass relatively late. But a new discovery made with the Atacama Large Millimeter/submillimeter Array (ALMA) of a massive rotating disk galaxy, seen when the universe was only ten percent of its current age, challenges the traditional models of galaxy formation. This research appears on 20 May 2020 in the journal Nature.
Galaxy DLA0817g, nicknamed the Wolfe Disk after the late astronomer Arthur M. Wolfe, is the most distant rotating disk galaxy ever observed. The unparalleled power of ALMA made it possible to see this galaxy spinning at 170 miles (272 kilometers) per second, similar to our Milky Way.
"While previous studies hinted at the existence of these early rotating gas-rich disk galaxies, thanks to ALMA we now have unambiguous evidence that they occur as early as 1.5 billion years after the Big Bang," said lead author Marcel Neeleman of the Max Planck Institute for Astronomy in Heidelberg, Germany.
How did the Wolfe Disk form?
The discovery of the Wolfe Disk provides a challenge for many galaxy formation simulations, which predict that massive galaxies at this point in the evolution of the cosmos grew through many mergers of smaller galaxies and hot clumps of gas.
"Most galaxies that we find early in the universe look like train wrecks because they underwent consistent and often 'violent' merging," explained Neeleman. "These hot mergers make it difficult to form well-ordered, cold rotating disks like we observe in our present universe."
In most galaxy formation scenarios, galaxies only start to show a well-formed disk around 6 billion years after the Big Bang. The fact that the astronomers found such a disk galaxy when the universe was only ten percent of its current age indicates that other growth processes must have dominated.
"We think the Wolfe Disk has grown primarily through the steady accretion of cold gas," said J. Xavier Prochaska, of the University of California, Santa Cruz and coauthor of the paper. "Still, one of the questions that remains is how to assemble such a large gas mass while maintaining a relatively stable, rotating disk."
Star formation
The team also used the National Science Foundation's Karl G. Jansky Very Large Array (VLA) and the NASA/ESA Hubble Space Telescope to learn more about star formation in the Wolfe Disk. In radio wavelengths, ALMA looked at the galaxy's movements and mass of atomic gas and dust, while the VLA measured the amount of molecular mass -- the fuel for star formation. In UV light, Hubble observed massive stars. "The star formation rate in the Wolfe Disk is at least ten times higher than in our own galaxy," explained Prochaska. "It must be one of the most productive disk galaxies in the early universe."
A 'normal' galaxy
The Wolfe Disk was first discovered by ALMA in 2017. Neeleman and his team found the galaxy when they examined the light from a more distant quasar. The light from the quasar was absorbed as it passed through a massive reservoir of hydrogen gas surrounding the galaxy -- which is how the galaxy revealed itself. Rather than looking for direct light from extremely bright but rarer galaxies, astronomers used this 'absorption' method to find fainter, more 'normal' galaxies in the early universe.
"The fact that we found the Wolfe Disk using this method, tells us that it belongs to the normal population of galaxies present at early times," said Neeleman. "When our newest observations with ALMA surprisingly showed that it is rotating, we realized that early rotating disk galaxies are not as rare as we thought and that there should be a lot more of them out there."
Read more at Science Daily
Why cats have more lives than dogs when it comes to snakebite
Cats are twice as likely as dogs to survive a venomous snakebite, and the reasons behind this strange phenomenon have been revealed by University of Queensland research.
The research team, led by PhD student Christina Zdenek and Associate Professor Bryan Fry, compared the effects of snake venoms on the blood clotting agents in dogs and cats, hoping to help save the lives of our furry friends.
"Snakebite is a common occurrence for pet cats and dogs across the globe and can be fatal," Dr Fry said.
"This is primarily due to a condition called 'venom-induced consumptive coagulopathy' -- where an animal loses its ability to clot blood and sadly bleeds to death.
"In Australia, the eastern brown snake (Pseudonaja textilis) alone is responsible for an estimated 76 per cent of reported domestic pet snakebites each year.
"And while only 31 per cent of dogs survive being bitten by an eastern brown snake without antivenom, cats are twice as likely to survive -- at 66 per cent."
Cats also have a significantly higher survival rate if given antivenom treatment and, until now, the reasons behind this disparity were unknown.
Dr Fry and his team used a coagulation analyser to test the effects of eastern brown snake venom -- as well as 10 additional venoms found around the world -- on dog and cat plasma in the lab.
"All venoms acted faster on dog plasma than cat or human," Mrs Zdenek said.
"This indicates that dogs would likely enter a state where blood clotting fails sooner and are therefore more vulnerable to these snake venoms.
"The spontaneous clotting time of the blood -- even without venom -- was dramatically faster in dogs than in cats.
"This suggests that the naturally faster clotting blood of dogs makes them more vulnerable to these types of snake venoms.
"And this is consistent with clinical records showing more rapid onset of symptoms and lethal effects in dogs than cats."
Several behavioural differences between cats and dogs are also highly likely to increase the chances of dogs dying from a venomous snakebite.
"Dogs typically investigate with their nose and mouth, which are highly vascularised areas, whereas cats often swat with their paws," Dr Fry said.
"And dogs are usually more active than cats, which is not great after a bite has taken place because the best practice is to remain as still as possible to slow the spread of venom through the body."
The researchers hope their insights can lead to a better awareness of the critically short period of time to get treatment for dogs envenomed by snakes.
"As dog lovers ourselves, this study strikes close to home but it also has global implications," Dr Fry said.
"I've had two friends lose big dogs to snakebites, dying in less than ten minutes even though the eastern brown snakes responsible were not particularly large specimens.
Read more at Science Daily
Walking or cycling to work associated with reduced risk of early death and illness
People who walk, cycle and travel by train to work are at reduced risk of early death or illness compared with those who commute by car.
These are the findings of a study of over 300,000 commuters in England and Wales, by researchers from Imperial College London and the University of Cambridge.
The researchers say the findings suggest increased walking and cycling post-lockdown may reduce deaths from heart disease and cancer.
The study, published in The Lancet Planetary Health, used Census data to track the same people for up to 25 years, between 1991 and 2016.
It found that, compared with those who drove, those who cycled to work had a 20 per cent reduced rate of early death, a 24 per cent reduced rate of death from cardiovascular disease (which includes heart attack and stroke) during the study period, a 16 per cent reduced rate of death from cancer, and an 11 per cent reduced rate of a cancer diagnosis.
Walking to work was associated with a 7 per cent reduced rate in cancer diagnosis, compared to driving. The team explain that associations between walking and other outcomes, such as rates of death from cancer and heart disease, were less certain. One potential reason for this is people who walk to work are, on average, in less affluent occupations than people who drive to work, and more likely to have underlying health conditions which could not be fully accounted for.
The paper also revealed that compared with those who drove to work, rail commuters had a 10 per cent reduced rate of early death, a 20 per cent reduced rate of death from cardiovascular disease, and a 12 per cent reduced rate of cancer diagnosis. This is likely due to them walking or cycling to transit points, although rail commuters also tend to be more affluent and less likely to have other underlying conditions, say the team.
Dr Richard Patterson from the MRC Epidemiology Unit at the University of Cambridge who led the research said: "As large numbers of people begin to return to work as the COVID-19 lockdown eases, it is a good time for everyone to rethink their transport choices. With severe and prolonged limits in public transport capacity likely, switching to private car use would be disastrous for our health and the environment. Encouraging more people to walk and cycle will help limit the longer-term consequences of the pandemic."
The study also assessed whether the benefits of each mode of travel differed between occupational groups and found that potential health benefits were similar across these groups.
The team used data from the UK Office for National Statistics Longitudinal Study of England and Wales, a dataset that links data from several sources including the Census of England and Wales, and registrations of death and cancer diagnoses.
The data revealed overall 66 per cent of people drove to work, 19 per cent used public transport, 12 per cent walked, and 3 per cent cycled. Men were more likely than women to drive or cycle to work, but were less likely to use public transport or walk.
Dr Anthony Laverty, senior author from the School of Public Health at Imperial College London explained: "It's great to see that the government is providing additional investment to encourage more walking and cycling during the post-lockdown period. While not everyone is able to walk or cycle to work, the government can support people to ensure that beneficial shifts in travel behaviour are sustained in the longer term. Additional benefits include better air quality which has improved during lockdown and reduced carbon emissions which is crucial to address the climate emergency."
The team add that the benefits of cycling and walking are well-documented, but use of Census data in this new study allowed large numbers of people to be followed up for a longer time. They explain that these analyses were unable to account for differences in participants' dietary intakes, smoking, other physical activity or underlying health conditions. However, they add these findings are compatible with evidence from other studies.
Read more at Science Daily
May 19, 2020
Climate change threatens progress in cancer control
Climate change threatens prospects for further progress in cancer prevention and control, increasing exposure to cancer risk factors and impacting access to cancer care, according to a new commentary by scientists from the American Cancer Society and Harvard T. H. Chan School of Public Health.
The commentary, appearing in CA: A Cancer Journal for Clinicians, says that progress in the fight against cancer has been achieved through the identification and control of cancer risk factors and access to and receipt of care. And both these factors are impacted by climate change.
The authors say climate change creates conditions favorable to greater production of and exposure to known carcinogens. Climate change has been linked to an increase in extreme weather events, like hurricanes and wildfires, which can affect cancer risk and care. Hurricane Harvey, for example, inundated chemical plants, oil refineries, and Superfund sites that contained vast amounts of carcinogens that were released into the Houston community. Wildfires release immense amounts of air pollutants known to cause cancer. Both events can affect patients' exposure to carcinogens and their ability to seek preventive care and treatment; they also threaten the laboratory and clinic infrastructure dedicated to cancer care in the United States.
The authors also propose ways to diminish the impact of climate change on cancer, because climate change mitigation efforts also have health benefits, especially for cancer prevention and outcomes. For example, air pollutants directly harmful to health are emitted by combustion processes that also contribute to greenhouse gas emissions. Some dietary patterns are also detrimental to both health and the environment. The agricultural sector contributes approximately 30% of anthropogenic greenhouse gas emissions worldwide. Meat from ruminants has the highest environmental impact, while plant-based foods cause fewer adverse environmental effects per unit weight, per serving, per unit of energy, or per protein weight. Replacing animal-source foods with plant-based foods, through guidelines provided to patients and changes made in the food services provided at cancer treatment facilities, would confer both environmental and health benefits.
"While some may view these issues as beyond the scope of responsibility of the nation's cancer treatment facilities, one need look no further than their mission statements, all of which speak to eradicating cancer," write the authors. "Climate change and continued reliance on fossil fuels push that noble goal further from reach. However, if all those whose life work is to care for those with cancer made clear to the communities they serve that actions to combat climate change and lessen our use of fossil fuels could prevent cancers and improve cancer outcomes, we might see actions that address climate change flourish, and the attainment of our missions to reduce suffering from cancer grow nearer."
From Science Daily
New study estimates the odds of life and intelligence emerging beyond our planet
We know from the geological record that life started relatively quickly, as soon as our planet's environment was stable enough to support it. We also know that the first multicellular organisms, whose descendants eventually produced today's technological civilization, took far longer to evolve, approximately 4 billion years.
But despite knowing when life first appeared on Earth, scientists still do not understand how life occurred, which has important implications for the likelihood of finding life elsewhere in the universe.
In a new paper published today in the Proceedings of the National Academy of Sciences, David Kipping, an assistant professor in Columbia's Department of Astronomy, shows how an analysis using a statistical technique called Bayesian inference could shed light on how complex extraterrestrial life might evolve on alien worlds.
"The rapid emergence of life and the late evolution of humanity, in the context of the timeline of evolution, are certainly suggestive," Kipping said. "But in this study it's possible to actually quantify what the facts tell us."
To conduct his analysis, Kipping used the chronology of the earliest evidence for life and the evolution of humanity. He asked how often we would expect life and intelligence to re-emerge if Earth's history were to repeat, re-running the clock over and over again.
He framed the problem in terms of four possible answers: life is common and often develops intelligence; life is rare but often develops intelligence; life is common but rarely develops intelligence; and, finally, life is rare and rarely develops intelligence.
This method of Bayesian statistical inference -- used to update the probability for a hypothesis as evidence or information becomes available -- states prior beliefs about the system being modeled, which are then combined with data to cast probabilities of outcomes.
"The technique is akin to betting odds," Kipping said. "It encourages the repeated testing of new evidence against your position, in essence a positive feedback loop of refining your estimates of likelihood of an event."
From these four hypotheses, Kipping used Bayesian mathematical formulas to weigh the models against one another. "In Bayesian inference, prior probability distributions always need to be selected," Kipping said. "But a key result here is that when one compares the rare-life versus common-life scenarios, the common-life scenario is always at least nine times more likely than the rare one."
The analysis is based on evidence that life emerged within 300 million years of the formation of Earth's oceans, as found in carbon-13-depleted zircon deposits, a very fast start in the context of Earth's lifetime. Kipping emphasizes that the ratio is at least 9:1, and could be higher depending on the true value of how often intelligence develops.
Kipping's conclusion is that if planets with similar conditions and evolutionary timelines to Earth are common, then the analysis suggests that life should have little problem spontaneously emerging on other planets. And what are the odds that such extraterrestrial life could be complex, differentiated and intelligent? Here, Kipping's inquiry is less assured, finding just 3:2 odds in favor of intelligent life.
This result stems from humanity's relatively late appearance in Earth's habitable window, suggesting that its development was neither an easy nor ensured process. "If we played Earth's history again, the emergence of intelligence is actually somewhat unlikely," he said.
Kipping points out that the odds in the study aren't overwhelming, being quite close to 50:50, and the findings should be treated as no more than a gentle nudge toward a hypothesis.
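As a rough illustration of this kind of Bayesian model comparison, the minimal Python sketch below treats abiogenesis as a Poisson process and weighs a "life is common" hypothesis against a "life is rare" one, given that life appeared within roughly 0.3 billion years. This is not Kipping's actual likelihood analysis, and the two rate values are illustrative assumptions rather than figures from the paper.

import math

T_OBSERVED = 0.3   # Gyr: life had emerged by this time (early zircon evidence)
RATE_COMMON = 3.0  # assumed abiogenesis events per Gyr if life is common
RATE_RARE = 0.1    # assumed abiogenesis events per Gyr if life is rare

def p_life_by(t, rate):
    # Probability of at least one origin-of-life event by time t,
    # modeling abiogenesis as a Poisson process with the given rate.
    return 1.0 - math.exp(-rate * t)

# Likelihood of the observation under each hypothesis; with equal prior
# odds, the posterior odds equal this Bayes factor.
bayes_factor = p_life_by(T_OBSERVED, RATE_COMMON) / p_life_by(T_OBSERVED, RATE_RARE)
print(f"odds favoring 'life is common': {bayes_factor:.1f} : 1")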
Read more at Science Daily
Long-term data show hurricanes are getting stronger
A warming planet may be fueling the increase.
"Through modeling and our understanding of atmospheric physics, the study agrees with what we would expect to see in a warming climate like ours," says James Kossin, a NOAA scientist based at UW-Madison and lead author of the paper, which is published today (May 18, 2020) in the Proceedings of the National Academy of Sciences.
The research builds on Kossin's previous work, published in 2013, which identified trends in hurricane intensification across a 28-year data set. However, says Kossin, that timespan was less conclusive and required more hurricane case studies to demonstrate statistically significant results.
To increase confidence in the results, the researchers extended the study to include global hurricane data from 1979-2017. Using analytical techniques, including the CIMSS Advanced Dvorak Technique, which relies on infrared temperature measurements from geostationary satellites to estimate hurricane intensity, Kossin and his colleagues were able to create a more uniform data set with which to identify trends.
"The main hurdle we have for finding trends is that the data are collected using the best technology at the time," says Kossin. "Every year the data are a bit different than last year, each new satellite has new tools and captures data in different ways, so in the end we have a patchwork quilt of all the satellite data that have been woven together."
Kossin's previous research has shown other changes in hurricane behavior over the decades, such as where they travel and how fast they move. In 2014, he identified poleward migrations of hurricanes, where tropical cyclones are travelling farther north and south, exposing previously less-affected coastal populations to greater risk.
In 2018, he demonstrated that hurricanes are moving more slowly across land due to changes in Earth's climate. This has resulted in greater flood risks as storms hover over cities and other areas, often for extended periods of time.
"Our results show that these storms have become stronger on global and regional levels, which is consistent with expectations of how hurricanes respond to a warming world," says Kossin. "It's a good step forward and increases our confidence that global warming has made hurricanes stronger, but our results don't tell us precisely how much of the trends are caused by human activities and how much may be just natural variability."
Read more at Science Daily
Scientists find brain center that 'profoundly' shuts down pain
Somewhat unexpectedly, this brain center turns pain off, not on. It's also located in an area where few people would have thought to look for an anti-pain center: the amygdala, which is often considered the home of negative emotions and responses, like the fight-or-flight response and general anxiety.
"People do believe there is a central place to relieve pain, that's why placebos work," said senior author Fan Wang, the Morris N. Broad Distinguished Professor of neurobiology in the School of Medicine. "The question is where in the brain is the center that can turn off pain."
"Most of the previous studies have focused on which regions are turned ON by pain," Wang said. "But there are so many regions processing pain, you'd have to turn them all off to stop pain. Whereas this one center can turn off the pain by itself."
The work is a follow-up to earlier research in Wang's lab looking at neurons that are activated, rather than suppressed, by general anesthetics. In a 2019 study, they found that general anesthesia promotes slow-wave sleep by activating the supraoptic nucleus of the brain. But sleep and pain are separate, an important clue that led to the new finding, which appears online May 18 in Nature Neuroscience.
The researchers found that general anesthesia also activates a specific subset of inhibitory neurons in the central amygdala, which they have called the CeAga neurons (CeA stands for central amygdala; ga indicates activation by general anesthesia). Mice have a relatively larger central amygdala than humans, but Wang said she had no reason to think we have a different system for controlling pain.
Using technologies that Wang's lab has pioneered to track the paths of activated neurons in mice, the team found the CeAga was connected to many different areas of the brain, "which was a surprise," Wang said.
By giving mice a mild pain stimulus, the researchers could map all of the pain-activated brain regions. They discovered that at least 16 brain centers known to process the sensory or emotional aspects of pain were receiving inhibitory input from the CeAga.
"Pain is a complicated brain response," Wang said. "It involves sensory discrimination, emotion, and autonomic (involuntary nervous system) responses. Treating pain by dampening all of these brain processes in many areas is very difficult to achieve. But activating a key node that naturally sends inhibitory signals to these pain-processing regions would be more robust."
Using a technology called optogenetics, which uses light to activate a small population of cells in the brain, the researchers found they could turn off the self-caring behaviors a mouse exhibits when it feels uncomfortable by activating the CeAga neurons. Paw-licking or face-wiping behaviors were "completely abolished" the moment the light was switched on to activate the anti-pain center.
"It's so drastic," Wang said. "They just instantaneously stop licking and rubbing."
When the scientists dampened the activity of these CeAga neurons, the mice responded as if a temporary insult had become intense or painful again. They also found that low-dose ketamine, an anesthetic drug that allows sensation but blocks pain, activated the CeAga center and wouldn't work without it.
Now the researchers are going to look for drugs that can activate only these cells to suppress pain as potential future pain killers, Wang said.
Read more at Science Daily
May 18, 2020
New model to accurately date historic earthquakes
Three earthquakes in the Monterey Bay Area, occurring in 1838, 1890 and 1906, happened without a doubt on the San Andreas Fault, according to a new paper by a Portland State University researcher.
The paper, "New Insights into Paleoseismic Age Models on the Northern San Andreas Fault: Charcoal In-built ages and Updated Earthquake Correlations," was recently published in the Bulletin of the Seismological Society of America.
Ashley Streig, assistant professor of geology at PSU, said the new research confirms what her team first discovered in 2014: three earthquakes occurred within a 68-year period in the Bay Area on the San Andreas Fault.
"This is the first time there's been geologic evidence of a surface rupture from the historic 1838 and 1890 earthquakes that we knew about from newspapers and other historical documents," Streig said. "It basically meant that the 1800s were a century of doom."
Building on the 2014 study, Streig said her team was able to excavate a redwood slab, from a tree felled by early European loggers, one meter below the surface in the Bay Area. The tree was toppled before the three earthquakes in question occurred. That slab was used to determine the precise date logging first occurred in the area and to pinpoint the historic dates of the earthquakes. Further, the researchers were able to use the slab to develop a new model for determining recurrence intervals and more exact dating.
Streig applied a dating technique called wiggle matching to several measured carbon-14 samples from the tree slab, comparing them with fluctuations in atmospheric carbon-14 concentrations over time to pinpoint the death of the tree and confirm the timing of the earthquakes. Because the researchers had an exact age from the slab, they were able to test how well the most commonly used material, charcoal, works in earthquake age models.
Charcoal is commonly used to date and constrain the ages of prehistoric earthquakes and to develop earthquake recurrence intervals, but Streig said charcoal can be hundreds of years older than the stratigraphic layer containing it, yielding an offset between what has been dated and the actual age of the earthquake. The new technique corrects for inbuilt charcoal age, the time elapsed between the wood's formation and the fire that generated the charcoal, and can better estimate the age of the event being studied.
"We were able to evaluate the inbuilt age of the charcoal incorporated in the deposits and find that charcoal ages are approximately 322 years older than the actual age of the deposit -- so previous earthquake age models in this area using detrital charcoal would be offset roughly by this amount," she said.
New earthquake age modeling that corrects for this inbuilt charcoal age, together with the age results from the tree stump, is what gives Streig absolute certainty that the 1838 and 1890 earthquakes in question occurred on the San Andreas Fault in those years.
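As a bare-bones illustration of the correction described above, the Python snippet below simply shifts hypothetical detrital-charcoal radiocarbon ages by the roughly 322-year inbuilt age reported by the team. This is only a sketch; the study's actual age model would also propagate the uncertainties in both terms.

INBUILT_AGE_YEARS = 322  # approximate mean inbuilt age reported in the study

def corrected_deposit_age(charcoal_age_bp, inbuilt=INBUILT_AGE_YEARS):
    # Approximate age of the deposit: the charcoal's radiocarbon age
    # minus the time between wood formation and the fire that made it.
    return charcoal_age_bp - inbuilt

# Hypothetical charcoal ages in radiocarbon years before present
for sample_age in (450, 610, 980):
    print(f"charcoal {sample_age} yr BP -> deposit ~{corrected_deposit_age(sample_age)} yr BP")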
Read more at Science Daily
The paper, "New Insights into Paleoseismic Age Models on the Northern San Andreas Fault: Charcoal In-built ages and Updated Earthquake Correlations," was recently published in the Bulletin of the Seismological Society of America.
Assistant Professor of Geology at PSU Ashley Streig said the new research confirms what her team first discovered in 2014: three earthquakes occurred within a 68-year period in the Bay Area on the San Andreas Fault.
"This is the first time there's been geologic evidence of a surface rupture from the historic 1838 and 1890 earthquakes that we knew about from newspapers and other historical documents," Streig said. "It basically meant that the 1800s were a century of doom."
Building on the 2014 study, Streig said they were able to excavate a redwood slab from a tree felled by early Europeans, from one meter below the surface in the Bay Area. The tree was toppled before the three earthquakes in question occurred. That slab was used to determine the precise date logging first occurred in the area, and pinpointed the historic dates of the earthquakes. Further, they were able use the slab to develop a new model for determining recurrence intervals and more exact dating.
Streig used the dating technique wiggle matching for several measured carbon 14 samples from the tree slab and compared them with fluctuations in atmospheric carbon 14 concentrations over time to fingerprint the exact death of the tree and confirm the timing of the earthquakes. Because the researchers had an exact age from the slab, they were able to test how well the most commonly used material, charcoal, works in earthquake age models.
Charcoal is commonly used for dating and to constrain the ages of prehistoric earthquakes and develop an earthquake recurrence interval, but Streig said the charcoal can be hundreds of years older than the stratigraphic layer containing it, yielding an offset between what has been dated and the actual age of the earthquake. The new technique accounts for inbuilt charcoal ages -- which account for the difference in time between the wood's formation and the fire that generated said charcoal -- and can better estimate the age of the event being studied.
"We were able to evaluate the inbuilt age of the charcoal incorporated in the deposits and find that charcoal ages are approximately 322 years older than the actual age of the deposit -- so previous earthquake age models in this area using detrital charcoal would be offset roughly by this amount," she said.
New earthquake age modeling using a method to correct for this charcoal inbuilt age, and age results from the tree stump are what give Streig absolute certainly that the 1838 and 1890 earthquakes in question occurred on the San Andreas Fault and during those years.
Read more at Science Daily
Eavesdropping crickets drop from the sky to evade capture by bats
Researchers have uncovered the highly efficient strategy used by a group of crickets to distinguish the calls of predatory bats from the incessant noises of the nocturnal jungle. The findings, led by scientists at the Universities of Bristol and Graz in Austria and published in Philosophical Transactions of the Royal Society B, reveal the crickets eavesdrop on the vocalisations of bats to help them escape their grasp when hunted.
Sword-tailed crickets of Barro Colorado Island, Panama, are quite unlike many of their nocturnal, flying-insect neighbours. Instead of employing a variety of responses to bat calls of varying amplitudes, these crickets simply stop in mid-air, effectively dive-bombing out of harm's way. The higher the bat call amplitude, the longer they cease flight and the further they fall. Biologists from Bristol's School of Biological Sciences and Graz's Institute of Zoology discovered why these crickets evolved significantly higher response thresholds than other eared insects.
Within the plethora of jungle sounds, it is important to distinguish possible threats. This is complicated by the cacophony of katydid (bush-cricket) calls, which are acoustically similar to bat calls and form 98 per cent of high-frequency background noise in a nocturnal rainforest. Consequently, sword-tailed crickets need to employ a reliable method to distinguish between calls of predatory bats and harmless katydids.
Responding only to ultrasonic calls above a high-amplitude threshold is their solution to this evolutionary challenge. Firstly, it allows the crickets to completely avoid accidentally responding to katydids. Secondly, they do not respond to all bat calls but only to sufficiently loud ones, which indicate the bat is within seven metres of the insect. This is the distance at which a bat can detect the crickets' echo, so the crickets only take evasive action against bats that have already detected them.
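As a toy model of this threshold rule, the Python sketch below assumes simple spherical spreading (about 6 dB of loss per doubling of distance, ignoring atmospheric absorption) and sets the response threshold to the level arriving from a bat seven metres away. The source level and threshold here are assumed values for illustration, not measurements from the study.

import math

BAT_SOURCE_LEVEL_DB = 120.0  # assumed call level at 1 m from the bat (illustrative)

def received_level_db(distance_m, source_level_db=BAT_SOURCE_LEVEL_DB):
    # Received sound pressure level after spherical spreading loss.
    return source_level_db - 20.0 * math.log10(distance_m / 1.0)

# Threshold = level arriving from a bat at 7 m, the distance at which
# the bat itself can first detect the cricket's echo.
THRESHOLD_DB = received_level_db(7.0)

def cricket_dives(distance_m):
    return received_level_db(distance_m) >= THRESHOLD_DB

for d in (3, 7, 15):
    print(f"bat at {d:>2} m -> dive: {cricket_dives(d)}")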
This type of approach is rare in nature: most other eavesdropping insects live in less noisy environments and can rely on differences in call patterns to distinguish bat predators.
Dr Marc Holderied, senior author on the study from Bristol's School of Biological Sciences, explained: "The beauty of this simple avoidance rule is how the crickets respond at call amplitudes that exactly match the distance over which bats would detect them anyway -- in their noisy world it pays to only respond when it really counts."
Read more at Science Daily
Lack of insects in cities limits breeding success of urban birds
Urban insect populations would need to increase by a factor of at least 2.5 for urban great tits to have the same breeding success as those living in forests, according to research published in the British Ecological Society's Journal of Animal Ecology.
Researchers at the University of Pannonia, Hungary and the University of Sheffield, UK found that providing high quality supplementary food to urban great tits, in the form of nutritionally enriched mealworms, can dramatically boost their breeding success.
"Urban nestlings had considerably higher survival chance and gained an extra two grams of body mass when provided with an insect-rich diet, an increase of 15% compared to the weight of chicks that didn't receive extra food. This is a substantial difference." said Dr Gábor Seress, lead author of the research. "This greater body mass when leaving the nest may increase the chicks' chance of surviving to spring and breeding themselves."
These beneficial effects of food supplementation were not seen in forest-dwelling great tits, where high-quality nestling food is abundant, although the free meals were also readily received by forest parents.
Reduced breeding success in urban bird populations is well documented but this study is the first to show that insect-rich supplementary food during nestling development largely mitigates these habitat differences. The findings indicate that food limitation in urban environments plays a crucial role in reducing the breeding success of insect-eating birds.
Dr Seress said: "Given the popularity of year-round bird feeding and the abundance of anthropogenic food sources in cities it might seem unlikely that urban birds have limited food. But quantity is not quality. Most songbirds require an insect-rich diet to successfully raise many and vigorous young, and urban areas generally support fewer insects than more natural habitats, especially caterpillars, which are key components of the optimal nestling diet for many species."
The authors say that artificially providing insect-rich food for birds in cities may not be the best solution. "Instead of directly supplying high-quality bird food to enhance urban birds' breeding success, we believe that management activities that aim to increase the abundance of insects in the birds' environment would be more effective. Insects are the cornerstone of healthy and complex ecosystems and it is clear that we need more in our cities," said Dr Seress.
Increasing insect populations in cities is no easy task. The authors highlight that urban green spaces are often highly managed, which can reduce insect abundance. Modifying how green spaces are managed and encouraging practices like planting trees is likely to benefit both insect-eating birds and people.
In the experiment, the researchers studied great tits in nest boxes at urban and forest sites in Hungary in 2017. The urban sites were in the city of Veszprém, with nest boxes placed in public green spaces such as parks and cemeteries. The forest site was three kilometres outside Veszprém in deciduous woodland. At both sites there were broods that did not receive supplementary food, to act as controls.
For the broods receiving supplementary food, the researchers provided nutritionally enhanced mealworms daily throughout the brood-rearing period, adjusting the amount to brood size so that it met 40-50% of the nestlings' food requirements. When nestlings were 15 days old (a few days from leaving the nest), the researchers recorded the size, weight and survival rate of the chicks.
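As a simple illustration of that provisioning rule, the Python sketch below scales a daily mealworm ration to brood size so that it covers roughly 40-50% of the brood's needs. The per-nestling daily requirement used here is a hypothetical placeholder, not a figure from the study.

DAILY_NEED_PER_NESTLING_G = 10.0  # hypothetical grams of food per chick per day

def daily_ration_g(brood_size, target_fraction=0.45):
    # Grams of supplementary mealworms per day for one brood, sized to
    # cover the target fraction (40-50%) of its estimated requirement.
    return brood_size * DAILY_NEED_PER_NESTLING_G * target_fraction

for brood in (4, 8, 12):
    print(f"brood of {brood:>2}: {daily_ration_g(brood):.1f} g of mealworms/day")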
To estimate the amount of supplementary food consumed by the chicks and their parents, the researchers mounted small, hidden cameras on the nest boxes.
While the findings demonstrate that providing high-quality additional food can boost breeding success, it is unclear to what extent this could increase population size and stability; further work is needed to explore this.
Read more at Science Daily
Researchers at the University of Pannonia, Hungary and the University of Sheffield, UK found that providing high quality supplementary food to urban great tits, in the form of nutritionally enriched mealworms, can dramatically boost their breeding success.
"Urban nestlings had considerably higher survival chance and gained an extra two grams of body mass when provided with an insect-rich diet, an increase of 15% compared to the weight of chicks that didn't receive extra food. This is a substantial difference." said Dr Gábor Seress, lead author of the research. "This greater body mass when leaving the nest may increase the chicks' chance of surviving to spring and breeding themselves."
Double helix of masonry: Researchers discover the secret of Italian Renaissance domes
In a collaborative study in this month's issue of Engineering Structures, researchers at Princeton University and the University of Bergamo revealed the engineering techniques behind the self-supporting masonry domes characteristic of the Italian Renaissance. The researchers analyzed how cupolas like the famous duomo of the Cathedral of Santa Maria del Fiore in Florence were built to be self-supporting, without the shoring or formwork typically required.
Sigrid Adriaenssens, professor of civil and environmental engineering at Princeton, collaborated on the analysis with graduate student Vittorio Paris and Attilio Pizzigoni, professor of engineering and applied sciences, both of the University of Bergamo. Their study is the first to quantitatively demonstrate the physics at work in Italian Renaissance domes and to explain the forces that allowed such structures to be built without the formwork typically required even in modern construction. Previously, there were only hypotheses about how forces flowed through such edifices, and it was unknown how they were built without temporary structures to hold them up during construction.
For Adriaenssens, the project addresses two significant questions. "How can mankind construct such a large and beautiful structure without any formwork -- mechanically, what's the innovation?" she asked. And second: "What can we learn? Is there some forgotten technology that we can use today?"
The detailed computer analysis accounts for the forces at work down to the individual brick, explaining how each stage of the structure reaches equilibrium. A technique called discrete element modelling (DEM) was used to analyze the structure at several layers and stages of construction, while a limit state analysis determined the overall equilibrium state, or stability, of the completed structure. Not only do these tests verify the mechanics of the structures, they also make it possible to recreate the techniques for modern construction.
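The construction-stage problem can be illustrated with a classical hand calculation, quite apart from the authors' brick-by-brick DEM model. Under membrane theory, a hemispherical shell of radius a under self-weight q carries a meridional force N_phi = -aq/(1 + cos(phi)) and a hoop force N_theta = aq(1/(1 + cos(phi)) - cos(phi)); the hoop force switches from compression to tension about 51.8 degrees down from the crown, which is why the lower courses of an unfinished dome tend to spread. A short Python sketch with hypothetical dimensions (not the Florence duomo's actual figures):

import math

a = 22.0   # shell radius in metres (hypothetical)
q = 20.0   # self-weight per unit surface area in kN/m^2 (hypothetical)

for deg in range(0, 91, 10):
    phi = math.radians(deg)                   # angle measured from the crown
    n_phi = -a * q / (1.0 + math.cos(phi))    # meridional force: always compressive
    n_theta = a * q * (1.0 / (1.0 + math.cos(phi)) - math.cos(phi))  # hoop force
    state = "tension" if n_theta > 0 else "compression"
    print(f"{deg:3d} deg: meridional {n_phi:8.1f} kN/m, hoop {n_theta:8.1f} kN/m ({state})")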
Applying their findings to modern construction, the researchers anticipate that the study could inform construction techniques that deploy aerial drones and robots. Using such unmanned machines for construction would improve worker safety, speed up construction and reduce building costs.
Another advantage of unearthing new building techniques from ancient sources is that it can yield environmental benefits. "The construction industry is one of the most wasteful ones, so that means if we don't change anything, there will be a lot more construction waste," said Adriaenssens, who is interested in using drone techniques for building very large span roofs that are self-supporting and require no shoring or formwork.
Read more at Science Daily
May 17, 2020
Novel treatment using patient's own cells opens new possibilities to treat Parkinson's disease
Reprogramming a patient's own skin cells to replace cells in the brain that are progressively lost during Parkinson's disease (PD) has been shown to be technically feasible, reports a team of investigators from McLean Hospital and Massachusetts General Hospital (MGH) in the most recent issue of the New England Journal of Medicine.
PD is the second most common degenerative disease of the brain, and millions of people worldwide experience its symptoms, which include tremor, stiffness, and difficulty with speech and walking. The progressive loss of brain cells called dopaminergic neurons plays a major role in the disease's development. As described in the current report, the use of a patient's own reprogrammed cells is an advance that overcomes barriers associated with the use of cells from another individual.
"Because the cells come from the patient, they are readily available and can be reprogrammed in such a way that they are not rejected on implantation. This represents a milestone in 'personalized medicine' for Parkinson's," says senior author Kwang-Soo Kim, PhD, director of the Molecular Neurobiology Laboratory at McLean Hospital, the largest clinical neuroscience and psychiatric affiliate of Harvard Medical School.
The McLean-MGH team reprogrammed a 69-year-old patient's skin cells to embryo-like pluripotent stem cells (called induced pluripotent stem cells) and then differentiated them to take on the characteristics of dopaminergic neurons, which are lost in Parkinson's. After extensive testing of the cells, Kim submitted a single-patient Investigational New Drug (IND) application to the FDA and also received the approval of the hospital's human subjects ethical review board to implant the cells into the patient's brain.
Bob Carter, MD, PhD, chief of Neurosurgery at MGH and co-senior author, says: "This strategy highlights the emerging power of using one's own cells to try and reverse a condition -- Parkinson's disease -- that has been very challenging to treat. I am very pleased by the extensive collaboration across multiple institutions, scientists, physicians, and surgeons that came together to make this a possibility."
In two separate surgeries in 2017 and 2018 at Weill Cornell Medical Center and MGH, the patient underwent transplantation of the replacement dopamine neurons. Lead author Jeffrey Schweitzer, MD, PhD, a neurosurgeon specializing in Parkinson's and director of the Neurosurgical Neurodegenerative Cell Therapy program at MGH, designed a novel minimally invasive neurosurgical procedure to deliver the cells, working in collaboration with Carter at MGH and Michael G. Kaplitt, MD, PhD, a neurosurgeon at Weill Cornell.
Two years later, imaging tests indicate that the transplanted cells are alive and functioning correctly as dopaminergic neurons in the brain. Because the implanted cells originated from the patient, they did not trigger an immune response and were not rejected, even without the use of immunosuppressant drugs. Kim noted, "We have shown for the first time in this study that these reprogrammed cells are still recognized as self by the patient's immune system and won't be rejected." These results indicate that this personalized cell-replacement strategy was a technical success, with the cells surviving and functioning in the intended manner. The patient has not developed any side effects, and there are no signs that the cells have caused any unwanted growth or tumors.
In the time since surgery, the patient has enjoyed improvements in his day-to-day activities and reports a better quality of life. Routine activities, such as tying his shoes, walking with an improved stride, and speaking with a clearer voice, have become possible again. Some activities -- such as swimming, skiing, and biking, which he had given up years ago -- are now back on his agenda. While it is too early to know whether the approach is viable on the basis of a single patient, the authors aim to continue testing the treatment in formal clinical trials.
"Current drugs and surgical treatments for Parkinson's disease are intended to address symptoms that result from the loss of dopaminergic neurons, but our strategy attempts to go further by directly replacing those neurons," says Kim.
"As a neurologist, my goal is to make state-of-the-art treatments available to patients with Parkinson's," says Todd Herrington, MD, PhD, lead study neurologist at the MGH and Parkinson's expert. "This is a first step in developing this therapy. Parkinson's patients should understand that this therapy is not cur-rently available and there is a lot of work still required to prove this is an effective treatment."
While there is optimism about the future of Parkinson's disease treatments because of this work, Schweitzer cautions against declaring victory against the disease. "These results reflect the experience of one individual patient, and a formal clinical trial will be required to determine if the therapy is effective," says Schweitzer.
Read more at Science Daily
Binge drinkers beware, Drunkorexia is calling
Mojito, appletini or a simple glass of fizz -- they may take the edge off a busy day, but if you find yourself bingeing on more than a few, you could be putting your physical and mental health at risk, according to new research at the University of South Australia.
Examining the drinking patterns of 479 female Australian university students aged 18-24 years, the world-first empirical study explored the underlying belief patterns that can contribute to Drunkorexia -- a damaging and dangerous behaviour in which disordered patterns of eating are used to offset the negative effects of consuming excess alcohol, such as gaining weight.
Concerningly, the researchers found that a staggering 82.7 per cent of the female university students surveyed had engaged in Drunkorexic behaviours over the past three months. More than 28 per cent were regularly and purposely skipping meals, consuming low-calorie or sugar-free alcoholic beverages, or purging or exercising after drinking to reduce the calories ingested from alcohol -- doing so at least 25 per cent of the time.
Clinical psychologist and lead UniSA researcher Alycia Powell-Jones says the prevalence of Drunkorexic behaviours among Australian female university students is concerning.
"Due to their age and stage of development, young adults are more likely to engage in risk-taking behaviours, which can include drinking excess alcohol," Powell-Jones says.
"Excess alcohol consumption combined with restrictive and disordered eating patterns is extremely dangerous and can dramatically increase the risk of developing serious physical and psychological consequences, including hypoglycaemia, liver cirrhosis, nutritional deficits, brain and heart damage, memory lapses, blackouts, depression and cognitive deficits.
"Certainly, many of us have drunk too much alcohol at some point in time, and we know just by how we feel the next day, that this is not good for us, but when nearly a third of young female uni students are intentionally cutting back on food purely to offset alcohol calories; it's a serious health concern."
The harmful use of alcohol is a global issue, with excess consumption causing millions of deaths, including many thousands of young lives.
In Australia, one in six people consume alcohol at dangerous levels, placing them at lifetime risk of an alcohol-related disease or injury. The combination of excessive alcohol intake with restrictive eating behaviours to offset calories can result in a highly toxic cocktail for this population.
The study was undertaken in two stages. The first measured the prevalence of self-reported compensatory and restrictive behaviours in relation to participants' alcohol consumption.
The second stage identified participants' Early Maladaptive Schemas (EMS) -- or thought patterns -- finding that the subset of schemas most predictive of Drunkorexia were 'insufficient self-control', 'emotional deprivation' and 'social isolation'.
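To make the second stage concrete: analyses of this kind typically regress the behaviour on schema scores and report odds ratios. A hedged, synthetic sketch in Python -- the data below are randomly generated for illustration, and only the three schema names and the sample size of 479 come from the study:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 479  # sample size matching the study
schemas = ["insufficient_self_control", "emotional_deprivation", "social_isolation"]

X = rng.normal(size=(n, 3))  # standardised schema scores (synthetic)
# Synthetic outcome: higher schema scores raise the odds of drunkorexic behaviour.
logits = 1.5 + X @ np.array([0.8, 0.5, 0.4])
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
for name, coef in zip(schemas, model.coef_[0]):
    print(f"{name:26s} odds ratio ~ {np.exp(coef):.2f}")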
Powell-Jones says identifying the early maladaptive schemas linked to Drunkorexia is key to understanding the harmful condition.
These are deeply held and pervasive themes regarding oneself and one's relationships with others that can develop in childhood and then influence all areas of life, often in dysfunctional ways. Early maladaptive schemas can also be influenced by cultural and social norms.
Drunkorexic behaviour appears to be motivated by two key social norms for young adults -- consuming alcohol and thinness.
"This study has provided preliminary insight into better understanding why young female adults make these decisions to engage in Drunkorexic behaviours," Powell-Jones says.
"Not only may it be a coping strategy to manage social anxieties through becoming accepted and fitting in with peer group or cultural expectations, but it also shows a reliance on avoidant coping strategies.
"It is important that clinicians, educators, parents and friends are aware of the factors that motivate young women to engage in this harmful and dangerous behaviour, including cultural norms, beliefs that drive self-worth, a sense of belonging, and interpersonal connectedness.
Read more at Science Daily