What rules shaped humanity's original social networks? Researchers in Japan developed new mathematical models to understand what conditions produced traditional community structures and conventions around the world, including taboos about incest.
"We think this is the first time cultural anthropology and computer simulations have met in a single research study," said Professor Kunihiko Kaneko, an expert in theoretical biology and physics from the University of Tokyo Research Center for Complex Systems Biology.
Researchers used statistical physics and computer models common in evolutionary biology to explain the origin of common community structures documented by cultural anthropologists around the world.
The earliest social networks were tightly knit cultural groups made of multiple biologically related families. That single group would then develop relationships with other cultural groups in their local area.
In the 1960s, cultural anthropologists documented social networks of indigenous communities and identified two kinship structures common around the world. In areas with hunter-gatherer communities, anthropologists documented direct-exchange kinship structures where women from two communities change places when they marry. In areas with agrarian farming communities, kinship structures of generalized exchange developed where women move between multiple communities to marry.
"Anthropologists have documented kinship structures all over the world, but it still remains unclear how those structures emerged and why they have common properties," said Kenji Itao, a first year master's degree student in Kaneko's laboratory, whose interdisciplinary interests in physics, math and anthropology motivated this research study.
Experts in anthropology consider the incest taboo to be an extremely common social rule affecting kinship structures. The ancient incest taboo focused on social closeness, rather than genetic or blood relationships, meaning it was taboo to marry anyone born into the same cultural group.
Itao and Kaneko designed a mathematical model and computer simulation to test what external factors might cause generations of biologically related families to organize into communities with incest taboos and direct or generalized exchange of brides.
"Traditionally, it is more common for women to move to a new community when they marry, but we did not include any gender differences in this computer simulation," explained Itao.
Simulated family groups with shared traits and desires naturally grouped together into distinct cultural groups. However, the traits the group possessed were different from the traits they desired in marriage partners, meaning they did not desire spouses similar to themselves. This is the underlying cause of the traditional community-based incest taboo suggested by the study.
When the computer simulation pushed communities to cooperate, generalized-exchange kinship structures arose. The simulation demonstrated that different kinship structures, including the basic structure of direct exchange, emerge depending on the intensity of conflict over finding brides and on how strongly communities need to cooperate with specific other communities.
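The published model is more detailed than this, but the ingredients described above -- families carrying cultural traits and mate preferences, marriages favoring families whose traits match another family's preferences, and children blending their parents' vectors -- can be sketched in a few lines. The sketch below is purely illustrative: the vector dimensions, mutation rate, matching rule and crude two-way grouping at the end are assumptions, not the authors' implementation.

```python
# Toy agent-based sketch of the kinship-structure idea (illustrative only):
# each family carries a cultural "trait" vector and a "preference" vector for
# in-laws; marriages favor families whose traits match one's preferences,
# and child families blend their parents' vectors with a little mutation.
import numpy as np

rng = np.random.default_rng(0)
N_FAMILIES, DIM, GENERATIONS, MUTATION = 60, 4, 200, 0.05  # assumed parameters

traits = rng.normal(size=(N_FAMILIES, DIM))
prefs = rng.normal(size=(N_FAMILIES, DIM))

def match_score(pref, trait):
    """Higher when a family's preferences point toward another family's traits."""
    return float(pref @ trait)

for _ in range(GENERATIONS):
    order = rng.permutation(N_FAMILIES)
    new_traits, new_prefs = np.empty_like(traits), np.empty_like(prefs)
    for i, fam in enumerate(order):
        # choose a partner family whose traits best fit this family's preferences
        scores = np.array([match_score(prefs[fam], traits[j]) for j in range(N_FAMILIES)])
        scores[fam] = -np.inf                      # crude "taboo": never marry your own family
        partner = int(np.argmax(scores))
        # the child family inherits blended vectors plus noise ("cultural mutation")
        new_traits[i] = (traits[fam] + traits[partner]) / 2 + MUTATION * rng.normal(size=DIM)
        new_prefs[i] = (prefs[fam] + prefs[partner]) / 2 + MUTATION * rng.normal(size=DIM)
    traits, prefs = new_traits, new_prefs

# Families with similar traits form "cultural groups"; comparing a group's average
# trait with its average preference shows whether the group prefers spouses unlike
# itself -- the simulated analogue of the community-based incest taboo.
group = (traits[:, 0] > traits[:, 0].mean()).astype(int)  # crude 2-way grouping
for g in (0, 1):
    print(g, traits[group == g].mean(axis=0).round(2), prefs[group == g].mean(axis=0).round(2))
```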
"It is rewarding to see that the combination of statistical physics and evolution theory, together with computer simulations, will be relevant to identify universal properties that affect human societies," said Kaneko.
The current computer model is simple and only included factors of conflict and cooperation affecting marriage, but researchers hope to continue developing the model to also consider economic factors that might cause communities to separate into different classes. With these additions, the theory can hopefully be extended to explore different communities in the modern, global society.
Read more at Science Daily
Jan 25, 2020
Traces of the European enlightenment found in the DNA of western sign languages
Sign languages throughout North and South America and Europe have centuries-long roots in five European locations, a finding that gives new insight into the influence of the European Enlightenment on many of the world's signing communities and the evolution of their languages.
Linguists and biologists from The University of Texas at Austin and the Max Planck Institute for the Science of Human History adapted techniques from genetic research to study the origins of Western sign languages, identifying five European lineages -- Austrian, British, French, Spanish and Swedish -- that began spreading to other parts of the world in the late 18th century. The study, published in Royal Society Open Science, highlights the establishment and growth of European deaf communities during an age of widespread enlightenment and their impact on sign languages used today.
"While the evolution of spoken languages has been studied for more than 200 years, research on sign language evolution is still in its infancy," said Justin Power, a doctoral candidate in linguistics at UT Austin and the study's first author, noting that most research has been based on historical records from deaf educators and institutions. "But there is no a priori reason that one should only study spoken languages if one wants to better understand human language evolution in general."
Much like a geneticist would look to DNA to study traits passed down through generations, the study's researchers investigated data from dozens of Western sign languages, in particular, the manual alphabets, or sets of handshapes signers use to spell written words. As certain handshape forms were passed on within a lineage, they became characteristic of the lineage itself and could be used as markers to identify it.
To identify such markers and decipher the origins of each language, the researchers built the largest cross-linguistic comparative database to map the complex evolutionary relationships among 40 contemporary and 36 historical manual alphabets.
"In both biological and linguistic evolution, traits are passed on from generation to generation. But the types of traits that are passed on and the ways in which they are passed on differ. So, we might expect many differences in the ways that humans and their languages evolve," Power said. "This database allowed us to analyze a variety of reasons for commonalities between languages, such as inheriting from a common ancestral language, borrowing from an unrelated language, or developing similar forms independently."
The researchers grouped the sign languages into five main evolutionary lineages, which developed independently of one another between the mid-18th and early 19th centuries. They were also interested to find that three of the main continental lineages -- Austrian, French and Spanish -- all appeared to have been influenced by early Spanish manual alphabets, which represented the 22 letters of the Latin alphabet.
"It's likely that the early Spanish manual alphabets were used in limited ways by clergy or itinerant teachers of the deaf, but later signing communities added new handshapes to represent letters in the alphabets of their written languages," Power said. "When large-scale schools for the deaf were established, the manual alphabets came into use in signing communities by relatively large groups of people. It is at this point where we put the beginnings of most of these five lineages."
Data from the languages themselves confirmed many of the sign language dispersal events known from historical records, such as the influence of French Sign Language on deaf education and signing communities in many countries, including in Western Europe and the Americas. However, the researchers were surprised to trace the dispersal of Austrian Sign Language to central and northern Europe, as well as to Russia -- a lineage about which little was previously known.
Read more at Science Daily
Jan 24, 2020
Will the future's super batteries be made of seawater?
We all know the rechargeable, efficient lithium-ion (Li-ion) batteries sitting in our smartphones, laptops and electric cars.
Unfortunately, lithium is a limited resource, so it will be a challenge to satisfy the world's growing demand for relatively cheap batteries. Therefore, researchers are now looking for alternatives to the Li-ion battery.
A promising alternative is to replace lithium with the metal sodium -- to make Na-ion batteries. Sodium is found in large quantities in seawater and can be easily extracted from it.
"The Na-ion battery is still under development, and researchers are working on increasing its service life, lowering its charging time and making batteries that can deliver many watts," says research leader Dorthe Bomholdt Ravnsbæk of the Department of Physics, Chemistry and Pharmacy at University of Southern Denmark.
She and her team are focused on developing new and better rechargeable batteries that can replace today's widely used Li-ion batteries.
For Na-ion batteries to become an alternative, better electrode materials must be developed -- something she and colleagues from the Technical University of Denmark and the Massachusetts Institute of Technology, USA, have looked at in a new study published in the journal ACS Applied Energy Materials.
But before looking at the details of this study, let's take a look at why the Na-ion battery has the potential to become the next big battery success.
"An obvious advantage is that sodium is a very readily available resource, which is found in very large quantities in seawater. Lithium, on the other hand, is a limited resource that is mined only in a few places in the world," explains Dorthe Bomholdt Ravnsbæk.
Another advantage is that Na-ion batteries do not need cobalt, which is still needed in Li-ion batteries. The majority of the cobalt used today to make Li-ion batteries is mined in the Democratic Republic of Congo, where rebellion, disorganized mining and child labor create uncertainty and moral qualms regarding the country's cobalt trade.
Another point in favor of Na-ion batteries is that they can be produced in the same factories that make Li-ion batteries today.
In their new study, Dorthe Bomholdt Ravnsbæk and her colleagues have investigated a new electrode material based on iron, manganese and phosphorus.
What is new about the material is the addition of the element manganese, which not only gives the battery a higher voltage but also increases its capacity and makes it likely to deliver more watts. This is because the transformations that occur at the atomic level during discharge and charge are significantly changed by the presence of manganese.
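The link between voltage, capacity and deliverable power is ordinary battery arithmetic: for a given capacity, a higher cell voltage stores more energy, and at a given current it delivers more watts. The numbers below are illustrative, not measurements from the study.

```python
# Back-of-the-envelope relation between cell voltage, capacity and stored energy
# (illustrative numbers only, not values from the study):
#   energy (Wh) = capacity (Ah) x average voltage (V)
#   power  (W)  = current (A)   x voltage (V)
capacity_ah = 2.0            # assumed cell capacity
for voltage in (3.0, 3.4):   # say, a lower-voltage vs. a manganese-substituted cathode
    energy_wh = capacity_ah * voltage
    print(f"{voltage:.1f} V cell -> {energy_wh:.1f} Wh per {capacity_ah:.1f} Ah")
# Raising the voltage therefore raises both the energy stored and the power the
# cell can deliver at a given current.
```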
"Similar effects have been seen in Li-ion batteries, but it is very surprising that the effect is retained in a Na-ion battery, since the interaction between the electrode and Na-ions is very different from that of Li-ions," says Dorthe Bomholdt Ravnsbæk.
She will not try to predict when we can expect to find seawater-based Na-ion batteries in our phones and electric cars, because there are still some challenges to be solved.
One challenge is that it can be difficult to make small Na-ion batteries. But large batteries also have value -- for example, when it comes to storing wind or solar energy.
Read more at Science Daily
The color of your clothing can impact wildlife
Your choice of clothing could affect the behavioral habits of wildlife around you, according to a study conducted by a team of researchers, including faculty at Binghamton University, State University of New York.
Lindsey Swierk, assistant research professor of biological sciences at Binghamton University, collaborated on a study with the goal of seeing how ecotourists could unintentionally have an effect on wildlife native to the area. Swierk and her team went to Costa Rica to conduct this study on water anoles (Anolis aquaticus), a variety of the anole lizard.
"I've studied water anoles for five years now, and I still find myself surprised by their unique natural history and behavior," Swierk said. "One reason water anoles were chosen is because they are restricted to a fairly small range and we could be pretty sure that these particular populations hadn't seen many humans in their lifetimes. So we had a lot of confidence that these populations were not biased by previous human interactions."
The researchers went to the Las Cruces Biological Center in Costa Rica. To collect data, they visited three different river locations wearing one of three different colored shirts: orange, green or blue. The study's focus was to see how these water anoles reacted to the different colors. Orange was chosen because the water anole has orange sexual signals. Blue was chosen as a contrast, as the water anole's body lacks the color blue. Green was selected as a similar color to the tropical forest environment of the testing site.
"Based on previous work on how animals respond to color stimuli, we developed a hypothesis that wearing colors that are 'worn' by water anoles themselves would be less frightening to these lizards," Swierk said.
The results of the study supported that hypothesis, with researchers wearing orange shirts reporting more anoles seen per hour and a higher anole capture percentage. Despite predicting the result, Swierk said that she was surprised by some of the findings. "It was still very surprising to see that the color green, which camouflaged us well in the forest, was less effective than wearing a very bright orange!"
Swierk said that one of the biggest takeaways from this study is that we may not yet quite understand how animals view the world.
"We (both researchers and ecotourists) need to recognize that animals perceive the world differently than we do as humans," said Swierk. "They have their own 'lenses' based on their unique evolutionary histories. What we imagine is frightening for an animal might not be; conversely, what we imagine is non-threatening could be terrifying in reality."
Swierk hopes that these results may be used within the ecotourism community to reduce impacts on the wildlife that they wish to view.
She said that more research needs to be done before this study can be extended to other animals with less sophisticated color vision, such as mammals.
Read more at Science Daily
US households waste nearly a third of the food they acquire
American households waste, on average, almost a third of the food they acquire, according to economists, who say this wasted food has an estimated aggregate value of $240 billion annually. Divided among the nearly 128.6 million U.S. households, this waste could be costing the average household about $1,866 per year.
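The per-household figure follows directly from dividing the aggregate estimate by the number of households:

```latex
\frac{\$240 \times 10^{9}\ \text{per year}}{128.6 \times 10^{6}\ \text{households}} \approx \$1{,}866\ \text{per household per year}
```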
This inefficiency in the food economy has implications for health, food security, food marketing and climate change, noted Edward Jaenicke, professor of agricultural economics, College of Agricultural Sciences, Penn State.
"Our findings are consistent with previous studies, which have shown that 30% to 40% of the total food supply in the United States goes uneaten -- and that means that resources used to produce the uneaten food, including land, energy, water and labor, are wasted as well," Jaenicke said. "But this study is the first to identify and analyze the level of food waste for individual households, which has been nearly impossible to estimate because comprehensive, current data on uneaten food at the household level do not exist."
The researchers overcame this limitation by borrowing methodology from the fields of production economics -- which models the production function of transforming inputs into outputs -- and nutritional science, by which a person's height, weight, gender and age can be used to calculate metabolic energy requirements to maintain body weight.
In this novel approach, Jaenicke and Yang Yu, doctoral candidate in agricultural, environmental and regional economics, analyzed data primarily from 4,000 households that participated in the U.S. Department of Agriculture's National Household Food Acquisition and Purchase Survey, known as FoodAPS. Food-acquisition data from this survey were treated as the "input."
FoodAPS also collected biological measures of participants, enabling the researchers to apply formulas from nutritional science to determine basal metabolic rates and calculate the energy required for household members to maintain body weight, which is the "output." The difference between the amount of food acquired and the amount needed to maintain body weight represents the production inefficiency in the model, which translates to uneaten, and therefore wasted, food.
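A minimal sketch of that input-versus-output logic is shown below. It uses the widely taught Mifflin-St Jeor equation for resting energy expenditure and a flat activity factor; the study's production-function estimation is considerably more sophisticated, so the household, its food acquisition and the resulting number are only illustrative.

```python
# Minimal sketch of the input-vs-output comparison (assumptions: the Mifflin-St Jeor
# equation and a flat activity factor; not the study's actual estimation procedure).
def mifflin_st_jeor(weight_kg, height_cm, age_yr, is_male):
    """Resting energy expenditure in kcal/day."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
    return base + (5 if is_male else -161)

def household_waste_share(food_acquired_kcal_per_day, members, activity_factor=1.5):
    """Share of acquired food energy beyond what members need to maintain body weight."""
    needed = sum(activity_factor * mifflin_st_jeor(w, h, a, m) for w, h, a, m in members)
    return max(0.0, 1 - needed / food_acquired_kcal_per_day)

# Hypothetical two-adult household acquiring 7,000 kcal of food per day:
members = [(80, 178, 40, True), (65, 165, 38, False)]  # (kg, cm, years, is_male)
print(f"estimated waste share: {household_waste_share(7000, members):.1%}")
```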
"Based on our estimation, the average American household wastes 31.9% of the food it acquires," Jaenicke said. "More than two-thirds of households in our study have food-waste estimates of between 20% and 50%. However, even the least wasteful household wastes 8.7% of the food it acquires."
In addition, demographic data collected as part of the survey were used to analyze the differences in food waste among households with a variety of characteristics.
For example, households with higher income generate more waste, and those with healthier diets that include more perishable fruits and vegetables also waste more food, according to the researchers, who reported their findings in the American Journal of Agricultural Economics.
"It's possible that programs encouraging healthy diets may unintentionally lead to more waste," Jaenicke said. "That may be something to think about from a policy perspective -- how can we fine-tune these programs to reduce potential waste."
Household types associated with less food waste include those with greater food insecurity -- especially those that participate in the federal SNAP food assistance program, previously known as "food stamps" -- as well as those households with a larger number of members.
"People in larger households have more meal-management options," Jaenicke explained. "More people means leftover food is more likely to be eaten."
In addition, some grocery items are sold in sizes that may influence waste, he said.
"A household of two may not eat an entire head of cauliflower, so some could be wasted, whereas a larger household is more likely to eat all of it, perhaps at a single meal."
Among other households with lower levels of waste are those who use a shopping list when visiting the supermarket and those who must travel farther to reach their primary grocery store.
"This suggests that planning and food management are factors that influence the amount of wasted food," Jaenicke said.
Beyond the economic and nutritional implications, reducing food waste could be a factor in minimizing the effects of climate change. Previous studies have shown that throughout its life cycle, discarded food is a major source of greenhouse gas emissions, the researchers pointed out.
"According to the U.N. Food and Agriculture Organization, food waste is responsible for about 3.3 gigatons of greenhouse gas annually, which would be, if regarded as a country, the third-largest emitter of carbon after the U.S. and China," Jaenicke said.
The researchers suggested that this study can help fill the need for comprehensive food-waste estimates at the household level that can be generalized to a wide range of household groups.
"While the precise measurement of food waste is important, it may be equally important to investigate further how household-specific factors influence how much food is wasted," said Jaenicke. "We hope our methodology provides a new lens through which to analyze individual household food waste."
Read more at Science Daily
Researchers regrow damaged nerves with polymer and protein
University of Pittsburgh School of Medicine researchers have created a biodegradable nerve guide -- a polymer tube -- filled with growth-promoting protein that can regenerate long sections of damaged nerves, without the need for transplanting stem cells or a donor nerve.
So far, the technology has been tested in monkeys, and the results of those experiments appeared today in Science Translational Medicine.
"We're the first to show a nerve guide without any cells was able to bridge a large, 2-inch gap between the nerve stump and its target muscle," said senior author Kacey Marra, Ph.D., professor of plastic surgery at Pitt and core faculty at the McGowan Institute for Regenerative Medicine. "Our guide was comparable to, and in some ways better than, a nerve graft."
Half of wounded American soldiers return home with injuries to their arms and legs, which aren't well protected by body armor, often resulting in damaged nerves and disability. Among civilians, car crashes, machinery accidents, cancer treatment, diabetes and even birth trauma can cause significant nerve damage, affecting more than 20 million Americans.
Peripheral nerves can regrow up to a third of an inch on their own, but if the damaged section is longer than that, the nerve can't find its target. Often, the disoriented nerve gets knotted into a painful ball called a neuroma.
The most common treatment for longer segments of nerve damage is to remove a skinny sensory nerve at the back of the leg -- which causes numbness in the leg and other complications, but has the least chance of being missed -- chop it into thirds, bundle the pieces together and then sew them to the end of the damaged motor nerve, usually in the arm. But only about 40 to 60% of the motor function typically returns.
"It's like you're replacing a piece of linguini with a bundle of angel hair pasta," Marra said. "It just doesn't work as well."
Marra's nerve guide returned about 80% of fine motor control in the thumbs of four monkeys, each with a 2-inch nerve gap in the forearm.
The guide is made of the same material as dissolvable sutures and peppered with a growth-promoting protein -- the same one delivered to the brain in a recent Parkinson's trial -- which releases slowly over the course of months.
The experiment had two controls: an empty polymer tube and a nerve graft. Since monkeys' legs are relatively short, the usual clinical procedure of removing and dicing a leg nerve wouldn't work. So, the scientists removed a 2-inch segment of nerve from the forearm, flipped it around and sewed it into place, replacing linguini with linguini, and setting a high bar for the nerve guide to match.
Functional recovery was just as good with Marra's guide as it was with this best-case-scenario graft, and the guide outperformed the graft when it came to restoring nerve conduction and replenishing Schwann cells -- the insulating layer around nerves that boosts electrical signals and supports regeneration. In both scenarios, it took a year for the nerve to regrow. The empty guide performed significantly worse all around.
With these promising results in monkeys, Marra wants to bring her nerve guide to human patients. She's working with the Food and Drug Administration (FDA) on a first-in-human clinical trial and spinning out a startup company, AxoMax Technologies Inc.
Read more at Science Daily
Jan 23, 2020
How moon jellyfish get about
With their translucent bells, moon jellyfish (Aurelia aurita) move around the oceans in a very efficient way. Scientists at the University of Bonn have now used a mathematical model to investigate how these cnidarians manage to use their neural networks to control their locomotion even when they are injured. The results may also contribute to the optimization of underwater robots. The study has already been published online in the journal eLife; the final version will appear soon.
Moon jellyfish (Aurelia aurita) are common in almost all oceans. The cnidarians move about in the oceans with their translucent bells, which measure from three to 30 centimeters. "These jellyfish have ring-shaped muscles that contract, thereby pushing the water out of the bell," explains lead author Fabian Pallasdies from the Neural Network Dynamics and Computation research group at the Institute of Genetics at the University of Bonn.
Moon jellyfish are particularly efficient when it comes to getting around: They create vortices at the edge of their bell, which increase propulsion. Pallasdies: "Furthermore, only the contraction of the bell requires muscle power; the expansion happens automatically because the tissue is elastic and returns to its original shape."
Jellyfish for research into the origins of the nervous system
The scientists of the research group have now developed a mathematical model of the neural networks of moon jellyfish and used this to investigate how these networks regulate the movement of the animals. "Jellyfish are among the oldest and simplest organisms that move around in water," says the head of the research group, Prof. Dr. Raoul-Martin Memmesheimer. Using these and other early organisms, the team now wants to investigate the origins of the nervous system.
Extensive experimental neurophysiological data on jellyfish were obtained especially in the 1950s and 1980s, providing the researchers at the University of Bonn with a basis for their mathematical model. In several steps, they considered individual nerve cells, nerve cell networks, the entire animal and the surrounding water. "The model can be used to answer the question of how the excitation of individual nerve cells results in the movement of the moon jellyfish," says Pallasdies.
The jellyfish can perceive their position with light stimuli and with a balance organ. If a moon jellyfish is turned by the ocean current, the animal compensates for this and moves further to the water surface, for example. With their model, the researchers were able to confirm the assumption that the jellyfish uses one neural network for swimming straight ahead and two for rotational movements.
Wave-shaped propagation of the excitation
The activity of the nerve cells spreads in the jellyfish's bell in a wave-like pattern. As experiments from the 19th century already show, the locomotion even works when large parts of the bell are injured. Scientists at the University of Bonn are now able to explain this phenomenon with their simulations: "Jellyfish can pick up and transmit signals on their bell at any point," says Pallasdies. When one nerve cell fires, the others fire as well, even if sections of the bell are impaired.
However, the wave-like propagation of the excitation in the jellyfish's bell would be disrupted if the nerve cells fired randomly. As the researchers have now discovered on the basis of their model, this is prevented by a refractory period: after firing, the nerve cells cannot become active again right away.
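The published model couples detailed neuron models to the mechanics of the bell and the surrounding water, but the two qualitative points above -- a wave of activity started anywhere still sweeps across the intact tissue around an injury, and the refractory period keeps it from re-exciting cells behind it -- can be reproduced with a toy excitable ring network. Everything in this sketch (cell count, refractory length, the "damaged" stretch) is an assumption for illustration, not the authors' model.

```python
# Toy excitable ring network (a very simplified stand-in for the jellyfish model):
# cells fire, excite their neighbors, then sit out a refractory period so the
# activity wave travels around the "bell" instead of echoing back.
import numpy as np

N, REFRACTORY, STEPS = 40, 5, 30
damaged = set(range(10, 15))          # an "injured" stretch of the bell: these cells never fire

state = np.zeros(N, dtype=int)        # 0 = resting, 1 = firing, >1 = refractory countdown
state[0] = 1                          # stimulate one cell anywhere on the ring

for t in range(STEPS):
    nxt = state.copy()
    for i in range(N):
        if state[i] == 1:
            nxt[i] = REFRACTORY + 1   # enter the refractory period after firing
            for j in ((i - 1) % N, (i + 1) % N):
                if state[j] == 0 and j not in damaged:
                    nxt[j] = 1        # resting neighbors are recruited into the wave
        elif state[i] > 1:
            nxt[i] = state[i] - 1     # count down toward resting
    state = nxt
    print("".join("*" if s == 1 else ("." if s == 0 else "-") for s in state))
# The printout shows the wave reaching all intact cells despite the damaged stretch,
# while the refractory trail ("-") prevents it from bouncing back on itself.
```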
Read more at Science Daily
Inner complexity of Saturn moon, Enceladus, revealed
A Southwest Research Institute team developed a new geochemical model that reveals that carbon dioxide (CO2) from within Enceladus, an ocean-harboring moon of Saturn, may be controlled by chemical reactions at its seafloor. Studying the plume of gases and frozen sea spray released through cracks in the moon's icy surface suggests an interior more complex than previously thought.
"By understanding the composition of the plume, we can learn about what the ocean is like, how it got to be this way and whether it provides environments where life as we know it could survive," said SwRI's Dr. Christopher Glein, lead author of a paper in Geophysical Research Letters outlining the research. "We came up with a new technique for analyzing the plume composition to estimate the concentration of dissolved CO2 in the ocean. This enabled modeling to probe deeper interior processes."
Analysis of mass spectrometry data from NASA's Cassini spacecraft indicates that the abundance of CO2 is best explained by geochemical reactions between the moon's rocky core and liquid water from its subsurface ocean. Integrating this information with previous discoveries of silica and molecular hydrogen (H2) points to a more complex, geochemically diverse core.
"Based on our findings, Enceladus appears to demonstrate a massive carbon sequestration experiment," Glein said. "On Earth, climate scientists are exploring whether a similar process can be utilized to mitigate industrial emissions of CO2. Using two different data sets, we derived CO2 concentration ranges that are intriguingly similar to what would be expected from the dissolution and formation of certain mixtures of silicon- and carbon-bearing minerals at the seafloor."
Another phenomenon that contributes to this complexity is the likely presence of hydrothermal vents inside Enceladus. At Earth's ocean floor, hydrothermal vents emit hot, energy-rich, mineral-laden fluids that allow unique ecosystems teeming with unusual creatures to thrive.
"The dynamic interface of a complex core and seawater could potentially create energy sources that might support life," said SwRI's Dr. Hunter Waite, principal investigator of Cassini's Ion Neutral Mass Spectrometer (INMS). "While we have not found evidence of the presence of microbial life in the ocean of Enceladus, the growing evidence for chemical disequilibrium offers a tantalizing hint that habitable conditions could exist beneath the moon's icy crust."
The scientific community continues reaping the benefits of Cassini's close flyby of Enceladus on Oct. 28, 2015, prior to the end of the mission. INMS detected H2 as the spacecraft flew through the plume, and a different instrument had earlier detected tiny particles of silica, two chemicals that are considered to be markers for hydrothermal processes.
"Distinct sources of observed CO2, silica and H2 imply mineralogically and thermally diverse environments in a heterogeneous rocky core," Glein said. "We suggest that the core is composed of a carbonated upper layer and a serpentinized interior." Carbonates commonly occur as sedimentary rocks such as limestone on Earth, while serpentine minerals are formed from igneous seafloor rocks that are rich in magnesium and iron.
It is proposed that hydrothermal oxidation of reduced iron deep in the core creates H2, while hydrothermal activity intersecting quartz-bearing carbonated rocks produces silica-rich fluids. Such rocks also have potential to influence the CO2 chemistry of the ocean via low-temperature reactions involving silicates and carbonates at the seafloor.
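Representative textbook reactions of the kind invoked here -- carbonate minerals buffering dissolved CO2 at the seafloor, and hydrothermal oxidation of ferrous iron (shown for the olivine end-member fayalite) releasing H2 -- look like the following. These are standard chemistry, not the specific equilibria modeled in the paper.

```latex
\mathrm{CaCO_3(s) + CO_2(aq) + H_2O \;\rightleftharpoons\; Ca^{2+} + 2\,HCO_3^{-}}
\qquad
\mathrm{3\,Fe_2SiO_4 + 2\,H_2O \;\rightarrow\; 2\,Fe_3O_4 + 3\,SiO_2 + 2\,H_2}
```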
Read more at Science Daily
"By understanding the composition of the plume, we can learn about what the ocean is like, how it got to be this way and whether it provides environments where life as we know it could survive," said SwRI's Dr. Christopher Glein, lead author of a paper in Geophysical Research Letters outlining the research. "We came up with a new technique for analyzing the plume composition to estimate the concentration of dissolved CO2 in the ocean. This enabled modeling to probe deeper interior processes."
Analysis of mass spectrometry data from NASA's Cassini spacecraft indicates that the abundance of CO2 is best explained by geochemical reactions between the moon's rocky core and liquid water from its subsurface ocean. Integrating this information with previous discoveries of silica and molecular hydrogen (H2) points to a more complex, geochemically diverse core.
"Based on our findings, Enceladus appears to demonstrate a massive carbon sequestration experiment," Glein said. "On Earth, climate scientists are exploring whether a similar process can be utilized to mitigate industrial emissions of CO2. Using two different data sets, we derived CO2 concentration ranges that are intriguingly similar to what would be expected from the dissolution and formation of certain mixtures of silicon- and carbon-bearing minerals at the seafloor."
Another phenomenon that contributes to this complexity is the likely presence of hydrothermal vents inside Enceladus. At Earth's ocean floor, hydrothermal vents emit hot, energy-rich, mineral-laden fluids that allow unique ecosystems teeming with unusual creatures to thrive.
"The dynamic interface of a complex core and seawater could potentially create energy sources that might support life," said SwRI's Dr. Hunter Waite, principal investigator of Cassini's Ion Neutral Mass Spectrometer (INMS). "While we have not found evidence of the presence of microbial life in the ocean of Enceladus, the growing evidence for chemical disequilibrium offers a tantalizing hint that habitable conditions could exist beneath the moon's icy crust."
The scientific community continues reaping the benefits of Cassini's close flyby of Enceladus on Oct. 28, 2015, prior to the end of the mission. INMS detected H2 as the spacecraft flew through the plume, and a different instrument had earlier detected tiny particles of silica, two chemicals that are considered to be markers for hydrothermal processes.
"Distinct sources of observed CO2, silica and H2 imply mineralogically and thermally diverse environments in a heterogeneous rocky core," Glein said. "We suggest that the core is composed of a carbonated upper layer and a serpentinized interior." Carbonates commonly occur as sedimentary rocks such as limestone on Earth, while serpentine minerals are formed from igneous seafloor rocks that are rich in magnesium and iron.
It is proposed that hydrothermal oxidation of reduced iron deep in the core creates H2, while hydrothermal activity intersecting quartz-bearing carbonated rocks produces silica-rich fluids. Such rocks also have potential to influence the CO2 chemistry of the ocean via low-temperature reactions involving silicates and carbonates at the seafloor.
Read more at Science Daily
Mars' water was mineral-rich and salty
Presently, Earth is the only known location where life exists in the Universe. This year the Nobel Prize in physics was awarded to three astronomers who proved, almost 20 years ago, that planets are common around stars beyond the solar system. Life comes in various forms, from cell-phone-toting organisms like humans to the ubiquitous micro-organisms that inhabit almost every square inch of the planet Earth, affecting almost everything that happens on it. It will likely be some time before it is possible to measure or detect life beyond the solar system, but the solar system offers a host of sites where we might get a handle on how hard it is for life to start.
Mars is at the top of this list for two reasons. First, it is relatively close to Earth compared with the moons of Saturn and Jupiter (which are also considered good candidates for discovering life beyond Earth in the solar system, and are targeted for exploration in the coming decade). Second, Mars is exceptionally observable because, unlike Venus, it lacks a thick atmosphere, and there is fairly good evidence that Mars' surface temperature and pressure hover around the point at which liquid water -- considered essential for life -- can exist. Further, there is good evidence, in the form of observable river deltas and more recent measurements made on Mars' surface, that liquid water did in fact flow on Mars billions of years ago.
Scientists are becoming increasingly convinced that, billions of years ago, Mars was habitable. Whether it was in fact inhabited, or is still inhabited, remains hotly debated. To better constrain these questions, scientists are trying to understand the kinds of water chemistry that could have generated the minerals observed on Mars today, which were produced billions of years ago.
Salinity (how much salt was present), pH (a measure of how acidic the water was), and redox state (roughly a measure of the abundance of gases such as hydrogen [H2, which are termed reducing environments] or oxygen [O2, which are termed oxidising environments; the two types are generally mutually incompatible]) are fundamental properties of natural waters. As an example, Earth's modern atmosphere is highly oxygenated (containing large amounts of O2), but one need only dig a few inches into the bottom of a beach or lake today on Earth to find environments which are highly reduced.
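For readers who want the quantities pinned down, the standard definitions below are textbook chemistry, not anything specific to the Mars work: pH is the negative logarithm of hydrogen-ion activity, and redox state is commonly quantified through an electrode potential via the Nernst equation.

```latex
\mathrm{pH} = -\log_{10} a_{\mathrm{H^+}},
\qquad
E_h = E^{\circ} - \frac{RT}{nF}\ln Q \quad \text{(Nernst equation)}
```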
Read more at Science Daily
Sea level rise could reshape the United States, trigger migration inland
Water coming over a road in Kemah, Texas, during Hurricane Harvey.
The study, published in PLOS ONE, Jan. 22, is the first to use machine learning to project migration patterns resulting from sea-level rise. The researchers found the impact of rising oceans will ripple across the country, beyond coastal areas at risk of flooding, as affected people move inland.
In the US alone, 13 million people could be forced to relocate due to rising sea levels by 2100. As a result, cities throughout the country will grapple with new populations. Effects could include more competition for jobs, increased housing prices, and more pressure on infrastructure networks.
"Sea level rise will affect every county in the US, including inland areas," said Dilkina, the study's corresponding author, a WiSE Gabilan Assistant Professor in computer science at USC and associate director of USC's Center for AI for Society.
"We hope this research will empower urban planners and local decision-makers to prepare to accept populations displaced by sea-level rise. Our findings indicate that everybody should care about sea-level rise, whether they live on the coast or not. This is a global impact issue."
According to the research team, the most popular relocation choices will include land-locked cities such as Atlanta, Houston, Dallas, Denver and Las Vegas. The model also predicts that suburban and rural areas in the Midwest will experience a disproportionately large influx of people relative to their smaller local populations.
Predicting relocation areas
Sea-level rise is caused primarily by two factors related to global warming: added water from melting ice sheets and glaciers, and the expansion of seawater as it warms. Within just a few decades, hundreds of thousands of homes on the US coast will be flooded. In fact, by the end of the century, 6 feet of ocean-level rise would redraw the coastline of southern Florida, parts of North Carolina and Virginia, and most of Boston and New Orleans.
To predict the trajectory of sea-level rise migration, the researchers took existing projections of rising sea levels and combined these with population projections. Based on migration patterns after Hurricane Katrina and Hurricane Rita, the team trained machine learning models -- a subset of artificial intelligence -- to predict where people would relocate.
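A minimal sketch of that kind of pipeline is shown below. It is not the authors' code: the feature names, the synthetic training data and the choice of a random-forest regressor are all assumptions made purely for illustration of the general approach of learning destination counts from past disaster migration and then applying the model to future scenarios.

```python
# Sketch only: learn where displaced people go from past disaster migration,
# then apply the model to sea-level-rise scenarios. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pairs = 5_000  # hypothetical origin-destination county pairs

# Hypothetical features: distance, destination population, destination
# median income, and number of people displaced at the origin.
X = np.column_stack([
    rng.uniform(10, 2000, n_pairs),    # distance_km
    rng.uniform(1e4, 5e6, n_pairs),    # dest_population
    rng.uniform(3e4, 1.2e5, n_pairs),  # dest_median_income
    rng.uniform(0, 1e5, n_pairs),      # displaced_at_origin
])
# Hypothetical target: migrants observed on each pair after a past hurricane.
y = np.clip(0.02 * X[:, 3] * np.exp(-X[:, 0] / 500)
            + rng.normal(0, 50, n_pairs), 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out R^2 on synthetic data: {model.score(X_test, y_test):.2f}")
# In a real application, the trained model would then be fed origin-destination
# features built from sea-level-rise and population projections for 2100.
```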
"We talk about rising sea levels, but the effects go much further than those directly affected on the coasts," said Caleb Robinson, a visiting doctoral researcher from Georgia Tech advised by Dilkina and the study's first author. "We wanted to look not only at who would be displaced, but also where they would go." Dilkina and Robinson worked with co-author Juan Moreno Cruz, an economist and professor at the University of Waterloo.
As expected, the researchers found the greatest effects of sea-level rise migration will be felt by inland areas immediately adjacent to the coast, as well as urban areas in the southeast US. But their model also showed more incoming migrants to Houston and Dallas than previous studies, which flagged Austin as the top destination for climate migrants from the southeastern coast.
This result, the researchers note, shows that population movement under climate change will not necessarily follow previously established patterns. In other words: it is not business as usual.
Sea-level rise could also reroute people relocating from unaffected areas. Counties surrounding Los Angeles, in particular, could see tens of thousands of migrants whose preferred coastal destinations are now flooded choosing alternative destinations.
The results of this study could help city planners and policymakers plan to expand critical infrastructure, from roads to medical services, to ensure the influx of people has a positive impact on local economies and social well-being.
"When migration occurs naturally, it is a great engine for economic activity and growth," said co-author Juan Moreno Cruz, an economist and professor at the University of Waterloo.
Read more at Science Daily
Jan 22, 2020
Insect bites and warmer climate means double-trouble for plants
Caterpillars on a leaf.
Current models predict that crop losses to hungrier insect pests will grow as the climate warms, but Michigan State University scientists think those models are incomplete and that we may be underestimating the losses. A new study shows that infested tomato plants, in their efforts to fight off caterpillars, don't adapt well to rising temperatures. This double-edged sword worsens their productivity.
According to the study, two factors are at play. The first is rising temperatures. Insect metabolism speeds up with heat and they eat more. Also, warmer temperatures could open up a wider range of hospitable habitats to insects.
Second, and this is what current models ignore, is how the infested plants react to the heat.
"We know that there are constraints that prevent plants from dealing with two stresses simultaneously," said Gregg Howe, University Distinguished Professor at the MSU-DOE Plant Research Laboratory. "In this case, little is known about how plants cope with increased temperature and insect attack at the same time, so we wanted to try and fill that gap."
Plants have systems to deal with different threats. Caterpillar attack? There is a system for that. When a caterpillar takes a bite out of a leaf, the plant produces a hormone called jasmonate, or JA. JA tells the plant to quickly produce defense compounds to thwart the caterpillar.
Temperatures too hot? Overheated crops have another bag of tricks to cool themselves down. Obviously, they can't make a run for the inviting shade under a tree. They lift their leaves away from the hot soil. They also "sweat" by opening their stomata -- similar to skin pores -- so that water can evaporate to cool the leaves.
Nathan Havko, a postdoctoral researcher in the Howe lab, had a breakthrough when he grew tomato plants in hot growth chambers, which are kept at 38 degrees Celsius. He also let hungry caterpillars loose on them.
"I was shocked when I opened the doors to the growth chamber where the two sets of plants were growing at 'normal' and 'high' temperatures," Howe said. "The caterpillars in the warmer space were much bigger; they had almost wiped the plant out."
"When temperatures are higher, a wounded tomato plant cranks out even more JA, leading to a stronger defense response," Havko said. "Somehow, that does not deter the caterpillars. Moreover, we found that JA blocks the plant's ability to cool itself down, it can't lift its leaves or sweat."
Perhaps the plants close their pores to stop losing water from the wounded sites, but they end up suffering the equivalent of a heat stroke. It's even possible that the caterpillars are crafty and do extra damage to keep the leaf pores closed and leaf temperatures elevated, which speeds up the insects' growth and development.
And, there are consequences.
"We see photosynthesis, which is how crops produce biomass, is strongly impaired in these plants," Havko said. "The resources to produce biomass are there, but somehow they aren't used properly and crop productivity decreases."
There are many open questions to be resolved but, as of right now, the study suggests that when global temperatures rise, plants might have too many balls to juggle.
Read more at Science Daily
Solving a biological puzzle: How stress causes gray hair
Graying hair.
For a long time, anecdotes have connected stressful experiences with the phenomenon of hair graying. Now, for the first time, Harvard University scientists have discovered exactly how the process plays out: stress activates nerves that are part of the fight-or-flight response, which in turn cause permanent damage to pigment-regenerating stem cells in hair follicles.
The study, published in Nature, advances scientists' knowledge of how stress can impact the body.
"Everyone has an anecdote to share about how stress affects their body, particularly in their skin and hair -- the only tissues we can see from the outside," said senior author Ya-Chieh Hsu, the Alvin and Esta Star Associate Professor of Stem Cell and Regenerative Biology at Harvard. "We wanted to understand if this connection is true, and if so, how stress leads to changes in diverse tissues. Hair pigmentation is such an accessible and tractable system to start with -- and besides, we were genuinely curious to see if stress indeed leads to hair graying. "
Narrowing down the culprit
Because stress affects the whole body, researchers first had to narrow down which body system was responsible for connecting stress to hair color. The team first hypothesized that stress causes an immune attack on pigment-producing cells. However, when mice lacking immune cells still showed hair graying, researchers turned to the hormone cortisol. But once more, it was a dead end.
"Stress always elevates levels of the hormone cortisol in the body, so we thought that cortisol might play a role," Hsu said. "But surprisingly, when we removed the adrenal gland from the mice so that they couldn't produce cortisol-like hormones, their hair still turned gray under stress."
After systematically eliminating different possibilities, researchers homed in on the sympathetic nervous system, which is responsible for the body's fight-or-flight response.
Sympathetic nerves branch out into each hair follicle on the skin. The researchers found that stress causes these nerves to release the chemical norepinephrine, which gets taken up by nearby pigment-regenerating stem cells.
Permanent damage
In the hair follicle, certain stem cells act as a reservoir of pigment-producing cells. When hair regenerates, some of the stem cells convert into pigment-producing cells that color the hair.
Researchers found that the norepinephrine from sympathetic nerves causes the stem cells to activate excessively. The stem cells all convert into pigment-producing cells, prematurely depleting the reservoir.
"When we started to study this, I expected that stress was bad for the body -- but the detrimental impact of stress that we discovered was beyond what I imagined," Hsu said. "After just a few days, all of the pigment-regenerating stem cells were lost. Once they're gone, you can't regenerate pigment anymore. The damage is permanent."
The finding underscores the negative side effects of an otherwise protective evolutionary response, the researchers said.
"Acute stress, particularly the fight-or-flight response, has been traditionally viewed to be beneficial for an animal's survival. But in this case, acute stress causes permanent depletion of stem cells," said postdoctoral fellow Bing Zhang, the lead author of the study.
Answering a fundamental question
To connect stress with hair graying, the researchers started with a whole-body response and progressively zoomed into individual organ systems, cell-to-cell interaction and, eventually, all the way down to molecular dynamics. The process required a variety of research tools along the way, including methods to manipulate organs, nerves, and cell receptors.
"To go from the highest level to the smallest detail, we collaborated with many scientists across a wide range of disciplines, using a combination of different approaches to solve a very fundamental biological question," Zhang said.
The collaborators included Isaac Chiu, an assistant professor of immunology at Harvard Medical School, who studies the interplay between the nervous and immune systems.
"We know that peripheral neurons powerfully regulate organ function, blood vessels, and immunity, but less is known about how they regulate stem cells," Chiu said.
"With this study, we now know that neurons can control stem cells and their function, and can explain how they interact at the cellular and molecular level to link stress with hair graying."
The findings can help illuminate the broader effects of stress on various organs and tissues. This understanding will pave the way for new studies that seek to modify or block the damaging effects of stress.
Read more at Science Daily
Earth's oldest asteroid strike linked to 'big thaw'
Illustration of an asteroid hitting Earth.
The research, published in the leading journal Nature Communications, used isotopic analysis of minerals to calculate the precise age of the Yarrabubba crater for the first time, putting it at 2.229 billion years old -- making it 200 million years older than the next oldest impact.
Lead author Dr Timmons Erickson, from Curtin's School of Earth and Planetary Sciences and NASA's Johnson Space Center, together with a team including Professor Chris Kirkland, Associate Professor Nicholas Timms and Senior Research Fellow Dr Aaron Cavosie, all from Curtin's School of Earth and Planetary Sciences, analysed zircon and monazite minerals at the base of the eroded crater that were 'shock recrystallized' by the asteroid strike, in order to determine the exact age of Yarrabubba.
The team inferred that the impact may have occurred into an ice-covered landscape, vaporised a large volume of ice into the atmosphere, and produced a 70km diameter crater in the rocks beneath.
Professor Kirkland said the timing raised the possibility that the Earth's oldest asteroid impact may have helped lift the planet out of a deep freeze.
"Yarrabubba, which sits between Sandstone and Meekatharra in central WA, had been recognised as an impact structure for many years, but its age wasn't well determined," Professor Kirkland said.
"Now we know the Yarrabubba crater was made right at the end of what's commonly referred to as the early Snowball Earth -- a time when the atmosphere and oceans were evolving and becoming more oxygenated and when rocks deposited on many continents recorded glacial conditions."
Associate Professor Nicholas Timms noted the precise coincidence between the Yarrabubba impact and the disappearance of glacial deposits.
"The age of the Yarrabubba impact matches the demise of a series of ancient glaciations. After the impact, glacial deposits are absent in the rock record for 400 million years. This twist of fate suggests that the large meteorite impact may have influenced global climate," Associate Professor Timms said.
"Numerical modelling further supports the connection between the effects of large impacts into ice and global climate change. Calculations indicated that an impact into an ice-covered continent could have sent half a trillion tons of water vapour -- an important greenhouse gas -- into the atmosphere. This finding raises the question whether this impact may have tipped the scales enough to end glacial conditions."
Dr Aaron Cavosie said the Yarrabubba study may have significant implications for future impact crater discoveries.
Read more at Science Daily
Emissions of potent greenhouse gas have grown, contradicting reports of huge reductions
View of Earth's atmosphere.
Over the last two decades, scientists have been keeping a close eye on the atmospheric concentration of a hydrofluorocarbon (HFC) gas, known as HFC-23.
This gas has very few industrial applications. However, levels have been soaring because it is vented to the atmosphere during the production of another chemical widely used in cooling systems in developing countries.
Scientists are concerned, because HFC-23 is a very potent greenhouse gas, with one tonne of its emissions being equivalent to the release of more than 12,000 tonnes of carbon dioxide.
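That "equivalent" figure is simply the emission multiplied by the gas's global warming potential, and a quick back-of-the-envelope conversion makes the scale concrete. In the sketch below, only the factor of roughly 12,000 comes from the article; the annual emission figure is a hypothetical placeholder.

```python
# Convert HFC-23 emissions to CO2-equivalent using the ~12,000x factor quoted
# above. The 15,900-tonne input is a made-up illustration, not a study result.
GWP_HFC23 = 12_000  # tonnes CO2-equivalent per tonne of HFC-23


def co2_equivalent(hfc23_tonnes: float) -> float:
    """Return CO2-equivalent emissions in tonnes."""
    return hfc23_tonnes * GWP_HFC23


annual_hfc23 = 15_900  # hypothetical annual emissions, tonnes
print(f"{annual_hfc23} t of HFC-23 ~ {co2_equivalent(annual_hfc23) / 1e6:.0f} Mt CO2-eq")
# -> roughly 191 Mt CO2-eq, on the order of a mid-sized country's annual CO2 output
```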
Starting in 2015, India and China, thought to be the main emitters of HFC-23, announced ambitious plans to abate emissions in factories that produce the gas. As a result, they reported that they had almost completely eliminated HFC-23 emissions by 2017.
In response to these measures, scientists were expecting to see global emissions drop by almost 90 percent between 2015 and 2017, which should have seen growth in atmospheric levels grind to a halt.
Now, an international team of researchers has shown, in a paper published today in the journal Nature Communications, that atmospheric concentrations were rising at an all-time record rate by 2018.
Dr Matt Rigby, who co-authored the study, is a Reader in Atmospheric Chemistry at the University of Bristol and a member of the Advanced Global Atmospheric Gases Experiment (AGAGE), which measures the concentration of greenhouse gases around the world. He said: "When we saw the reports of enormous emissions reductions from India and China, we were excited to take a close look at the atmospheric data.
"This potent greenhouse gas has been growing rapidly in the atmosphere for decades now, and these reports suggested that the rise should have almost completely stopped in the space of two or three years. This would have been a big win for climate."
The fact that this reduction has not materialised, and that, instead, global emissions have actually risen, is a puzzle, and one that may have implications for the Montreal Protocol, the international treaty that was designed to protect the stratospheric ozone layer.
In 2016, Parties to the Montreal Protocol signed the Kigali Amendment, aiming to reduce the climate impact of HFCs, whose emissions have grown in response to their use as replacements to ozone depleting substances.
Dr Kieran Stanley, the lead author of the study, visiting research fellow in the University of Bristol's School of Chemistry and a post-doctoral researcher at the Goethe University Frankfurt, added: "To be compliant with the Kigali Amendment to the Montreal Protocol, countries who have ratified the agreement are required to destroy HFC-23 as far as possible.
"Although China and India are not yet bound by the Amendment, their reported abatement would have put them on course to be consistent with Kigali. However, it looks like there is still work to do.
"Our study finds that it is very likely that China has not been as successful in reducing HFC-23 emissions as reported. However, without additional measurements, we can't be sure whether India has been able to implement its abatement programme."
Had these HFC-23 emissions reductions been as large as reported, the researchers estimate that the equivalent of a whole year of Spain's CO2 emissions could have been avoided between 2015 and 2017.
Dr Rigby added: "The magnitude of the CO2-equivalent emissions shows just how potent this greenhouse gas is.
"We now hope to work with other international groups to better quantify India and China's individual emissions using regional, rather than global, data and models."
Dr Stanley added: "This is not the first time that HFC-23 reduction measures attracted controversy.
"Previous studies found that HFC-23 emissions declined between 2005 and 2010, as developed countries funded abatement in developing countries through the purchase of credits under the United Nations Framework Convention on Climate Change Clean Development Mechanism.
Read more at Science Daily
Platypus on brink of extinction
A platypus.
Platypuses were once considered widespread across the eastern Australian mainland and Tasmania, although not a lot is known about their distribution or abundance because of the species' secretive and nocturnal nature.
A new study led by UNSW Sydney's Centre for Ecosystem Science, funded through a UNSW-led Australian Research Council project and supported by the Taronga Conservation Society, has for the first time examined the risks of extinction for this intriguing animal.
Published in the international scientific journal Biological Conservation this month, the study examined the potentially devastating combination of threats to platypus populations, including water resource development, land clearing, climate change and increasingly severe periods of drought.
Lead author Dr Gilad Bino, a researcher at the UNSW Centre for Ecosystem Science, said action must be taken now to prevent the platypus from disappearing from our waterways.
"There is an urgent need for a national risk assessment for the platypus to assess its conservation status, evaluate risks and impacts, and prioritise management in order to minimise any risk of extinction," Dr Bino said.
Alarmingly, the study estimated that under current climate conditions, and due to land clearing and fragmentation by dams, platypus numbers have almost halved, leading to the extinction of local populations across about 40 per cent of the species' range and reflecting ongoing declines since European colonisation.
Under predicted climate change, the losses forecast were far greater because of increases in extreme drought frequencies and duration, such as the current dry spell.
Dr Bino added: "These dangers further expose the platypus to even worse local extinctions with no capacity to repopulate areas."
Documented declines and local extinctions of the platypus show a species facing considerable risks, while the International Union for Conservation of Nature (IUCN) recently downgraded the platypus' conservation status to "Near Threatened."
But the platypus remains unlisted in most jurisdictions in Australia -- except South Australia, where it is endangered.
Director of the UNSW Centre for Ecosystem Science and study co-author Professor Richard Kingsford said it was unfortunate that platypuses lived in areas undergoing extensive human development that threatened their lives and long-term viability.
"These include dams that stop their movements, agriculture which can destroy their burrows, fishing gear and yabby traps which can drown them and invasive foxes which can kill them," Prof Kingsford said.
Study co-author Professor Brendan Wintle at The University of Melbourne said it was important that preventative measures were taken now.
"Even for a presumed 'safe' species such as the platypus, mitigating or even stopping threats, such as new dams, is likely to be more effective than waiting for the risk of extinction to increase and possible failure," Prof Wintle said.
"We should learn from the peril facing the koala to understand what happens when we ignore the warning signs."
Dr Bino said the researchers' paper added to the increasing body of evidence which showed that the platypus, like many other native Australian species, was on the path to extinction.
"There is an urgent need to implement national conservation efforts for this unique mammal and other species by increasing monitoring, tracking trends, mitigating threats, and protecting and improving management of freshwater habitats," Dr Bino said.
Read more at Science Daily
Jan 21, 2020
Walking sharks discovered in the tropics
Four new species of tropical sharks that use their fins to walk are causing a stir in waters off northern Australia and New Guinea.
While that might strike fear into the hearts of some people, University of Queensland researchers say the only creatures with cause to worry are small fish and invertebrates.
The walking sharks were discovered during a 12-year study with Conservation International, the CSIRO, Florida Museum of Natural History, the Indonesian Institute of Sciences and Indonesian Ministry of Marine Affairs and Fisheries.
UQ's Dr Christine Dudgeon said the ornately patterned sharks were the top predator on reefs during low tides when they used their fins to walk in very shallow water.
"At less than a metre long on average, walking sharks present no threat to people but their ability to withstand low oxygen environments and walk on their fins gives them a remarkable edge over their prey of small crustaceans and molluscs," Dr Dudgeon said.
"These unique features are not shared with their closest relatives the bamboo sharks or more distant relatives in the carpet shark order including wobbegongs and whale sharks.
The four new species almost doubled the total number of known walking sharks to nine.
Dr Dudgeon said they live in coastal waters around northern Australia and the island of New Guinea, and occupy their own separate region.
"We estimated the connection between the species based on comparisons between their mitochondrial DNA which is passed down through the maternal lineage. This DNA codes for the mitochondria which are the parts of cells that transform oxygen and nutrients from food into energy for cells," Dr Dudgeon said.
"Data suggests the new species evolved after the sharks moved away from their original population, became genetically isolated in new areas and developed into new species," she said.
"They may have moved by swimming or walking on their fins, but it's also possible they 'hitched' a ride on reefs moving westward across the top of New Guinea, about two million years ago.
"We believe there are more walking shark species still waiting to be discovered."
Read more at Science Daily
Feeding the world without wrecking the planet is possible
"When looking at the status of planet Earth and the influence of current global agriculture practices upon it, there's a lot of reason to worry, but also reason for hope -- if we see decisive actions very soon," Dieter Gerten says, lead author from PIK and professor at Humboldt University of Berlin. "Currently, almost half of global food production relies on crossing Earth's environmental boundaries. We appropriate too much land for crops and livestock, fertilize too heavily and irrigate too extensively. To solve this issue in the face of a still growing world population, we collectively need to rethink how to produce food. Excitingly, our research shows that such transformations will make it possible to provide enough food for up to 10 billion people."
The researchers asked how many people could be fed while keeping a strict standard of environmental sustainability worldwide. These environmental capacities are defined in terms of a set of planetary boundaries -- scientifically defined limits on the maximum allowed human interference with processes that regulate the state of the planet. The present study accounts for the four of the nine boundaries that are most relevant to agriculture: biosphere integrity (keeping biodiversity and ecosystems intact), land-system change, freshwater use and nitrogen flows. Based on a sophisticated simulation model, the impacts of food production on these boundaries are scrutinised at a level of spatial and process detail never accomplished before, and then aggregated to the entire planet. This analysis demonstrates where, and by how much, boundaries are being violated by current food production, and in which ways this could be reversed by adopting more sustainable forms of agriculture.
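The study's own simulation model is far more detailed, but the bookkeeping step it describes, checking each region's agricultural pressure against each boundary and counting violations, can be illustrated with a toy calculation. The regions, limits and pressure values below are invented placeholders, not results from the paper.

```python
# Toy illustration of checking regional agricultural pressures against
# planetary-boundary-style limits. All numbers are invented placeholders.
from typing import Dict

# Normalised limits for the four boundaries considered in the study
# (1.0 = the maximum allowed interference in this toy scheme).
BOUNDARIES = {"biosphere_integrity": 1.0, "land_system_change": 1.0,
              "freshwater_use": 1.0, "nitrogen_flows": 1.0}

# Hypothetical regional pressures relative to each limit (>1 means exceeded).
regional_pressure: Dict[str, Dict[str, float]] = {
    "region_A": {"biosphere_integrity": 1.4, "land_system_change": 0.8,
                 "freshwater_use": 1.9, "nitrogen_flows": 2.3},
    "region_B": {"biosphere_integrity": 0.6, "land_system_change": 0.5,
                 "freshwater_use": 0.4, "nitrogen_flows": 0.7},
}

for region, pressures in regional_pressure.items():
    violated = [b for b, limit in BOUNDARIES.items() if pressures[b] > limit]
    print(f"{region}: {len(violated)} of {len(BOUNDARIES)} boundaries exceeded -> {violated}")
```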
Globally differentiated picture: In some regions, less would be more
The encouraging result is that, in theory, 10 billion people can be fed without compromising the Earth system. This leads to very interesting conclusions, as Johan Rockström, director of PIK points out: "We find that currently, agriculture in many regions is using too much water, land, or fertilizer. Production in these regions thus needs to be brought into line with environmental sustainability. Yet, there are huge opportunities to sustainably increase agricultural production in these and other regions. This goes for large parts of Sub-Saharan Africa, for example, where more efficient water and nutrient management could strongly improve yields."
As a positive side effect, sustainable agriculture can increase overall climate resilience while also limiting global warming. In other places, however, farming is so far off local and Earth's boundaries that even more sustainable systems could not completely balance the pressure on the environment, such as in parts of the Middle East, Indonesia, and to some extent in Central Europe. Even after recalibrating agricultural production, international trade will remain a key element of a sustainably fed world.
Hard to chew: Dietary changes needed
Importantly, there is the consumers' end, too. Large-scale dietary shifts seem to be inevitable for turning the tide to a sustainable food system. For example, regarding China's currently rising meat consumption, parts of animal proteins would need to be substituted by more legumes and other vegetables. "Changes like this might seem hard to chew at first. But in the long run, dietary changes towards a more sustainable mix on your plate will not only benefit the planet, but also people's health," adds Vera Heck from PIK. Another crucial factor is reducing food loss. In line with scenarios adopted in the present study, the most recent IPCC Special Report on land use found that currently, up to 30 percent of all food produced is lost to waste. "This situation clearly calls for resolute policy measures to set incentives right on both the producers' and consumers' ends," Heck further lays out.
Read more at Science Daily
Addressing global warming with new nanoparticles and sunshine
Researchers at the Center for Integrated Nanostructure Physics, within the Institute for Basic Science (IBS, South Korea), have published in Materials Today a new strategy that harvests sunlight to transform carbon dioxide (CO2) into oxygen (O2) and pure carbon monoxide (CO) in water, without side-products. This artificial photosynthesis method could bring new solutions to environmental pollution and global warming.
While, in green plants, photosynthesis fixes CO2 into sugars, the artificial photosynthesis reported in this study converts CO2 into oxygen and pure CO as output. The latter can then be employed for a broad range of applications in the electronics, semiconductor, pharmaceutical and chemical industries. The key is to find the right high-performance photocatalyst, one that drives the photosynthesis by absorbing light, converting CO2 and ensuring an efficient flow of electrons, which is essential for the entire system.
Titanium dioxide (TiO2) is a well-known photocatalyst. It has already attracted significant attention in the fields of solar energy conversion and environmental protection due to its high reactivity, low toxicity, chemical stability and low cost. While conventional TiO2 can absorb only UV light, the IBS research team previously reported two different types of blue-colored TiO2 (or "blue titania") nanoparticles that can absorb visible light thanks to a reduced bandgap of about 2.7 eV. They were made of ordered anatase/disordered rutile (Ao/Rd) TiO2 (called HYL's blue TiO2-I) (Energy & Environmental Science, 2016) and disordered anatase/ordered rutile (Ad/Ro) TiO2 (called HYL's blue TiO2-II) (ACS Applied Materials & Interfaces, 2019), where anatase and rutile refer to two crystalline forms of TiO2, and the introduction of irregularities (disorder) in the crystal enhances the absorption of visible and infrared light.
For the efficient artificial photosynthesis for the conversion of CO2 into oxygen and pure CO, IBS researchers aimed to improve the performance of these nanoparticles by combining blue (Ao/Rd) TiO2 with other semiconductors and metals that can enhance water oxidation to oxygen, in parallel to CO2 reduction into CO only. The research team obtained the best results with hybrid nanoparticles made of blue titania, tungsten trioxide (WO3), and 1% silver (TiO2/WO3-Ag). WO3 was chosen because of the low valence band position with its narrow bandgap of 2.6 eV, high stability, and low cost. Silver was added because it enhances visible light absorption, by creating a collective oscillation of free electrons excited by light, and also gives high CO selectivity. The hybrid nanoparticles showed about 200 times higher performance than nanoparticles made of TiO2 alone and TiO2/WO3 without silver.
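The connection between those bandgap values and visible-light absorption follows from the Planck relation E = hc/λ, which for photon energies in electronvolts reduces to λ(nm) ≈ 1240 / E(eV). The short sketch below applies it to the bandgaps quoted above; the ~3.2 eV value used for conventional anatase TiO2 is a commonly cited textbook figure, not a number from this study.

```python
# Convert a semiconductor bandgap (eV) to the longest-wavelength photon it can
# absorb, using lambda(nm) ~= 1240 / E(eV), derived from E = hc/lambda.
def absorption_edge_nm(bandgap_ev: float) -> float:
    return 1239.84 / bandgap_ev


for label, gap in [("conventional anatase TiO2 (~3.2 eV, textbook value)", 3.2),
                   ("blue TiO2 (~2.7 eV)", 2.7),
                   ("WO3 (~2.6 eV)", 2.6)]:
    edge = absorption_edge_nm(gap)
    region = "visible" if 400 <= edge <= 700 else "ultraviolet"
    print(f"{label}: absorption edge ~{edge:.0f} nm ({region})")
# Narrowing the gap from ~3.2 eV to ~2.7 eV shifts the absorption edge from the
# UV (~387 nm) into the visible blue (~459 nm), consistent with the blue colour.
```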
Read more at Science Daily
'Ancient' cellular discovery key to new cancer therapies
Australian researchers have uncovered a metabolic system which could lead to new strategies for therapeutic cancer treatment.
A team at Flinders University led by Professor Janni Petersen, working with St Vincent's Institute of Medical Research (SVI), has identified a metabolic mechanism, first in a yeast and now in mammals, that is critical for the regulation of cell growth and proliferation.
"What is fascinating about this yeast is that it became evolutionarily distinct about 350 million years ago, so you could argue the discovery, that we subsequently confirmed occurs in mammals, is at least as ancient as that," said Associate Professor Jonathon Oakhill, Head, Metabolic Signalling Laboratory at SVI in Melbourne.
This project, outlined in a new paper in Nature Metabolism, looked at two major signalling networks.
Often referred to as the body's fuel gauge, a protein called AMP-kinase, or AMPK, regulates cellular energy, slowing cell growth when cells don't have enough nutrients or energy to divide.
The other network centres on a protein complex called mTORC1 (TORC1 in yeast), which also regulates cell growth: it increases cell proliferation when it senses high levels of nutrients such as amino acids, or growth signals such as insulin and growth factors.
A hallmark of cancer cells is their ability to over-ride these sensing systems and maintain uncontrolled proliferation.
"We have known for about 15 years that AMPK can 'put the brakes on' mTORC1, preventing cell proliferation" says Associate Professor Oakhill. "However, it was at this point that we discovered a mechanism whereby mTORC1 can reciprocally also inhibit AMPK and keep it in a suppressed state.
Professor Petersen, from the Flinders Centre for Innovation in Cancer in Adelaide, South Australia says the experiments showed the yeast cells "became highly sensitive to nutrient shortages when we disrupted the ability of mTORC1 to inhibit AMPK."
"The cells also divided at a smaller size, indicating disruption of normal cell growth regulation," she says.
"We measured the growth rates of cancerous mammalian cells by starving them of amino acids and energy (by depriving them of glucose) to mimic conditions found in a tumour.
"Surprisingly, we found that these combined stresses actually increased growth rates, which we determined was due to the cells entering a rogue 'survival' mode.
"When in this mode, they feed upon themselves so that even in the absence of appropriate nutrients the cells continue to grow.
"Importantly, this transition to survival mode was lost when we again removed the ability of mTORC1 to inhibit AMPK."
Read more at Science Daily
Jan 20, 2020
Local water availability is permanently reduced after planting forests
River flow is reduced in areas where forests have been planted and does not recover over time, a new study has shown. Rivers in some regions can completely disappear within a decade. This highlights the need to consider the impact on regional water availability, as well as the wider climate benefit, of tree-planting plans.
"Reforestation is an important part of tackling climate change, but we need to carefully consider the best places for it. In some places, changes to water availability will completely change the local cost-benefits of tree-planting programmes," said Laura Bentley, a plant scientist in the University of Cambridge Conservation Research Institute, and first author of the report.
Planting large areas of trees has been suggested as one of the best ways of reducing atmospheric carbon dioxide levels, since trees absorb and store this greenhouse gas as they grow. While it has long been known that planting trees reduces the amount of water flowing into nearby rivers, there has previously been no understanding of how this effect changes as forests age.
The study looked at 43 sites across the world where forests have been established, and used river flow as a measure of water availability in the region. It found that within five years of planting trees, river flow had reduced by an average of 25%. By 25 years, rivers had gone down by an average of 40% and in a few cases had dried up entirely. The biggest percentage reductions in water availability were in regions in Australia and South Africa.
"River flow does not recover after planting trees, even after many years, once disturbances in the catchment and the effects of climate are accounted for," said Professor David Coomes, Director of the University of Cambridge Conservation Research Institute, who led the study.
Published in the journal Global Change Biology, the research showed that the type of land where trees are planted determines the degree of impact they have on local water availability. Trees planted on natural grassland where the soil is healthy decrease river flow significantly. On land previously degraded by agriculture, establishing forest helps to repair the soil so it can hold more water and decreases nearby river flow by a lesser amount.
Counterintuitively, the effect of trees on river flow is smaller in drier years than wetter ones. When trees are drought-stressed they close the pores on their leaves to conserve water, and as a result draw up less water from the soil. In wet weather the trees use more water from the soil, and also catch the rainwater in their leaves.
Read more at Science Daily
"Reforestation is an important part of tackling climate change, but we need to carefully consider the best places for it. In some places, changes to water availability will completely change the local cost-benefits of tree-planting programmes," said Laura Bentley, a plant scientist in the University of Cambridge Conservation Research Institute, and first author of the report.
Planting large areas of trees has been suggested as one of the best ways of reducing atmospheric carbon dioxide levels, since trees absorb and store this greenhouse gas as they grow. While it has long been known that planting trees reduces the amount of water flowing into nearby rivers, there has previously been no understanding of how this effect changes as forests age.
The study looked at 43 sites across the world where forests have been established, and used river flow as a measure of water availability in the region. It found that within five years of planting trees, river flow had reduced by an average of 25%. By 25 years, rivers had gone down by an average of 40% and in a few cases had dried up entirely. The biggest percentage reductions in water availability were in regions in Australia and South Africa.
"River flow does not recover after planting trees, even after many years, once disturbances in the catchment and the effects of climate are accounted for," said Professor David Coomes, Director of the University of Cambridge Conservation Research Institute, who led the study.
Published in the journal Global Change Biology, the research showed that the type of land where trees are planted determines the degree of impact they have on local water availability. Trees planted on natural grassland where the soil is healthy decrease river flow significantly. On land previously degraded by agriculture, establishing forest helps to repair the soil so it can hold more water and decreases nearby river flow by a lesser amount.
Counterintuitively, the effect of trees on river flow is smaller in drier years than wetter ones. When trees are drought-stressed they close the pores on their leaves to conserve water, and as a result draw up less water from the soil. In wet weather the trees use more water from the soil, and also catch the rainwater in their leaves.
Read more at Science Daily
Setting controlled fires to avoid wildfires
Australians desperate for solutions to raging wildfires might find them 8,000 miles away, where a new Stanford-led study proposes ways of overcoming barriers to prescribed burns -- fires purposefully set under controlled conditions to clear ground fuels. The paper, published Jan. 20 in Nature Sustainability, outlines a range of approaches to significantly increase the deployment of prescribed burns in California and potentially in other regions, including Australia, that share similar climate, landscape and policy challenges.
"We need a colossal expansion of fuel treatments," said study lead author Rebecca Miller, a PhD student in the Emmett Interdisciplinary Program in Environment and Resources within the Stanford School of Earth, Energy & Environmental Sciences.
"Prescribed burns are effective and safe," said study co-author Chris Field, the Perry L. McCarty Director of the Stanford Woods Institute for the Environment and Melvin and Joan Lane Professor for Interdisciplinary Environmental Studies. "California needs to remove obstacles to their use so we can avoid more devastating wildfires."
Years of fire suppression in California have led to massive accumulations of wood and plant fuels in forests. Hotter, drier conditions have exacerbated the situation. Prescribed burns, in combination with thinning of vegetation that allows fire to climb into the tree canopy, have proven effective at reducing wildfire risks. They rarely escape their set boundaries and have ecological benefits that mimic the effects of naturally occurring fires, such as reducing the spread of disease and insects and increasing species diversity.
To put a meaningful dent in wildfire numbers, California needs fuel treatments -- whether prescribed burns or vegetation thinning -- on about 20 million acres or nearly 20 percent of the state's land area, according to the researchers. While ambitions for prescribed burns in California have been rising -- private, state and federal acres planned for the approach more than doubled between 2013 and 2018 -- up to half of that acreage has gone unburned due to concerns about risks like the resulting smoky air, outdated regulations and limited resources.
To better understand these barriers, the researchers interviewed federal and state government employees, state legislative staff and nonprofit representatives involved with wildfire management, as well as academics who study the field. They also analyzed legislative policies and combed through prescribed burn data to identify barriers and ultimately propose solutions.
Barriers to burning
Just about everyone the researchers interviewed described a risk-averse culture that has taken hold in the shadow of liability laws, which place financial and legal responsibility on the burners for any prescribed burn that escapes. Private landowners explained how fears of bankruptcy swayed them to avoid burning on their property. Federal agency employees pointed to an absence of praise or rewards for doing prescribed burns, but punishment for any fires that escape. Federal and state employees said that negative public opinion -- fear of fires escaping into developed areas and of smoke damaging health -- remains a challenge.
Limited finances, complex regulations and a lack of qualified burners also get in the way. For example, wildfire suppression has historically diverted funding from wildfire prevention; many state fire crews are seasonal employees hired during the worst wildfire months rather than the months when conditions are best for prescribed burns; and burners who receive federal or state funds must undergo potentially expensive and time-consuming environmental reviews.
Toward solutions
California has taken some meaningful steps to make prescribed burning easier. Recent legislation makes private landowners who enroll in a certification and training program or take appropriate precautions before burning exempt from financial liability for any prescribed burns that escape. And new public education programs are improving public opinion of the practice.
To go further, stakeholders interviewed for the study suggested a range of improvements. They pointed to the need for consistent funding for wildfire prevention (rather than a primary focus on suppression), federal workforce rebuilding and training programs to bolster prescribed burn crews and standardization of regional air boards' burn evaluation and approval processes. Changing certain emissions calculations -- prescribed burn smoke is currently considered human-caused, whereas wildfires count as natural emissions -- may also incentivize treatments.
Making these changes will require a multi-year commitment by the executive and legislative branches, according to the researchers. The magnitude of the 2017 and 2018 wildfires prompted new wildfire-related policy proposals, but maintaining that focus during lighter fire seasons will be critical to protecting California's communities and managing its ecosystems.
Read more at Science Daily
"We need a colossal expansion of fuel treatments," said study lead author Rebecca Miller, a PhD student in the Emmett Interdisciplinary Program in Environment and Resources within the Stanford School of Earth, Energy & Environmental Sciences.
"Prescribed burns are effective and safe," said study co-author Chris Field, the Perry L. McCarty Director of the Stanford Woods Institute for the Environment and Melvin and Joan Lane Professor for Interdisciplinary Environmental Studies. "California needs to remove obstacles to their use so we can avoid more devastating wildfires."
Years of fire suppression in California have led to massive accumulations of wood and plant fuels in forests. Hotter, drier conditions have exacerbated the situation. Prescribed burns, in combination with thinning of vegetation that allows fire to climb into the tree canopy, have proven effective at reducing wildfire risks. They rarely escape their set boundaries and have ecological benefits that mimic the effects of naturally occurring fires, such as reducing the spread of disease and insects and increasing species diversity.
To put a meaningful dent in wildfire numbers, California needs fuel treatments -- whether prescribed burns or vegetation thinning -- on about 20 million acres or nearly 20 percent of the state's land area, according to the researchers. While ambitions for prescribed burns in California have been rising -- private, state and federal acres planned for the approach more than doubled between 2013 and 2018 -- up to half of that acreage has gone unburned due to concerns about risks like the resulting smoky air, outdated regulations and limited resources.
To better understand these barriers, the researchers interviewed federal and state government employees, state legislative staff and nonprofit representatives involved with wildfire management, as well as academics who study the field. They also analyzed legislative policies and combed through prescribed burn data to identify barriers and ultimately propose solutions.
Barriers to burning
Just about everyone the researchers interviewed described a risk-averse culture in the shadow of liability laws that place financial and legal responsibility for any prescribed burn that escapes on the burners. Private landowners explained how fears of bankruptcy swayed them to avoid burning on their property. Federal agency employees pointed to an absence of praise or rewards for doing prescribed burns, but punishment for any fires that escape. Federal and state employees claimed that negative public opinion -- fear of fires escaping into developed areas and smoke damaging health -- remains a challenge.
Becoming less active and gaining weight: Downsides of becoming an adult
Leaving school and getting a job both lead to a drop in the amount of physical activity, while becoming a mother is linked to increased weight gain, conclude two reviews published today and led by researchers at the University of Cambridge.
Many people tend to put on weight as they leave adolescence and move into adulthood, and this is the age when the levels of obesity increase the fastest. This weight gain is related to changes in diet and physical activity behaviour across the life events of early adulthood, including the move from school to further education and employment, starting new relationships and having children.
Writing in Obesity Reviews, researchers from the Centre for Diet and Activity Research (CEDAR) at Cambridge looked at changes in physical activity, diet and body weight as young adults move from education into employment and on to parenthood. To do this, they carried out systematic reviews and meta-analyses of existing scientific literature -- these approaches allow them to compare and consolidate results from a number of often-contradictory studies to reach more robust conclusions.
Leaving school
In the first of the two studies, the team looked at the evidence relating to the transition from high school into higher education or employment and how this affects body weight, diet and physical activity. In total, they found 19 studies covering ages 15-35 years, of which 17 assessed changes in physical activity, three body weight, and five diet or eating behaviours.
The team found that leaving high school was associated with a decrease of seven minutes per day of moderate-to-vigorous physical activity. The decrease was larger for males than it was for females (a decrease of 16.4 minutes per day for men compared to 6.7 minutes per day for women). More detailed analysis revealed that the change is largest when people go to university, with overall levels of moderate-to-vigorous physical activity falling by 11.4 minutes per day.
Three studies reported increases in body weight on leaving high school, though there were not enough studies to provide a mean weight increase. Two studies suggested that diets decrease in quality on leaving high school and one suggested the same on leaving university.
"Children have a relatively protected environment, with healthy food and exercise encouraged within schools, but this evidence suggests that the pressures of university, employment and childcare drive changes in behaviour which are likely to be bad for long-term health," said Dr Eleanor Winpenny from CEDAR and the MRC Epidemiology Unit at the University of Cambridge.
"This is a really important time when people are forming healthy or unhealthy habits that will continue through adult life. If we can pinpoint the factors in our adult lives which are driving unhealthy behaviours, we can then work to change them."
Becoming a parent
In the second study, the team looked at the impact of becoming a parent on weight, diet and physical activity.
A meta-analysis of six studies found that the change in body mass index (BMI) differed by 17% between those who remained childless and those who became parents: a woman of average height (164cm) who had no children gained around 7.5kg over five to six years, while a mother of the same height would gain an additional 1.3kg. These equate to increases in BMI of 2.8 versus 3.3.
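As a rough arithmetic check (not part of the study's analysis), those figures are consistent with the standard BMI formula, weight in kilograms divided by height in metres squared:

```python
# Rough check of the BMI changes quoted above. The weight gains and the
# 164 cm height come from the article; the calculation is just the standard
# BMI definition, not a reanalysis of the study data.

def bmi_change(weight_gain_kg: float, height_m: float) -> float:
    """BMI change produced by a given weight gain at a fixed height."""
    return weight_gain_kg / height_m ** 2

height_m = 1.64                  # average height cited in the article
childless_gain_kg = 7.5          # gain over five to six years without children
parent_gain_kg = 7.5 + 1.3       # mothers gained an additional 1.3 kg

print(round(bmi_change(childless_gain_kg, height_m), 1))  # 2.8
print(round(bmi_change(parent_gain_kg, height_m), 1))     # 3.3
```

The extra 1.3kg is also roughly 17 percent of the 7.5kg baseline gain, which matches the difference quoted above.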
Only one study looked at the impact of becoming a father and found no difference in change.
There was less evidence on physical activity and diet. Most studies that measured physical activity showed a greater decline in parents than in non-parents. The team found only limited evidence on diet, which did not appear to differ between parents and non-parents.
"BMI increases for women over young adulthood, particularly among those becoming a mother. However, new parents could also be particularly willing to change their behaviour as it may also positively influence their children, rather than solely improve their own health," said Dr Kirsten Corder, also from CEDAR and the MRC Epidemiology Unit.
Read more at Science Daily
Dozens of non-oncology drugs can kill cancer cells
Drugs for diabetes, inflammation, alcoholism -- and even for treating arthritis in dogs -- can also kill cancer cells in the lab, according to a study by scientists at the Broad Institute of MIT and Harvard and Dana-Farber Cancer Institute. The researchers systematically analyzed thousands of already developed drug compounds and found nearly 50 that have previously unrecognized anti-cancer activity. The surprising findings, which also revealed novel drug mechanisms and targets, suggest a possible way to accelerate the development of new cancer drugs or repurpose existing drugs to treat cancer.
"We thought we'd be lucky if we found even a single compound with anti-cancer properties, but we were surprised to find so many," said Todd Golub, chief scientific officer and director of the Cancer Program at the Broad, Charles A. Dana Investigator in Human Cancer Genetics at Dana-Farber, and professor of pediatrics at Harvard Medical School.
The new work appears in the journal Nature Cancer. It is the largest study yet to employ the Broad's Drug Repurposing Hub, a collection that currently comprises more than 6,000 existing drugs and compounds that are either FDA-approved or have been proven safe in clinical trials (at the time of the study, the Hub contained 4,518 drugs). The study also marks the first time researchers screened the entire collection of mostly non-cancer drugs for their anti-cancer capabilities.
Historically, scientists have stumbled upon new uses for a few existing medicines, such as the discovery of aspirin's cardiovascular benefits. "We created the repurposing hub to enable researchers to make these kinds of serendipitous discoveries in a more deliberate way," said study first author Steven Corsello, an oncologist at Dana-Farber, a member of the Golub lab, and founder of the Drug Repurposing Hub.
The researchers tested all the compounds in the Drug Repurposing Hub on 578 human cancer cell lines from the Broad's Cancer Cell Line Encyclopedia (CCLE). Using a molecular barcoding method known as PRISM, which was developed in the Golub lab, the researchers tagged each cell line with a DNA barcode, allowing them to pool several cell lines together in each dish and more quickly conduct a larger experiment. The team then exposed each pool of barcoded cells to a single compound from the repurposing library, and measured the survival rate of the cancer cells.
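The scoring step can be pictured with a highly simplified sketch. This is not the PRISM pipeline itself; the pseudocount, normalisation and log2 fold-change rule below are illustrative assumptions about how pooled barcode counts can be turned into per-cell-line viability estimates:

```python
# Simplified sketch of scoring pooled, barcoded viability data. This is NOT
# the Broad's PRISM pipeline; the pseudocount, normalisation and log2
# fold-change rule are illustrative assumptions.
import numpy as np
import pandas as pd

def viability_scores(treated: pd.Series, control: pd.Series) -> pd.Series:
    """Log2 fold change of each cell line's barcode abundance, treated vs. control.

    Inputs are barcode read counts indexed by cell line; counts are converted
    to within-pool fractions (with a pseudocount) so pool depth cancels out.
    """
    treated_frac = (treated + 1) / (treated.sum() + len(treated))
    control_frac = (control + 1) / (control.sum() + len(control))
    return np.log2(treated_frac / control_frac)   # negative = cells killed

# Hypothetical counts for three barcoded lines in one pool
control = pd.Series({"LINE_A": 1000, "LINE_B": 800, "LINE_C": 1200})
treated = pd.Series({"LINE_A": 950, "LINE_B": 60, "LINE_C": 1150})

print(viability_scores(treated, control).round(2))  # LINE_B drops sharply
```

In a sketch like this, the line being killed shows up as a strongly negative score (LINE_B here), while the surviving lines drift slightly positive because barcode abundances within a pool are relative.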
They found nearly 50 non-cancer drugs -- including those initially developed to lower cholesterol or reduce inflammation -- that killed some cancer cells while leaving others alone.
Some of the compounds killed cancer cells in unexpected ways. "Most existing cancer drugs work by blocking proteins, but we're finding that compounds can act through other mechanisms," said Corsello. Some of the four-dozen drugs he and his colleagues identified appear to act not by inhibiting a protein but by activating a protein or stabilizing a protein-protein interaction. For example, the team found that nearly a dozen non-oncology drugs killed cancer cells that express a protein called PDE3A by stabilizing the interaction between PDE3A and another protein called SLFN12 -- a previously unknown mechanism for some of these drugs.
These unexpected drug mechanisms were easier to find using the study's cell-based approach, which measures cell survival, than through traditional non-cell-based high-throughput screening methods, Corsello said.
Most of the non-oncology drugs that killed cancer cells in the study did so by interacting with a previously unrecognized molecular target. For example, the anti-inflammatory drug tepoxalin, originally developed for use in people but approved for treating osteoarthritis in dogs, killed cancer cells by hitting an unknown target in cells that overexpress the protein MDR1, which commonly drives resistance to chemotherapy drugs.
The researchers were also able to predict whether certain drugs could kill each cell line by looking at the cell line's genomic features, such as mutations and methylation levels, which were included in the CCLE database. This suggests that these features could one day be used as biomarkers to identify patients who will most likely benefit from certain drugs. For example, the alcohol dependence drug disulfiram (Antabuse) killed cell lines carrying mutations that cause depletion of metallothionein proteins. Compounds containing vanadium, originally developed to treat diabetes, killed cancer cells that expressed the sulfate transporter SLC26A2.
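As an illustration of the biomarker idea (simulated data, not the study's analysis): one simple way to connect a binary genomic feature to drug response is to compare viability scores between cell lines that carry the feature and those that do not.

```python
# Illustrative sketch with simulated data: testing whether a binary genomic
# feature (e.g. a mutation) is associated with sensitivity to one compound.
# This is not the study's biomarker analysis; the numbers are made up.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
has_feature = rng.random(60) < 0.3                  # feature status per cell line
viability = np.where(has_feature,
                     rng.normal(-1.5, 0.5, 60),      # carriers are killed
                     rng.normal(0.0, 0.5, 60))       # non-carriers unaffected

stat, p = mannwhitneyu(viability[has_feature], viability[~has_feature])
print(f"feature carriers: {has_feature.sum()}, p-value: {p:.2g}")
```

A strong separation like this would flag the feature as a candidate biomarker worth validating in the lab.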
"The genomic features gave us some initial hypotheses about how the drugs could be acting, which we can then take back to study in the lab," said Corsello. "Our understanding of how these drugs kill cancer cells gives us a starting point for developing new therapies."
The researchers hope to study the repurposing library compounds in more cancer cell lines and to grow the hub to include even more compounds that have been tested in humans. The team will also continue to analyze the trove of data from this study, which have been shared openly (https://depmap.org) with the scientific community, to better understand what's driving the compounds' selective activity.
"This is a great initial dataset, but certainly there will be a great benefit to expanding this approach in the future," said Corsello.
Read more at Science Daily
"We thought we'd be lucky if we found even a single compound with anti-cancer properties, but we were surprised to find so many," said Todd Golub, chief scientific officer and director of the Cancer Program at the Broad, Charles A. Dana Investigator in Human Cancer Genetics at Dana-Farber, and professor of pediatrics at Harvard Medical School.
The new work appears in the journal Nature Cancer. It is the largest study yet to employ the Broad's Drug Repurposing Hub, a collection that currently comprises more than 6,000 existing drugs and compounds that are either FDA-approved or have been proven safe in clinical trials (at the time of the study, the Hub contained 4,518 drugs). The study also marks the first time researchers screened the entire collection of mostly non-cancer drugs for their anti-cancer capabilities.
Historically, scientists have stumbled upon new uses for a few existing medicines, such as the discovery of aspirin's cardiovascular benefits. "We created the repurposing hub to enable researchers to make these kinds of serendipitous discoveries in a more deliberate way," said study first author Steven Corsello, an oncologist at Dana-Farber, a member of the Golub lab, and founder of the Drug Repurposing Hub.
The researchers tested all the compounds in the Drug Repurposing Hub on 578 human cancer cell lines from the Broad's Cancer Cell Line Encyclopedia (CCLE). Using a molecular barcoding method known as PRISM, which was developed in the Golub lab, the researchers tagged each cell line with a DNA barcode, allowing them to pool several cell lines together in each dish and more quickly conduct a larger experiment. The team then exposed each pool of barcoded cells to a single compound from the repurposing library, and measured the survival rate of the cancer cells.
They found nearly 50 non-cancer drugs -- including those initially developed to lower cholesterol or reduce inflammation -- that killed some cancer cells while leaving others alone.
Some of the compounds killed cancer cells in unexpected ways. "Most existing cancer drugs work by blocking proteins, but we're finding that compounds can act through other mechanisms," said Corsello. Some of the four-dozen drugs he and his colleagues identified appear to act not by inhibiting a protein but by activating a protein or stabilizing a protein-protein interaction. For example, the team found that nearly a dozen non-oncology drugs killed cancer cells that express a protein called PDE3A by stabilizing the interaction between PDE3A and another protein called SLFN12 -- a previously unknown mechanism for some of these drugs.
These unexpected drug mechanisms were easier to find using the study's cell-based approach, which measures cell survival, than through traditional non-cell-based high-throughput screening methods, Corsello said.
Most of the non-oncology drugs that killed cancer cells in the study did so by interacting with a previously unrecognized molecular target. For example, the anti-inflammatory drug tepoxalin, originally developed for use in people but approved for treating osteoarthritis in dogs, killed cancer cells by hitting an unknown target in cells that overexpress the protein MDR1, which commonly drives resistance to chemotherapy drugs.
The researchers were also able to predict whether certain drugs could kill each cell line by looking at the cell line's genomic features, such as mutations and methylation levels, which were included in the CCLE database. This suggests that these features could one day be used as biomarkers to identify patients who will most likely benefit from certain drugs. For example, the alcohol dependence drug disulfiram (Antabuse) killed cell lines carrying mutations that cause depletion of metallothionein proteins. Compounds containing vanadium, originally developed to treat diabetes, killed cancer cells that expressed the sulfate transporter SLC26A2.
"The genomic features gave us some initial hypotheses about how the drugs could be acting, which we can then take back to study in the lab," said Corsello. "Our understanding of how these drugs kill cancer cells gives us a starting point for developing new therapies."
The researchers hope to study the repurposing library compounds in more cancer cell lines and to grow the hub to include even more compounds that have been tested in humans. The team will also continue to analyze the trove of data from this study, which have been shared openly (https://depmap.org) with the scientific community, to better understand what's driving the compounds' selective activity.
"This is a great initial dataset, but certainly there will be a great benefit to expanding this approach in the future," said Corsello.
Read more at Science Daily
Jan 19, 2020
Study uses eye movement test to confirm brain aging effects
A new study, published in PeerJ, shows how University of Liverpool researchers have used a newly developed eye movement test to improve the understanding of how parts of the brain work.
Healthy older adults are widely reported to experience cognitive decline, including impairments in inhibitory control (the ability to stop ourselves thinking or doing things). However, because ageing effects on inhibitory control are highly variable between individuals, vary depending on tests used, and are sometimes not distinguished from general age-related slowing, this general view is a matter of debate.
Inhibitory control is also important in conditions like schizophrenia, ADHD and forms of Parkinson's disease; patients can become jumpy, distractible or have problems with unwanted thoughts.
Researchers from the University's Department of Eye and Vision Science, led by Dr Paul Knox, developed a new test, using measurements of eye movements, to provide an improved method of investigating inhibitory control, and have applied it to study the effects of ageing on this ability.
Study
In the study, two cohorts of healthy people were recruited from two different age groups, 19 to 27 years old and 50 to 72 years old. Participants viewed a dot in the centre of a computer screen and then had to look at a second dot that appeared to the left or right not when it appeared, but when it disappeared. As people instinctively look at things when they appear, this requires the inhibition of a normal automatic eye movement. Eye movements were measured precisely using an infrared eye tracker, revealing how often participants looked too early.
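A toy sketch of how a single trial might be scored from eye tracker output is below; the data structure, the 2-degree window and the detection rule are assumptions for illustration, not the study's actual analysis:

```python
# Toy example of flagging a "too early" look from eye tracker samples.
# The GazeSample structure, the 2-degree window and the rule below are
# illustrative assumptions, not the study's analysis pipeline.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GazeSample:
    t_ms: float    # sample time in milliseconds
    x_deg: float   # horizontal gaze position, degrees from screen centre

def first_look_time(samples: List[GazeSample], target_x_deg: float,
                    window_deg: float = 2.0) -> Optional[float]:
    """Time of the first sample in which gaze lands near the target position."""
    for s in samples:
        if abs(s.x_deg - target_x_deg) < window_deg:
            return s.t_ms
    return None

def is_premature(samples: List[GazeSample], target_x_deg: float,
                 target_offset_ms: float) -> bool:
    """True if the participant looked at the target before it disappeared."""
    t = first_look_time(samples, target_x_deg)
    return t is not None and t < target_offset_ms

# One simulated trial: target at +8 degrees, visible until t = 800 ms;
# gaze jumps to the target at about t = 400 ms, i.e. too early.
trial = [GazeSample(t, 0.0) for t in range(0, 400, 20)]
trial += [GazeSample(t, 8.0) for t in range(400, 1000, 20)]
print(is_premature(trial, target_x_deg=8.0, target_offset_ms=800))  # True
```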
Results
The results showed that older participants were much more likely to look at the dot when it appeared (not when it disappeared) and were slower to respond than younger participants.
Dr Paul Knox said: "We are designed to react to things appearing in our visual world. It is something we do automatically. However, we also have the ability to stop ourselves responding and this prevents us becoming slaves to our sensory environment.
"This new test allows us to measure inhibitory behaviour precisely. It is clear that older participants found it more difficult to inhibit their actions, even once we had accounted for the general slowing that occurs with ageing."
Read more at Science Daily
Researchers discover novel potential target for drug addiction treatment
New University of Minnesota Medical School research discovers a novel potential target for treating drug addiction through "the hidden stars of the brain."
Dopamine is one of the major reward molecules of the brain and contributes to learning, memory and motivated behaviors. Disruption of dopamine is associated with addiction-related disorders, such as amphetamine substance use and abuse.
A new study published in Neuron suggests that targeting astrocyte calcium signaling could decrease the behavioral effects of amphetamine. The study was co-led by Michelle Corkrum, PhD, a third-year medical student in the Medical Scientist Training Program (MD/PhD) at the University of Minnesota Medical School, and Ana Covelo, PhD, in the lab of Alfonso Araque, PhD, and in collaboration with Mark Thomas, PhD.
Astrocytes are named for their star shape, and Corkrum describes them as "the hidden stars of the brain." They have traditionally been considered "support cells" of the brain and ignored in terms of actively contributing to brain function. This study shows that astrocytes do contribute to information processing and to how organisms think and function in this world.
Corkrum and colleagues found that astrocytes respond to dopamine with increases in calcium in the nucleus accumbens, one of the major reward centers in the brain. This increase in calcium was related to the release of ATP/adenosine to modulate neural activity in the nucleus accumbens. Then, they looked specifically at amphetamine because it is known to increase dopamine and psychomotor activity in organisms. They found that astrocytes respond to amphetamine with increases in calcium, and if astrocyte activity is ablated, the behavioral effect of amphetamine decreases in a mouse model.
"These findings suggest that astrocytes contribute to amphetamine signaling, dopamine signaling and overall reward signaling in the brain," Corkrum said. "Because of this, astrocytes are a potentially novel cell type that can be specifically targeted to develop efficacious therapies for diseases with dysregulated dopamine."
Corkrum attributes the success of this study to the collaborative nature of the University of Minnesota and of its graduate program in neuroscience. "We were able to integrate the phenomenal resources that the U of M offers to conduct state-of-the-art research and work with numerous different core facilities, which played key roles in this study," Corkrum said.
Read more at Science Daily