What clues does our memory use to connect a current situation to a situation from the past? The results of a study conducted by researchers from the University of Geneva (UNIGE), Switzerland, working in collaboration with CY Cergy Paris University in France, contrast sharply with the explanations offered in the literature to date. The researchers have demonstrated that similarities in structure and essence (the heart of a situation) guide our recollections rather than surface similarities (the general theme, for example, or the setting or protagonists). It is only when individuals lack sufficient knowledge that they turn to surface clues -- the easiest to access -- to recollect a situation. These results, published in the journal Acta Psychologica, are particularly relevant in the field of education. They underline the need to focus on the conceptual aspects of situations tackled in class, so that pupils make use of the relevant features and are not misled by apparent similarities.
Our memory organises our experiences based on two main features: surface features, which include superficial similarities between situations (the setting, for instance, or the people present); and structural features, which characterise the deep structure of a situation and its key issues. The existing literature argues that people tend to favour surface clues when dealing with a given situation. "This is often attributed to the fact that our brain looks for the easiest option when it comes to memory recall, and that in general the surface of a recollection correlates with its structure," begins Emmanuel Sander, a professor in the Faculty of Psychology and Educational Sciences (FPSE) at UNIGE.
On analysing the existing literature, the researchers realised that earlier work was based on frequent recalls of situations that shared not only surface features but also part of the structure. They also noticed that participants did not possess the knowledge needed to grasp the deep structure of the situations presented to them. "We wondered whether the surface features really dominate the structural features when a situation elicits the recollection of another one," explains Lucas Raynal, a researcher at CY Cergy Paris University and an FPSE associate member.
Essence more important than form
To put this question to the test, the researchers created six narratives that shared the surface, the structure, or neither of the two (known as distractors) with a target narrative. "Our target narrative told the story of a pizzaiolo, Luigi, who worked in a busy square. A second pizzaiolo, Lorenzo, set up shop next door in direct competition with Luigi. But Lorenzo's pizzas weren't as good as Luigi's, so Luigi gave the newcomer a piece of advice about how to improve the way he made his pizzas. To thank him, Lorenzo moved his pizzeria to put an end to the direct competition," explains Evelyne Clément, a professor at CY Cergy Paris University. Some of the six stories created as part of the research put the emphasis on the pizzaioli; others emphasised the principle of competition that was amicably resolved; and others still highlighted neither of these features.
In the first experiment, 81 adult participants read the six stories before coming face-to-face with the story of Luigi and Lorenzo. They were then asked which of the previous situations the story reminded them of. "81.5% of the participants chose the story that had the same structure, i.e. the competition principle, compared to 18.5% for the account that shared the same surface (pizzaioli) and 0% for the distracting stories," says professor Sander. This indicates a clear predominance of structural features, contrary to the claims made by the existing literature. The researchers then took the experiment a stage further: they presented six stories to other participants once more, but this time the story highlighting the competition principle was accompanied by several stories about pizzaioli (experiment 2). The third and fourth experiments aimed to confirm the robustness of the results by increasing the number of stories to be remembered and by distracting the participants with unrelated activities for a variable period (5 minutes in experiment 3 and 45 minutes in experiment 4) before the target story was presented. "The results of the four experiments were clear-cut, with around 80% of participants choosing the story with the same structure rather than those that shared the same surface or had no similarity," points out Lucas Raynal.
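As a rough plausibility check (not an analysis from the paper), we can ask how unlikely the experiment-1 split would be if surface and structural matches were equally attractive. 81.5% of 81 participants corresponds to roughly 66 people; that figure, and the 50/50 null, are assumptions for illustration only:

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Experiment 1: 81 participants, of whom ~66 (81.5%) chose the structural
# match. Under a null where surface and structural matches are equally
# attractive (p = 0.5), how surprising is a split at least this extreme?
p_value = binom_tail(66, 81)
print(f"P(>=66 of 81 under chance) = {p_value:.2e}")
```

The tail probability is vanishingly small, which is consistent with the authors' claim that the structural preference is not a fluke of sampling.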
Read more at Science Daily
Feb 15, 2020
Molecular switch mechanism explains how mutations shorten biological clocks
A new study of molecular interactions central to the functioning of biological clocks explains how certain mutations can shorten clock timing, making some people extreme "morning larks" because their internal clocks operate on a 20-hour cycle instead of being synchronized with the 24-hour cycle of day and night.
The study, published February 11 in eLife, shows that the same molecular switch mechanism affected by these mutations is at work in animals ranging from fruit flies to people.
"Many people with sleep phase disorders have changes in their clock proteins," said Carrie Partch, associate professor of chemistry and biochemistry at UC Santa Cruz and a corresponding author of the paper. "Generally, mutations that make the clock run shorter have a morning lark effect, and those that make the clock run longer have a pronounced night owl effect."
In the new study, researchers focused on mutations in an enzyme called casein kinase 1 (CK1), which regulates a core clock protein called PERIOD (or PER). Clock-altering mutations in CK1 had been known for years, but it was unclear how they changed the timing of the clock.
CK1 and other kinase enzymes carry out a reaction called phosphorylation, adding a phosphate to another protein. It turns out that CK1 can phosphorylate either of two sites on the PER protein. Modifying one site stabilizes PER, while the other modification triggers its degradation. Partch and her colleagues showed how mutations in either CK1 or PER itself can alter the balance, favoring degradation over stabilization.
PER proteins are part of a complex feedback loop in which changes in their abundance set the timing of circadian rhythms, so mutations that increase the rate of PER degradation throw off the clock.
"What we discovered is this neat molecular switch that controls the abundance of the PER proteins. When it's working right, it generates a beautiful 24-hour oscillation," Partch said.
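The link between PER degradation rate and clock period can be caricatured with a toy relaxation oscillator: PER accumulates at a constant rate to a high threshold, then degrades exponentially back to a low threshold. All thresholds and rate constants below are invented for illustration; this is not the paper's model:

```python
from math import log

def clock_period(k_syn: float, k_deg: float,
                 hi: float = 10.0, lo: float = 1.0) -> float:
    """Period (hours) of a toy PER relaxation oscillator:
    linear synthesis from lo up to hi, then exponential decay back to lo."""
    rise = (hi - lo) / k_syn      # time to accumulate PER
    fall = log(hi / lo) / k_deg   # time to degrade PER
    return rise + fall

K_DEG_WT = 0.3  # assumed wild-type degradation rate (1/h)
# Choose the synthesis rate so the wild-type period is exactly 24 h.
k_syn = (10.0 - 1.0) / (24.0 - log(10.0) / K_DEG_WT)

print(clock_period(k_syn, K_DEG_WT))      # wild-type: 24 h by construction
print(clock_period(k_syn, 2 * K_DEG_WT))  # faster degradation -> shorter period
```

Doubling the degradation rate in this caricature shortens the cycle to roughly 20 hours, in the spirit of the "morning lark" mutations described above.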
Partch's lab performed structural and biochemical analyses of the CK1 and PER proteins that suggested how the switch works. To confirm that the interactions observed in the test tube matched the behavior of the proteins in living cells, they worked with researchers at the Duke-NUS Medical School in Singapore. Other collaborators at UC San Diego performed simulations of the molecular dynamics of the switch showing how the CK1 protein switches between two conformations, and how mutations cause it to favor one conformation over another.
The switch involves a section of the CK1 protein called the activation loop. One conformation of this loop favors binding of CK1 to the "degron" region of PER, where phosphorylation leads to the protein's degradation. The clock-changing mutations in CK1 cause it to favor this degron-binding conformation.
The other conformation favors binding to a site on the PER protein known as the FASP region, because mutations in this region lead to an inherited sleep disorder called Familial Advanced Sleep Phase Syndrome. The stabilization of PER can be disrupted by either the FASP mutations, which interfere with the binding of CK1 to this region, or by the mutations in CK1 that favor the alternate conformation of the activation loop.
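A minimal way to picture how a mutation biases the switch is a two-state Boltzmann model of the activation loop. The energy values below are invented for illustration and are not from the study:

```python
from math import exp

KT = 0.616  # kcal/mol at body temperature (~310 K)

def p_degron(delta_e: float) -> float:
    """Probability of the degron-binding conformation in a two-state model,
    where delta_e = E(degron state) - E(FASP state), in kcal/mol."""
    return 1.0 / (1.0 + exp(delta_e / KT))

# Illustrative (invented) energies: suppose wild-type CK1 has no preference,
# while a clock-shortening mutation stabilises the degron-binding state
# by 1 kcal/mol.
print(p_degron(0.0))   # wild-type: 0.5, no preference
print(p_degron(-1.0))  # mutant: biased toward degron binding
```

Even a modest energy shift tilts the population strongly toward the degradation-promoting conformation, which is the qualitative effect the simulations attribute to the clock-changing mutations.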
The new findings also suggest why binding of CK1 to the FASP region stabilizes PER. With phosphorylation of the FASP region, that region then acts to bind and inhibit CK1, preventing it from adopting the other conformation and phosphorylating the degron region.
"It binds and locks the kinase down, so it's like a pause button that prevents the PERIOD protein from being degraded too soon," Partch said. "This stabilizing region builds a delay into the clock to make it align with Earth's 24-hour day."
Partch noted that it is important to understand how these clock proteins regulate our circadian rhythms, because those rhythms affect not only the sleep cycle but almost every aspect of our physiology. Understanding these molecular mechanisms may enable scientists to develop therapies for intervening in the clock to alleviate disruptions, whether they are caused by inherited conditions or by shift work or jet lag.
"There might be ways to mitigate some of those effects," she said.
Read more at Science Daily
Feb 14, 2020
Air pollution's tiny particles may trigger nonfatal heart attacks
Yale-affiliated scientist finds that even a few hours' exposure to ambient ultrafine particles common in air pollution may trigger a nonfatal heart attack.
Myocardial infarction is a major form of cardiovascular disease worldwide. Ultrafine particles (UFP) are 100 nanometers or smaller in size. In urban areas, automobile emissions are the primary source of UFP.
The study in the journal Environmental Health Perspectives is believed to be the first epidemiological investigation of the effects of UFP exposure on heart attacks using the number of particles and the particle length and surface area concentrations at hourly intervals of exposure.
"This study confirms something that has long been suspected -- air pollution's tiny particles can play a role in serious heart disease. This is particularly true within the first few hours of exposure," said Kai Chen, Ph.D., assistant professor at Yale School of Public Health and the study's first author. "Elevated levels of UFP are a serious public health concern."
UFP constitute a health risk due to their small size, their large surface area per unit of mass, and their ability to penetrate cells and enter the bloodstream. "We were the first to demonstrate the effects of UFP on the health of asthmatics in an epidemiological study in the 1990s," said Annette Peters, director of the Institute of Epidemiology at Helmholtz Center Munich and a co-author of this paper. "Since then approximately 200 additional studies have been published. However, epidemiological evidence remains inconsistent and insufficient to infer a causal relationship."
The lack of consistent findings across epidemiological studies may be in part because of the different size ranges and exposure metrics examined to characterize ambient UFP exposure. Chen and his co-authors were interested in whether transient UFP exposure could trigger heart attacks and whether alternative metrics such as particle length and surface area concentrations could improve the investigation of UFP-related health effects.
With colleagues from Helmholtz Center Munich, Augsburg University Hospital and Nördlingen Hospital, Chen examined data from a registry of all nonfatal MI cases in Augsburg, Germany. The study looked at 5,898 nonfatal heart attack patients between 2005 and 2015. The individual heart attacks were compared against UFP levels during the hour of the heart attack and adjusted for a range of additional factors, such as the day of the week, long-term time trend and socioeconomic status.
Read more at Science Daily
Disappearing snakes and the biodiversity crisis
Tropical snake (eyelash viper).
The loss of any species is devastating. However, the decline or extinction of one species can trigger an avalanche within an ecosystem, wiping out many species in the process. When biodiversity losses cause cascading effects within a region, they can eliminate many data-deficient species -- animals that have eluded scientific study or haven't been researched enough to understand how best to conserve them.
"Some species that are rare or hard to detect may be declining so quickly that we might not ever know that we're losing them," said Elise Zipkin, MSU integrative biologist and the study's lead author. "In fact, this study is less about snakes and more about the general loss of biodiversity and its consequences."
The snakes in question reside in a protected area near El Copé, Panama. The new study documents how the snake community plummeted after an invasive fungal pathogen wiped out most of the area's frogs, a primary food source. Thanks to the University of Maryland's long-term study tracking amphibians and reptiles, the team had seven years of data on the snake community before the loss of frogs and six years of data afterwards.
Yet even with that extensive dataset, many species were detected so infrequently that traditional analysis methods were impossible. To say that these snakes are highly elusive or rare would be an understatement. Of the 36 snake species observed during the study, 12 were detected only once and five species were detected twice.
"We need to reframe the question and accept that with data-deficient species, we won't often be able to assess population changes with high levels of certainty," Zipkin said. "Instead, we need to look at the probability that this snake community is worse off now than it used to be."
Using this approach, the team, which included former MSU integrative biologists Grace DiRenzo and Sam Rossman, built statistical models focused on estimating the probability that snake diversity metrics changed after the loss of amphibians, rather than trying to estimate the absolute number of species in the area, which is inherently difficult because snakes are so rare.
"We estimated an 85% probability that there are fewer snake species than there were before the amphibians declined," Zipkin said. "We also estimated high probabilities that the occurrence rates and body conditions of many of the individual snake species were lower after the loss of amphibians, despite no other systematic changes to the environment."
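The probability-based framing Zipkin describes can be sketched with synthetic posterior draws of species richness before and after the decline. The means and spreads below are invented and do not reproduce the study's 85% figure; the point is only the method of turning posterior samples into a probability of decline:

```python
import random

random.seed(42)

# Synthetic posterior draws of species richness before and after the
# amphibian decline (means and spreads are invented for illustration).
before = [random.gauss(30, 4) for _ in range(10_000)]
after = [random.gauss(25, 4) for _ in range(10_000)]

# Probability that the community is worse off: the fraction of paired
# draws in which richness after the decline is lower than before.
p_decline = sum(a < b for a, b in zip(after, before)) / len(before)
print(f"P(richness declined) ~ {p_decline:.2f}")
```

Rather than asserting a single richness estimate with false precision, this style of analysis reports how much of the posterior mass supports "worse off than before," which is exactly the reframing the study advocates for data-deficient species.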
When animals die off en masse, such as what is happening with amphibians worldwide, researchers are dealing mainly with that discovery and are focused on determining the causes. But what happens to everything else that relies on those animals? Scientists don't often have accurate counts and observations of the other species in those ecosystems, leaving them guessing at the consequences of these changes. The challenge is exacerbated, of course, when it involves rare and data-deficient species.
"Because there will never be a ton of data, we can't pinpoint exactly why some snake species declined while others seemed to do okay or even prospered after the catastrophic loss of amphibians," Zipkin said. "But this phenomenon, in which a disturbance event indirectly produces a large number of 'losers' but also a few 'winners,' is increasingly common and leads to worldwide biotic homogenization, or the process of formerly dissimilar ecosystems gradually becoming more similar."
The inability to pinpoint the exact cause, however, isn't the worst news to come from the results. The truly bad news is that the level of devastation portends much greater worldwide losses than the scientific community has been estimating.
"The huge die-off of frogs is an even bigger problem than we thought," said Doug Levey, a program director in the National Science Foundation's Division of Environmental Biology. "Frogs' disappearance has had cascading effects in tropical food chains. This study reveals the importance of basic, long-term data. When these scientists started counting snakes in a rainforest, they had no idea what they'd eventually discover."
Zipkin agrees that long-term data is important to help stakeholders ascertain the extent of the issue.
"We have this unique dataset and we have found a clever way to estimate declines in rare species," she said. "It's sad, however, that the biodiversity crisis is probably worse than we thought because there are so many data-deficient species that we'll never be able to assess."
Read more at Science Daily
New Horizons team uncovers a critical piece of the planetary formation puzzle
Illustration of planets forming in the early solar system.
The New Horizons spacecraft flew past the ancient Kuiper Belt object Arrokoth (2014 MU69) on Jan. 1, 2019, providing humankind's first close-up look at one of the icy remnants of solar system formation in the vast region beyond the orbit of Neptune. Using detailed data on the object's shape, geology, color and composition -- gathered during a record-setting flyby that occurred more than four billion miles from Earth -- researchers have apparently answered a longstanding question about planetesimal origins, and therefore made a major advance in understanding how the planets themselves formed.
The team reports those findings in a set of three papers in the journal Science, and at a media briefing Feb. 13 at the annual American Association for the Advancement of Science meeting in Seattle.
"Arrokoth is the most distant, most primitive and most pristine object ever explored by spacecraft, so we knew it would have a unique story to tell," said New Horizons Principal Investigator Alan Stern, of the Southwest Research Institute in Boulder, Colorado. "It's teaching us how planetesimals formed, and we believe the result marks a significant advance in understanding overall planetesimal and planet formation."
The first post-flyby images transmitted from New Horizons last year showed that Arrokoth had two connected lobes, a smooth surface and a uniform composition, indicating it was likely pristine and would provide decisive information on how bodies like it formed. These first results were published in Science last May.
"This is truly an exciting find for what is already a very successful and history-making mission" said Lori Glaze, director of NASA's Planetary Science Division. "The continued discoveries of NASA's New Horizons spacecraft astound as it reshapes our knowledge and understanding of how planetary bodies form in solar systems across the universe."
Over the following months, working with more and higher-resolution data as well as sophisticated computer simulations, the mission team assembled a picture of how Arrokoth must have formed. Their analysis indicates that the lobes of this "contact binary" object were once separate bodies that formed close together and at low velocity, orbited each other, and then gently merged to create the 22-mile-long object New Horizons observed.
This indicates Arrokoth formed during the gravity-driven collapse of a cloud of solid particles in the primordial solar nebula, rather than by the competing theory of planetesimal formation called hierarchical accretion. Unlike the high-speed collisions between planetesimals in hierarchical accretion, in particle-cloud collapse, particles merge gently, slowly growing larger.
"Just as fossils tell us how species evolved on Earth, planetesimals tell us how planets formed in space," said William McKinnon, a New Horizons co-investigator from Washington University in St. Louis, and lead author of an Arrokoth formation paper in Science this week. "Arrokoth looks the way it does not because it formed through violent collisions, but in more of an intricate dance, in which its component objects slowly orbited each other before coming together."
Two other important pieces of evidence support this conclusion. The uniform color and composition of Arrokoth's surface show the KBO formed from nearby material, as local cloud collapse models predict, rather than a mishmash of matter from more separated parts of the nebula, as hierarchical models might predict.
The flattened shapes of each of Arrokoth's lobes, as well as the remarkably close alignment of their poles and equators, also point to a more orderly merger from a collapse cloud. Further still, Arrokoth's smooth, lightly cratered surface indicates its face has remained well preserved since the end of the planet formation era.
"Arrokoth has the physical features of a body that came together slowly, with 'local' materials in the solar nebula," said Will Grundy, New Horizons composition theme team lead from Lowell Observatory in Flagstaff, Arizona, and the lead author of a second Science paper. "An object like Arrokoth wouldn't have formed, or look the way it does, in a more chaotic accretion environment."
The latest Arrokoth reports significantly expand on the May 2019 Science paper, led by Stern. The three new papers are based on 10 times as much data as the first report, and together provide a far more complete picture of Arrokoth's origin.
"All of the evidence we've found points to particle-cloud collapse models, and all but rule out hierarchical accretion for the formation mode of Arrokoth, and by inference, other planetesimals," Stern said.
New Horizons continues to carry out new observations of additional Kuiper Belt objects it passes in the distance, and it also continues to map the charged-particle radiation and dust environment in the Kuiper Belt. The new KBOs being observed now are too far away to reveal discoveries like those on Arrokoth, but the team can measure aspects such as each object's surface properties and shape. This summer the mission team will begin using large ground-based telescopes to search for new KBOs to study in this way, and even for another flyby target if fuel allows.
Read more at Science Daily
Huge bacteria-eating viruses close gap between life and non-life
Illustration of bacteriophages infecting a bacterial cell.
These phages -- short for bacteriophages, so-called because they "eat" bacteria -- are of a size and complexity considered typical of life, carry numerous genes normally found in bacteria and use these genes against their bacterial hosts.
University of California, Berkeley, researchers and their collaborators found these huge phages by scouring a large database of DNA that they generated from nearly 30 different Earth environments, ranging from the guts of premature infants and pregnant women to a Tibetan hot spring, a South African bioreactor, hospital rooms, oceans, lakes and deep underground.
Altogether they identified 351 different huge phages, all with genomes four or more times larger than the average genomes of viruses that prey on single-celled bacteria.
Among these is the largest bacteriophage discovered to date: its genome, 735,000 base-pairs long, is nearly 15 times larger than that of the average phage. This largest known phage genome is much larger than the genomes of many bacteria.
"We are exploring Earth's microbiomes, and sometimes unexpected things turn up. These viruses of bacteria are a part of biology, of replicating entities, that we know very little about," said Jill Banfield, a UC Berkeley professor of earth and planetary science and of environmental science, policy and management, and senior author of a paper about the findings appearing Feb 12 in the journal Nature. "These huge phages bridge the gap between non-living bacteriophages, on the one hand, and bacteria and Archaea. There definitely seem to be successful strategies of existence that are hybrids between what we think of as traditional viruses and traditional living organisms."
Ironically, within the DNA that these huge phages lug around are parts of the CRISPR system that bacteria use to fight viruses. It's likely that once these phages inject their DNA into bacteria, the viral CRISPR system augments the CRISPR system of the host bacteria, probably mostly to target other viruses.
"It is fascinating how these phages have repurposed this system we thought of as bacterial or archaeal to use for their own benefit against their competition, to fuel warfare between these viruses," said UC Berkeley graduate student Basem Al-Shayeb. Al-Shayeb and research associate Rohan Sachdeva are co-first authors of the Nature paper.
New Cas protein
One of the huge phages also is able to make a protein analogous to the Cas9 protein that is part of the revolutionary tool CRISPR-Cas9 that Jennifer Doudna of UC Berkeley and her European colleague, Emmanuelle Charpentier, adapted for gene-editing. The team dubbed this tiny protein CasΦ, because the Greek letter Φ, or phi, has traditionally been used to denote bacteriophage.
"In these huge phages, there is a lot of potential for finding new tools for genome engineering," Sachdeva said. "A lot of the genes we found are unknown, they don't have a putative function and may be a source of new proteins for industrial, medical or agricultural applications."
Aside from providing new insight into the constant warfare between phages and bacteria, the new findings also have implications for human disease. Viruses, in general, carry genes between cells, including genes that confer resistance to antibiotics. And since phages occur wherever bacteria and Archaea live, including the human gut microbiome, they can carry damaging genes into the bacteria that colonize humans.
"Some diseases are caused indirectly by phages, because phages move around genes involved in pathogenesis and antibiotic resistance," said Banfield, who is also director of microbial research at the Innovative Genomics Institute (IGI) and a CZ Biohub investigator. "And the larger the genome, the larger the capacity you have to move around those sorts of genes, and the higher the probability that you will be able to deliver undesirable genes to bacteria in human microbiomes."
Sequencing Earth's biomes
For more than 15 years, Banfield has been exploring the diversity of bacteria, Archaea -- which, she says, are fascinating cousins of bacteria -- and phages in different environments around the planet. She does this by sequencing all the DNA in a sample and then piecing the fragments together to assemble draft genomes or, in some cases, fully curated genomes of never-before-seen microbes.
In the process, she has found that many of the new microbes have extremely tiny genomes, seemingly insufficient to sustain independent life. Instead, they appear to depend on other bacteria and archaea to survive.
One year ago, she reported that some of the largest phages, a group she called Lak phages, can be found in our guts and mouths, where they prey on gut and saliva microbiomes.
The new Nature paper came out of a more thorough search for huge phages within all the metagenomic sequences Banfield has accumulated, plus new metagenomes provided by research collaborators around the globe. The metagenomes came from baboons, pigs, Alaskan moose, soil samples, oceans, rivers, lakes and groundwater, as well as from Bangladeshis who had been drinking arsenic-tainted water.
The team identified 351 phage genomes that were more than 200 kilobases (kb) long, four times the average phage genome length of 50 kb. They were able to establish the exact length of 175 phage genomes; the others could be much larger than 200 kb. One of the complete genomes, 735,000 base pairs long, is now the largest known phage genome.
While most of the genes in these huge phages code for unknown proteins, the researchers were able to identify genes that code for proteins critical to the machinery, called the ribosome, that translates messenger RNA into protein. Such genes are not typically found in viruses, only in bacteria or archaea.
The researchers found many genes for transfer RNAs, which carry amino acids to the ribosome to be incorporated into new proteins; genes for proteins that load and regulate tRNAs; genes for proteins that turn on translation; and even pieces of the ribosome itself.
"Typically, what separates life from non-life is to have ribosomes and the ability to do translation; that is one of the major defining features that separate viruses and bacteria, non-life and life," Sachdeva said. "Some large phages have a lot of this translational machinery, so they are blurring the line a bit."
Huge phages likely use these genes to redirect the ribosomes to make more copies of their own proteins at the expense of bacterial proteins. Some huge phages also use alternative genetic codes, reassigning the nucleic acid triplets that code for specific amino acids, which could confuse the bacterial ribosome that decodes RNA.
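How a single reassigned codon changes translation can be shown with a toy translator. This is a hypothetical sketch, not code from the study: the codon table is a tiny subset of the standard code, and the TAG-to-glutamine reassignment is used purely as an illustration of the idea.

```python
# Toy translator illustrating how an alternative genetic code changes a protein.
# Codon assignments are a small subset of the standard table; the TAG -> Gln
# reassignment below is illustrative only.

STANDARD = {"ATG": "M", "AAA": "K", "CAA": "Q", "TAG": "*"}  # '*' = stop
ALTERNATE = dict(STANDARD, TAG="Q")  # same code, but TAG now encodes glutamine

def translate(dna, code):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = code[dna[i:i+3]]
        if aa == "*":          # stop codon: translation ends here
            break
        protein.append(aa)
    return "".join(protein)

gene = "ATGTAGAAA"
print(translate(gene, STANDARD))   # 'M'   -- the ribosome stops at TAG
print(translate(gene, ALTERNATE))  # 'MQK' -- TAG is read through as glutamine
```

A ribosome using the wrong table either truncates the phage's proteins or reads through the host's stop signals, which is the kind of confusion the passage describes.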
In addition, some of the newly discovered huge phages carry genes for variants of the Cas proteins found in a variety of bacterial CRISPR systems, such as the Cas9, Cas12, CasX and CasY families. CasØ is a variant of the Cas12 family. Some of the huge phages also have CRISPR arrays, which are areas of the bacterial genome where snippets of viral DNA are stored for future reference, allowing bacteria to recognize returning phages and to mobilize their Cas proteins to target and cut them up.
"The high-level conclusion is that phages with large genomes are quite prominent across Earth's ecosystems, they are not a peculiarity of one ecosystem," Banfield said. "And phages which have large genomes are related, which means that these are established lineages with a long history of large genome size. Having large genomes is one successful strategy for existence, and a strategy we know very little about."
Read more at Science Daily
Feb 13, 2020
Babies mimic songs, study finds
As part of the study, scientists captured audio of a 15-month-old boy making sounds similar to the beginning of the song "Happy Birthday," hours after he heard the song played on a toy. An analysis of the sounds showed the boy hitting the first six notes of "Happy Birthday" almost spot-on, in G major.
"We know that throughout the first year of life babies become sophisticated music listeners -- they learn a lot about the patterns of pitches and rhythms in music," said Lucia Benetti, a doctoral student at The Ohio State University School of Music and lead author of the study, which was recently published in the Journal of Research in Music Education. "And infants become better at doing this spontaneously. But we don't know much about how exactly this happens."
The study is among the first to measure an infant's attempt to recreate music by following him for an entire day.
"And what we learned is that in this one case at least, the baby is trying to sing along to songs he's hearing," Benetti said.
For the study, Benetti recorded one infant, a 15-month-old boy named James, through one 16-hour period. James wore a small, light recording device throughout the day, which captured every sound he heard and made. Benetti and her adviser, Eugenia Costa-Giomi, a professor of music education at Ohio State, then analyzed that audio data using software designed to measure language -- things like the number of adult words the baby heard and tried to say. Benetti also listened to the recording and transcribed the music he heard and the music he made, searching for patterns or places where the child seemed to mimic what he heard happening around him.
That technology has primarily been used in previous research studies to collect data that shows how babies develop language, not to understand how they might begin to learn about music.
James' parents also logged his primary activities that day -- things like napping and meals.
In the morning, James spent about 10 minutes with a toy that played the melody of "Happy Birthday." Later that evening, the device recorded James making about 10 sounds, lasting about four seconds, that resembled the beginning of "Happy Birthday."
The researchers shared the recordings with people who didn't know James and didn't know what the researchers were studying. Those listeners reported that James was trying to sing "Happy Birthday."
Shortly after James ate lunch, his mom sang the song "Rain Rain" to him twice. (You know how it goes -- "Rain, rain, go away, come again some other day.") Six hours later, James was playing with his father and started singing a version of "Rain Rain." (The research notes that he sang it for seven seconds in the key of A-flat major.)
James' dad immediately picked up on what his son was trying to do, and sang the song back to him. James repeated it.
(James' dad also made up funny lyrics -- "Scrambled eggs, scrambled eggs, makes big muscles in my legs." James didn't repeat those.)
The study shows that it's possible for babies to learn melodies from the music they hear around them, Benetti said. She said future work could examine a larger group of babies, with more data, to see whether James' response was typical.
"We could try to do it systematically to really start to understand how this learning occurs," she said. "From this study, we at least know that it happens."
The takeaway for parents? It can't hurt to sing to your kids.
"The social aspect of music is important -- if a baby sees their mother singing, they know she's engaging with that song, that she's enjoying it, and they know it must be important," Benetti said.
Fossilized insect from 100 million years ago is oldest record of primitive bee with pollen
Beetle parasites clinging to a primitive bee 100 million years ago may have caused the flight error that, while deadly for the insect, is a boon for science today.
The female bee, which became stuck in tree resin and thus preserved in amber, has been identified by Oregon State University researcher George Poinar Jr. as a new family, genus and species.
The mid-Cretaceous fossil from Myanmar provides the first record of a primitive bee with pollen and also the first record of the beetle parasites, which continue to show up on modern bees today.
The findings, published in BioOne Complete, shed new light on the early days of bees, a key component in evolutionary history and the diversification of flowering plants.
Insect pollinators aid the reproduction of flowering plants around the globe and are also ecologically critical as promoters of biodiversity. Bees are the standard bearer because they're usually present in the greatest numbers and because they're the only pollinator group that feeds exclusively on nectar and pollen throughout their life cycle.
Bees evolved from apoid wasps, which are carnivores. Not much is known, however, about the changes wasps underwent as they made that dietary transition.
Poinar, professor emeritus in the OSU College of Science and an international expert in using plant and animal life forms preserved in amber to learn more about the biology and ecology of the distant past, classified the new find as Discoscapa apicula, in the family Discoscapidae.
The fossilized bee shares traits with modern bees -- including plumose hairs, a rounded pronotal lobe, and a pair of spurs on the hind tibia -- and also those of apoid wasps, such as very low-placed antennal sockets and certain wing-vein features.
"Something unique about the new family that's not found on any extant or extinct lineage of apoid wasps or bees is a bifurcated scape," Poinar said, referring to a two-segment antennal base. "The fossil record of bees is pretty vast, but most are from the last 65 million years and look a lot like modern bees. Fossils like the one in this study can tell us about the changes certain wasp lineages underwent as they became palynivores -- pollen eaters."
Numerous pollen grains on Discoscapa apicula show the bee had recently been to one or more flowers.
Reconnecting with nature key for the health of people and the planet
Individuals who visit natural spaces weekly, and feel psychologically connected to them, report better physical and mental wellbeing, new research has shown.
Alongside the benefits to public health, those who make weekly nature visits, or feel connected to nature, are also more likely to behave in ways which promote environmental health, such as recycling and conservation activities.
The findings of the study, published in the Journal of Environmental Psychology, indicate that reconnecting with nature could be key to achieving synergistic improvements to human and planetary health.
The study was conducted by researchers at the University of Plymouth, Natural England, the University of Exeter and University of Derby, and is the first to investigate -- within a single study -- the contribution of both nature contact and connection to human health, wellbeing and pro-environmental behaviours.
The findings are based on responses to the Monitor of Engagement with the Natural Environment (MENE) survey, commissioned by Natural England as part of DEFRA's social science research programme. The team looked at people's engagement with nature through access to greenspace, nature visits and the extent to which they felt psychologically connected to the natural world.
Lead author Leanne Martin, of the University of Plymouth, said: "In the context of increasing urbanisation, it is important to understand how engagement with our planet's natural resources relates to human health and behaviour. Our results suggest that physically and psychologically reconnecting with nature can be beneficial for human health and wellbeing, and at the same time encourages individuals to act in ways which protect the health of the planet."
Marian Spain, Chief Executive of Natural England added: "It's a top priority for Natural England to unlock the potential of the natural environment to help address the challenges we are facing as a society: poor physical health and mental wellbeing; the climate change crisis and the devastating loss of wildlife.
Mars: Simulations of early impacts produce a mixed Mars mantle
An important open issue in planetary science is to determine how Mars formed and to what extent its early evolution was affected by collisions. This question is difficult to answer given that billions of years of history have steadily erased evidence of early impact events. Luckily, some of this evolution is recorded in Martian meteorites. Of approximately 61,000 meteorites found on Earth, just 200 or so are thought to be of Martian origin, ejected from the Red Planet by more recent collisions.
These meteorites exhibit large variations in iron-loving elements such as tungsten and platinum, which have a moderate to high affinity for iron. These elements tend to migrate from a planet's mantle into its central iron core during formation. Evidence of these elements in the Martian mantle, as sampled by meteorites, is important because it indicates that Mars was bombarded by planetesimals sometime after its primary core formation ended. Studying isotopes of particular elements, produced locally in the mantle via radioactive decay processes, helps scientists understand when planet formation was complete.
"We knew Mars received elements such as platinum and gold from early, large collisions. To investigate this process, we performed smoothed-particle hydrodynamics impact simulations," said SwRI's Dr. Simone Marchi, lead author of a Science Advances paper outlining these results. "Based on our model, early collisions produce a heterogeneous, marble-cake-like Martian mantle. These results suggest that the prevailing view of Mars formation may be biased by the limited number of meteorites available for study."
Based on the ratio of tungsten isotopes in Martian meteorites, it has been argued that Mars grew rapidly within about 2-4 million years after the Solar System started to form. However, large, early collisions could have altered the tungsten isotopic balance, which could support a Mars formation timescale of up to 20 million years, as shown by the new model.
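The timing argument rests on a short-lived radioactive clock. As a simplified illustration (not the authors' model): hafnium-182 decays to tungsten-182 with a half-life of roughly 8.9 million years, so the fraction of it still "live" when core formation ended differs sharply between the fast and slow accretion scenarios.

```python
import math

HALF_LIFE_HF182 = 8.9e6  # years, approximate half-life of 182Hf -> 182W

def fraction_remaining(t_years, half_life=HALF_LIFE_HF182):
    """Fraction of a parent isotope left after t_years of exponential decay."""
    return math.exp(-math.log(2) * t_years / half_life)

# Fast-accretion scenario: core forms within ~4 Myr, most 182Hf still live,
# so 182W produced afterward accumulates in the mantle.
print(f"{fraction_remaining(4e6):.2f}")   # ~0.73 remaining after 4 Myr

# Slow scenario allowed by the new impact model: ~20 Myr,
# most 182Hf has already decayed before the core finishes forming.
print(f"{fraction_remaining(20e6):.2f}")  # ~0.21 remaining after 20 Myr
```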
"Collisions by projectiles large enough to have their own cores and mantles could result in a heterogeneous mixture of those materials in the early Martian mantle," said co-author Dr. Robin Canup, assistant vice president of SwRI's Space Science and Engineering Division. "This can lead to different interpretations on the timing of Mars' formation than those that assume that all projectiles are small and homogenous."
The Martian meteorites that landed on Earth probably originated from just a few localities around the planet. The new research shows that the Martian mantle could have received varying additions of projectile materials, leading to variable concentrations of iron-loving elements. The next generation of Mars missions, including plans to return samples to Earth, will provide new information to better understand the variability of iron-loving elements in Martian rocks and the early evolution of the Red Planet.
Read more at Science Daily
Antibiotics discovered that kill bacteria in a new way
The newly-found corbomycin and the lesser-known complestatin have a never-before-seen way to kill bacteria, which is achieved by blocking the function of the bacterial cell wall. The discovery comes from a family of antibiotics called glycopeptides that are produced by soil bacteria.
The researchers also demonstrated in mice that these new antibiotics can block infections caused by drug-resistant Staphylococcus aureus, a bacterium that can cause many serious infections.
The findings were published in Nature today.
"Bacteria have a wall around the outside of their cells that gives them shape and is a source of strength," said study first author Beth Culp, a PhD candidate in biochemistry and biomedical sciences at McMaster.
"Antibiotics like penicillin kill bacteria by preventing the building of the wall, but the antibiotics we found actually work by doing the opposite -- they prevent the wall from being broken down. This is critical for cells to divide.
"In order for a cell to grow, it has to divide and expand. If you completely block the breakdown of the wall, it is like it is trapped in a prison, and can't expand or grow."
Looking at the family tree of known members of the glycopeptides, researchers studied the genes of those lacking known resistance mechanisms, with the idea they might be an antibiotic demonstrating a different way to attack bacteria.
"We hypothesized that if the genes that made these antibiotics were different, maybe the way they killed the bacteria was also different," said Culp.
The group confirmed that the bacterial wall was the site of action of these new antibiotics using cell imaging techniques in collaboration with Yves Brun and his team from the Université de Montréal.
Culp said: "This approach can be applied to other antibiotics and help us discover new ones with different mechanisms of action. We found one completely new antibiotic in this study, but since then, we've found a few others in the same family that have this same new mechanism."
Feb 12, 2020
Distant giant planets form differently than 'failed stars'
A team of astronomers led by Brendan Bowler of The University of Texas at Austin has probed the formation process of giant exoplanets and brown dwarfs, a class of objects that are more massive than giant planets, but not massive enough to ignite nuclear fusion in their cores to shine like true stars.
Using direct imaging with ground-based telescopes in Hawaii -- W. M. Keck Observatory and Subaru Telescope on Maunakea -- the team studied the orbits of these faint companions orbiting stars in 27 systems. These data, combined with modeling of the orbits, allowed them to determine that the brown dwarfs in these systems formed like stars, but the gas giants formed like planets.
The research is published in the current issue of The Astronomical Journal.
In the last two decades, technological leaps have allowed telescopes to separate the light from a parent star and a much-dimmer orbiting object. In 1995, this new capability produced the first direct images of a brown dwarf orbiting a star. The first direct image of planets orbiting another star followed in 2008.
"Over the past 20 years, we've been leaping down and down in mass," Bowler said of the direct imaging capability, noting that the current limit is about 1 Jupiter mass. As the technology has improved, "One of the big questions that has emerged is 'What's the nature of the companions we're finding?'"
Brown dwarfs, as defined by astronomers, have masses between 13 and 75 Jupiter masses. They have characteristics in common with both planets and stars, and Bowler and his team wanted to settle the question: Are gas giant planets on the outer fringes of planetary systems the tip of the planetary iceberg, or the low-mass end of brown dwarfs? Past research has shown that brown dwarfs orbiting stars likely formed like low-mass stars, but the lowest-mass companion this formation mechanism can produce has been less clear.
"One way to get at this is to study the dynamics of the system -- to look at the orbits," Bowler said. Their orbits today hold the key to unlocking their evolution.
Using Keck Observatory's adaptive optics (AO) system with the Near-Infrared Camera, second generation (NIRC2) instrument on the Keck II telescope, as well as the Subaru Telescope, Bowler's team took images of giant planets and brown dwarfs as they orbit their parent stars.
It's a long process. The gas giants and brown dwarfs they studied are so distant from their parent stars that one orbit may take hundreds of years. To determine even a small percentage of the orbit, "You take an image, you wait a year," for the faint companion to travel a bit, Bowler said. Then "you take another image, you wait another year."
This research relied on AO technology, which allows astronomers to correct for distortions caused by the Earth's atmosphere. As AO instruments have continually improved over the past three decades, more brown dwarfs and giant planets have been directly imaged. But since most of these discoveries have been made over the past decade or two, the team only has images corresponding to a few percent of each object's total orbit. They combined their new observations of 27 systems with all of the previous observations published by other astronomers or available in telescope archives.
At this point, computer modeling comes in. Coauthors on this paper have helped create an orbit-fitting code called "Orbitize!" which uses Kepler's laws of planetary motion to identify which types of orbits are consistent with the measured positions, and which are not.
The code generates a set of possible orbits for each companion. The slight motion of each giant planet or brown dwarf forms a "cloud" of possible orbits. The smaller the cloud, the more astronomers are closing in on the companion's true orbit. And more data points -- that is, more direct images of each object as it orbits -- will refine the shape of the orbit.
"Rather than wait decades or centuries for a planet to complete one orbit, we can make up for the shorter time baseline of our data with very accurate position measurements," said team member Eric Nielsen of Stanford University. "A part of Orbitize! that we developed specifically to fit partial orbits, OFTI [Orbits For The Impatient], allowed us to find orbits even for the longest period companions."
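The rejection-sampling idea behind OFTI can be sketched in a few lines. This is a toy model, not the actual Orbitize! code: it assumes a face-on orbit around a one-solar-mass star and fits only star-companion separations, but it shows how a short observed arc leaves a broad "cloud" of very different orbits that all fit the data.

```python
import math
import random

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E."""
    E = M if e < 0.8 else math.pi   # standard safe starting guess
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def separation(t, a, e, tp):
    """Star-companion distance (AU) at time t (yr); 1-solar-mass primary."""
    P = a ** 1.5                                  # Kepler's third law, yr
    M = 2.0 * math.pi * ((t - tp) % P) / P        # mean anomaly
    E = kepler_E(M, e)
    return a * (1.0 - e * math.cos(E))

# "Observations": a true circular orbit (a = 40 AU) measured at three epochs
# spanning just 2 years -- a tiny fraction of its ~253-year period.
epochs = [0.0, 1.0, 2.0]
observed = [40.0, 40.0, 40.0]
tolerance = 0.5                                   # AU, assumed uncertainty

random.seed(42)
accepted = []                                     # eccentricities that fit
for _ in range(50000):
    a = random.uniform(25.0, 55.0)                # semi-major axis, AU
    e = random.uniform(0.0, 0.9)                  # eccentricity
    tp = random.uniform(0.0, a ** 1.5)            # time of periastron, yr
    if all(abs(separation(t, a, e, tp) - r) < tolerance
           for t, r in zip(epochs, observed)):
        accepted.append(e)

# A short arc leaves a broad cloud: wildly different eccentricities all fit.
print(len(accepted), min(accepted), max(accepted))
```

More epochs (a longer arc) shrink the accepted set toward the true orbit, which is why each additional year of imaging refines the cloud.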
Finding the shape of the orbit is key: Objects that have more circular orbits probably formed like planets. That is, when a cloud of gas and dust collapsed to form a star, the distant companion (and any other planets) formed out of a flattened disk of gas and dust rotating around that star.
On the other hand, the ones that have more elongated orbits probably formed like stars. In this scenario, a clump of gas and dust was collapsing to form a star, but it fractured into two clumps. Each clump then collapsed, one forming a star, and the other a brown dwarf orbiting around that star. This is essentially a binary star system, albeit containing one real star and one "failed star."
"Even though these companions are millions of years old, the memory of how they formed is still encoded in their present-day eccentricity," Nielsen added. Eccentricity is a measure of how circular or elongated an object's orbit is.
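Eccentricity itself follows from the orbit's nearest and farthest points via the standard relation e = (r_apo - r_peri) / (r_apo + r_peri), sketched here with illustrative numbers:

```python
def eccentricity(r_peri, r_apo):
    """Orbital eccentricity from periastron and apoastron distances."""
    return (r_apo - r_peri) / (r_apo + r_peri)

print(eccentricity(40.0, 40.0))  # 0.0  -- circular, planet-like orbit
print(eccentricity(10.0, 70.0))  # 0.75 -- elongated, star-like orbit
```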
The results of the team's study of 27 distant companions were unambiguous.
"The punchline is, we found that when you divide these objects at this canonical boundary of more than about 15 Jupiter masses, the things that we've been calling planets do indeed have more circular orbits, as a population, compared to the rest," Bowler said. "And the rest look like binary stars."
The future of this work involves both continuing to monitor these 27 objects and identifying new ones to widen the study. "The sample size is still modest, at the moment," Bowler said. His team is using the Gaia satellite to look for additional candidates to follow up using direct imaging with even greater sensitivity at the forthcoming Giant Magellan Telescope (GMT) and other facilities. UT-Austin is a founding member of the GMT collaboration.
Bowler's team's results reinforce similar conclusions recently reached by the GPIES direct imaging survey with the Gemini Planet Imager, which found evidence for a different formation channel for brown dwarfs and giant planets based on their statistical properties.
Using direct imaging with ground-based telescopes in Hawaii -- W. M. Keck Observatory and Subaru Telescope on Maunakea -- the team studied the orbits of these faint companions orbiting stars in 27 systems. These data, combined with modeling of the orbits, allowed them to determine that the brown dwarfs in these systems formed like stars, but the gas giants formed like planets.
The research is published in the current issue of The Astronomical Journal.
In the last two decades, technological leaps have allowed telescopes to separate the light from a parent star and a much-dimmer orbiting object. In 1995, this new capability produced the first direct images of a brown dwarf orbiting a star. The first direct image of planets orbiting another star followed in 2008.
"Over the past 20 years, we've been leaping down and down in mass," Bowler said of the direct imaging capability, noting that the current limit is about 1 Jupiter mass. As the technology has improved, "One of the big questions that has emerged is 'What's the nature of the companions we're finding?'"
Brown dwarfs, as defined by astronomers, have masses between 13 and 75 Jupiter masses. They have characteristics in common with both planets and with stars, and Bowler and his team wanted to settle the question: Are gas giant planets on the outer fringes of planetary systems the tip of the planetary iceberg, or the low-mass end of brown dwarfs? Past research has shown that brown dwarfs orbiting stars likely formed like low-mass stars, but it's been less clear what is the lowest mass companion this formation mechanism can produce.
"One way to get at this is to study the dynamics of the system -- to look at the orbits," Bowler said. Their orbits today hold the key to unlocking their evolution.
Using Keck Observatory's adaptive optics (AO) system with the Near-Infrared Camera, second generation (NIRC2) instrument on the Keck II telescope, as well as the Subaru Telescope, Bowler's team took images of giant planets and brown dwarfs as they orbit their parent stars.
It's a long process. The gas giants and brown dwarfs they studied are so distant from their parent stars that one orbit may take hundreds of years. To determine even a small percentage of the orbit, "You take an image, you wait a year," for the faint companion to travel a bit, Bowler said. Then "you take another image, you wait another year."
This research relied on AO technology, which allows astronomers to correct for distortions caused by the Earth's atmosphere. As AO instruments have continually improved over the past three decades, more brown dwarfs and giant planets have been directly imaged. But since most of these discoveries have been made over the past decade or two, the team only has images corresponding to a few percent of each object's total orbit. They combined their new observations of 27 systems with all of the previous observations published by other astronomers or available in telescope archives.
At this point, computer modeling comes in. Coauthors on this paper have helped create an orbit-fitting code called "Orbitize!" which uses Kepler's laws of planetary motion to identify which types of orbits are consistent with the measured positions, and which are not.
The code generates a set of possible orbits for each companion. The slight motion of each giant planet or brown dwarf forms a "cloud" of possible orbits. The smaller the cloud, the more astronomers are closing in on the companion's true orbit. And more data points -- that is, more direct images of each object as it orbits -- will refine the shape of the orbit.
"Rather than wait decades or centuries for a planet to complete one orbit, we can make up for the shorter time baseline of our data with very accurate position measurements," said team member Eric Nielsen of Stanford University. "A part of Orbitize! that we developed specifically to fit partial orbits, OFTI [Orbits For The Impatient], allowed us to find orbits even for the longest period companions."
Finding the shape of the orbit is key: Objects that have more circular orbits probably formed like planets. That is, when a cloud of gas and dust collapsed to form a star, the distant companion (and any other planets) formed out of a flattened disk of gas and dust rotating around that star.
On the other hand, the ones that have more elongated orbits probably formed like stars. In this scenario, a clump of gas and dust was collapsing to form a star, but it fractured into two clumps. Each clump then collapsed, one forming a star, and the other a brown dwarf orbiting around that star. This is essentially a binary star system, albeit containing one real star and one "failed star."
"Even though these companions are millions of years old, the memory of how they formed is still encoded in their present-day eccentricity," Nielsen added. Eccentricity is a measure of how circular or elongated an object's orbit is.
The results of the team's study of 27 distant companions were unambiguous.
"The punchline is, we found that when you divide these objects at this canonical boundary of more than about 15 Jupiter masses, the things that we've been calling planets do indeed have more circular orbits, as a population, compared to the rest," Bowler said. "And the rest look like binary stars."
The future of this work involves both continuing to monitor these 27 objects, as well as identifying new ones to widen the study. "The sample size is still modest, at the moment," Bowler said. His team is using the Gaia satellite to look for additional candidates to follow up using direct imaging with even greater sensitivity at the forthcoming Giant Magellan Telescope (GMT) and other facilities. UT-Austin is a founding member of the GMT collaboration.
Bowler's team's results reinforce similar conclusions recently reached by the GPIES direct imaging survey with the Gemini Planet Imager, which found evidence for a different formation channel for brown dwarfs and giant planets based on their statistical properties.
Read more at Science Daily
Long-distance skiers may have 'motor reserve' that can delay onset of Parkinson's disease
To better understand the relationship between physical activity and Parkinson's disease (PD), investigators in Sweden analyzed medical records of nearly 200,000 long-distance skiers who took part in the Vasaloppet cross-country ski race. They established that a physically active lifestyle is associated with close to a 30% reduced risk for PD, which might be explained by a motor reserve among the physically active; this protection, however, dissipates as individuals age. Their results are published in the Journal of Parkinson's Disease (JPD).
Studies have shown the enormous benefits of exercise in many disorders including neurodegenerative diseases, but the reasons are not always clear. "Exercise seems to protect against the motor symptoms of PD but not necessarily against the brain damage caused by PD," explained co-lead investigator Tomas T. Olsson, MD, Department of Neurology, Skåne University Hospital, and Department of Experimental Medical Science, Experimental Dementia Research Unit, Lund University, Lund, Sweden.
"To understand the mechanisms behind the protective effects of exercise it is very important to establish whether exercise gives people a greater reserve or direct protection," noted co-lead investigator Martina Svensson, Department of Experimental Medical Science, Experimental Neuroinflammation Laboratory, Lund University, Lund, Sweden.
To investigate the degree to which physical activity is associated with long-term lower risk of PD, and whether this association can be explained by physically active people being able to sustain more PD neuropathology before the onset of clinical symptoms, investigators analyzed long-term data about the incidence of PD among long-distance skiers. They followed 197,685 participants (median age 36 years; 38% women) in the Vasaloppet, an annual cross-country ski race of up to 90 km, from 1989 to 2010 and compared them to 197,684 age-matched non-skiers. Incidence of PD was taken from the Swedish National Patient Registry.
Investigators found that the skiers were almost 30% less likely to develop PD than non-skiers. However, this protection dissipated with time and increasing age, with PD diagnoses among skiers eventually matching those in the general population.
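The comparison behind a figure like "almost 30% less likely" is a relative risk: the incidence in the exposed group divided by the incidence in the unexposed group. The case counts below are hypothetical, chosen only to illustrate the arithmetic at the study's group sizes; the paper itself reports registry-based estimates, not these numbers.

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical case counts at the study's cohort sizes (197,685 skiers,
# 197,684 matched non-skiers), for illustration only.
rr = relative_risk(233, 197_685, 331, 197_684)
print(f"relative risk: {rr:.2f}")   # ~0.70, i.e. close to a 30% lower risk
```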
"We speculate that this would be consistent with the hypothesis that individuals who are physically well-trained have a greater motor reserve, which for every given level of Parkinson's brain damage would result in less motor symptoms thus delaying the diagnosis of PD," noted senior investigator Tomas Deierborg, PhD, Department of Experimental Medical Science, Experimental Neuroinflammation Laboratory, Lund University, Lund, Sweden. "This is analogous to the well-established concept of cognitive reserve in dementia in which the well-educated can sustain more brain pathology without clinical dementia. It highlights the importance of staying physically active throughout life in order to have a reserve to better cope when the frailties and diseases of old age inevitably arrive."
"If a person is physically active, it may be possible to maintain mobility for longer, despite the pathological changes in the brain," added Dr. Olsson.
JPD's Co-Editor-in-Chief Bastiaan R. Bloem, MD, PhD, Director, Radboudumc Center of Expertise for Parkinson & Movement Disorders, Nijmegen, The Netherlands, commented, "There is an enormous interest in developing new therapies that can help to lower the risk of developing PD. This present study by Olsson and colleagues is particularly exciting in that regard, because it suggests that a readily available intervention -- exercise -- can actually achieve this. The study also provides an explanation why exercise does not offer a complete protection against PD; it supports the motor reserve of the brain, and as such, probably helps to postpone rather than fully prevent the onset of manifest Parkinson symptoms."
The use of jargon kills people's interest in science, politics
When scientists and others use their specialized jargon terms while communicating with the general public, the effects are much worse than just making what they're saying hard to understand.
In a new study, people exposed to jargon when reading about subjects like self-driving cars and surgical robots later said they were less interested in science than others who read about the same topics, but without the use of specialized terms.
They were also less likely to think they were good at science, felt less informed about science and felt less qualified to discuss science topics.
Crucially, it made no difference if the jargon terms -- like "vigilance decrement" and "laparoscopy" -- were defined in the text: Even when the terms were defined, readers still felt just as disengaged as readers who read jargon that wasn't explained.
The problem is that the mere presence of jargon sends a discouraging message to readers, said Hillary Shulman, lead author of the study and assistant professor of communication at The Ohio State University.
"The use of difficult, specialized words are a signal that tells people that they don't belong," Shulman said.
"You can tell them what the terms mean, but it doesn't matter. They already feel like that this message isn't for them."
This new study is the latest in a series by Shulman and her colleagues that shows how complex language in politics, as well as science, can lead people to tune out.
"Politics is where I started," Shulman said.
"We have found that when you use more colloquial language when talking to people about issues like immigration policy, they report more interest in politics, more ability to understand political information and more confidence in their political opinions."
Shulman and colleagues have now studied language and public engagement involving about 20 different political and science topics, all with the same results.
"We can get citizens to engage with complex political and scientific issues if we communicate to them in language that they understand," she said.
The latest study was published online recently in the Journal of Language and Social Psychology and will appear in a future print edition.
In the study, 650 adults participated online. They read a paragraph about each of three science and technology topics: self-driving cars, surgical robots and 3D bio-printing.
About half of them read versions of the paragraphs with no jargon and half read versions with jargon.
For example, one of the sentences in the high-jargon version of the surgical robots paragraph read: "This system works because of AI integration through motion scaling and tremor reduction."
The no-jargon version of that same sentence read: "This system works because of programming that makes the robot's movements more precise and less shaky."
Half of the people who read the high-jargon versions were also offered the opportunity to see the jargon terms defined. When they held their computer mouse over an underlined jargon term, a text box appeared with the definition -- the exact language used in the no-jargon version.
After reading each paragraph, participants rated how easy it was to read.
After they read all three paragraphs, participants completed a variety of measures examining issues like their interest in science and how much they thought they knew about science.
As expected, participants who read the high-jargon paragraphs thought they were more difficult to read than did those who read the no-jargon descriptions -- even if they had the definitions available to them.
"What we found is that giving people definitions didn't matter at all -- it had no effect on how difficult they thought the reading was," Shulman said.
Being exposed to jargon had a variety of negative effects on readers, the study showed.
"Exposure to jargon led people to report things like 'I'm not really good at science,' 'I'm not interested in learning about science,' and 'I'm not well qualified to participate in science discussions,'" Shulman said.
But people who read no-jargon versions felt more empowered.
"They were more likely to say they understood what they read because they were a science kind of person, that they liked science and considered themselves knowledgeable."
There's an even darker side, though, to how people react when they are exposed to jargon-filled explanations of science.
In an earlier study with these same participants, published in the journal Public Understanding of Science, the researchers found that reading jargon led people to not believe the science.
"When you have a difficult time processing the jargon, you start to counter-argue. You don't like what you're reading. But when it is easier to read, you are more persuaded and you're more likely to support these technologies," she said.
"You can see how important it is to communicate clearly when you're talking about complex science subjects like climate change or vaccines."
Shulman said that the use of jargon is appropriate with scientific audiences. But scientists and science communicators who want to communicate with the public need to modify their language, beginning with eliminating jargon.
Researchers stimulate areas vital to consciousness in monkeys' brains -- and it wakes them up
Previous studies, including EEG and fMRI studies in humans, had suggested that certain areas of the brain, including the parietal cortex and the thalamus, appear to be involved in consciousness. "We decided to go beyond the classical approach of recording from one area at a time," says senior author Yuri Saalmann, an assistant professor at the University of Wisconsin, Madison. "We recorded from multiple areas at the same time to see how the entire network behaves."
The investigators used macaques as their animal model. By studying awake, sleeping, and anesthetized animals, they were able to narrow down the region of the brain involved in consciousness to a much more specific area than other studies have done. They were also able to rule out some areas that had been proposed in previous studies of the neural correlates of consciousness. They ultimately focused on the central lateral thalamus, which is found deep in the forebrain.
Once the researchers pinpointed this area, they tested what happened when the central lateral thalamus was activated while the animals were under anesthesia, stimulating the region with a frequency of 50 Hz. "We found that when we stimulated this tiny little brain area, we could wake the animals up and reinstate all the neural activity that you'd normally see in the cortex during wakefulness," Saalmann says. "They acted just as they would if they were awake. When we switched off the stimulation, the animals went straight back to being unconscious."
One test of wakefulness was the animals' neural responses to oddball auditory stimulation -- a series of beeps interspersed with other random sounds. The stimulated animals responded in the same way that awake animals would.
"Our electrodes have a very different design," Saalmann says. "They are much more tailored to the shape of the structure in the brain we want to stimulate. They also more closely mimic the electrical activity that's seen in a healthy, normal system."
"The overriding motivation of this research is to help people with disorders of consciousness to live better lives," says first author Michelle Redinbaugh, a graduate student in the Department of Psychology at the University of Wisconsin, Madison. "We have to start by understanding the minimum mechanism that is necessary or sufficient for consciousness, so that the correct part of the brain can be targeted clinically."
"There are many exciting implications for this work," she says. "It's possible we may be able to use these kinds of deep-brain stimulating electrodes to bring people out of comas. Our findings may also be useful for developing new ways to monitor patients under clinical anesthesia, to make sure they are safely unconscious."
Feb 11, 2020
Using sound and light to generate ultra-fast data transfer
Researchers have made a breakthrough in the control of terahertz quantum cascade lasers, which could lead to the transmission of data at the rate of 100 gigabits per second -- around one thousand times quicker than Fast Ethernet operating at 100 megabits a second.
What distinguishes terahertz quantum cascade lasers from other lasers is the fact that they emit light in the terahertz range of the electromagnetic spectrum. They have applications in the field of spectroscopy where they are used in chemical analysis.
The lasers could also eventually provide ultra-fast, short-hop wireless links where large datasets have to be transferred across hospital campuses or between research facilities at universities -- or in satellite communications.
To be able to send data at these increased speeds, the lasers need to be modulated very rapidly: switching on and off or pulsing around 100 billion times every second.
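The arithmetic connecting modulation speed to data rate is straightforward under the simplest encoding: if each on/off pulse carries one bit, 100 billion pulses per second gives 100 Gbit/s. The one-bit-per-pulse assumption is this sketch's simplification; real links can pack more bits into each symbol.

```python
modulation_rate_hz = 100e9      # on/off pulses per second (100 billion)
bits_per_pulse = 1              # simple on-off keying: one bit per pulse

laser_rate = modulation_rate_hz * bits_per_pulse    # bits per second
fast_ethernet = 100e6           # Fast Ethernet: 100 megabits per second

print(f"laser link: {laser_rate / 1e9:.0f} Gbit/s")
print(f"speed-up over Fast Ethernet: {laser_rate / fast_ethernet:.0f}x")
```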
Engineers and scientists have so far failed to develop a way of achieving this.
A research team from the University of Leeds and the University of Nottingham believes it has found a way of delivering ultra-fast modulation by combining the power of acoustic and light waves. They have published their findings today (February 11) in Nature Communications.
John Cunningham, Professor of Nanoelectronics at Leeds, said: "This is exciting research. At the moment, the system for modulating a quantum cascade laser is electrically driven -- but that system has limitations.
"Ironically, the same electronics that delivers the modulation usually puts a brake on the speed of the modulation. The mechanism we are developing relies instead on acoustic waves."
A quantum cascade laser is very efficient. As an electron passes through the optical component of the laser, it goes through a series of 'quantum wells' where the energy level of the electron drops and a photon or pulse of light energy is emitted.
One electron is capable of emitting multiple photons. It is this process that is controlled during the modulation.
Instead of using external electronics, the teams of researchers at Leeds and Nottingham Universities used acoustic waves to vibrate the quantum wells inside the quantum cascade laser.
The acoustic waves were generated by the impact of a pulse from another laser onto an aluminium film. This caused the film to expand and contract, sending a mechanical wave through the quantum cascade laser.
Tony Kent, Professor of Physics at Nottingham, said: "Essentially, what we did was use the acoustic wave to shake the intricate electronic states inside the quantum cascade laser. We could then see that its terahertz light output was being altered by the acoustic wave."
Professor Cunningham added: "We did not reach a situation where we could stop and start the flow completely, but we were able to control the light output by a few percent, which is a great start.
"We believe that with further refinement, we will be able to develop a new mechanism for complete control of the photon emissions from the laser, and perhaps even integrate structures generating sound with the terahertz laser, so that no external sound source is needed."
To slow an epidemic, focus on handwashing
A new study estimates that improving the rates of handwashing by travelers passing through just 10 of the world's leading airports could significantly reduce the spread of many infectious diseases. And the greater the improvement in people's handwashing habits at airports, the more dramatic the effect on slowing the disease, the researchers found.
The findings, which deal with infectious diseases in general including the flu, were published in late December, just before the recent coronavirus outbreak in Wuhan, China, but the study's authors say that its results would apply to any such disease and are relevant to the current outbreak.
The study, which is based on epidemiological modeling and data-based simulations, appears in the journal Risk Analysis. The authors are Professor Christos Nicolaides PhD '14 of the University of Cyprus, who is also a fellow at the MIT Sloan School of Management; Professor Ruben Juanes of MIT's Department of Civil and Environmental Engineering; and three others.
People can be surprisingly casual about washing their hands, even in crowded locations like airports where people from many different locations are touching surfaces such as chair armrests, check-in kiosks, security checkpoint trays, and restroom doorknobs and faucets. Based on data from previous research by groups including the American Society for Microbiology, the team estimates that on average, only about 20 percent of people in airports have clean hands -- meaning that they have been washed with soap and water, for at least 15 seconds, within the last hour or so. The other 80 percent are potentially contaminating everything they touch with whatever germs they may be carrying, Nicolaides says.
"Seventy percent of the people who go to the toilet wash their hands afterwards," Nicolaides says, about findings from a previous ASM study. "The other 30 percent don't. And of those that do, only 50 percent do it right." Others just rinse briefly in some water, rather than using soap and water and spending the recommended 15 to 20 seconds washing, he says. That figure, combined with estimates of exposure to the many potentially contaminated surfaces that people come into contact with in an airport, leads to the team's estimate that about 20 percent of travelers in an airport have clean hands.
Improving handwashing at all of the world's airports to triple that rate, so that 60 percent of travelers have clean hands at any given time, would have the greatest impact, potentially slowing global disease spread by almost 70 percent, the researchers found. Deploying such measures at so many airports and reaching such a high level of compliance may be impractical, but the new study suggests that a significant reduction in disease spread could still be achieved by picking just the 10 most significant airports based on the initial location of a viral outbreak. Focusing handwashing messaging in those 10 airports could potentially slow the disease spread by as much as 37 percent, the researchers estimate.
They arrived at these estimates using detailed epidemiological simulations that involved data on worldwide flights including duration, distance, and interconnections; estimates of wait times at airports; and studies on typical rates of interactions of people with various elements of their surroundings and with other people.
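The paper's model couples flight networks, wait times, and contact rates, but the flavor of the reported numbers can be roughly reproduced with a back-of-the-envelope sketch: suppose a germ transfers via a surface only when both the traveler who deposits it and the traveler who picks it up have unwashed hands. That both-parties assumption is this sketch's simplification, not the paper's actual mechanism.

```python
def spread_rate(clean_fraction):
    """Toy surface-transmission model: a germ moves between travelers only
    if both the depositing and the receiving traveler have unwashed hands."""
    dirty = 1.0 - clean_fraction
    return dirty * dirty

baseline = spread_rate(0.20)    # status quo: ~20% of travelers have clean hands

for clean in (0.30, 0.60):
    reduction = 1.0 - spread_rate(clean) / baseline
    print(f"clean hands {clean:.0%}: spread slowed by {reduction:.0%}")
```

Under this toy model, raising clean hands from 20% to 30% slows spread by about 23%, and raising it to 60% slows spread by 75% -- in the same ballpark as the study's ~24% and almost-70% estimates, though the real figures come from the full network simulation.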
Even small improvements in hygiene could make a noticeable dent. Increasing the prevalence of clean hands in all airports worldwide by just 10 percent, which the researchers think could potentially be accomplished through education, posters, public announcements, and perhaps improved access to handwashing facilities, could slow the global rate of the spread of a disease by about 24 percent, they found. Numerous studies have shown that such measures can increase rates of proper handwashing, Nicolaides says.
"Eliciting an increase in hand-hygiene is a challenge," he says, "but new approaches in education, awareness, and social-media nudges have proven to be effective in hand-washing engagement."
The researchers used data from previous studies on the effectiveness of handwashing in controlling transmission of disease, so Juanes says these data would have to be calibrated in the field to obtain refined estimates of the slow-down in spreading of a specific outbreak.
The findings are consistent with recommendations made by both the U.S. Centers for Disease Control and the World Health Organization. Both have indicated that hand hygiene is the most efficient and cost-effective way to control disease propagation. While both organizations say that other measures can also play a useful role in limiting disease spread, such as use of surgical face masks, airport closures, and travel restrictions, hand hygiene is still the first line of defense -- and an easy one for individuals to implement.
While the potential of better hand hygiene in controlling transmission of diseases between individuals has been extensively studied and proven, this study is one of the first to quantitatively assess the effectiveness of such measures as a way to mitigate the risk of a global epidemic or pandemic, the authors say.
The researchers identified 120 airports that are the most influential in spreading disease, and found that these are not necessarily the ones with the most overall traffic. For example, they cite the airports in Tokyo and Honolulu as having an outsized influence because of their locations. While they respectively rank 46th and 117th in terms of overall traffic, they can contribute significantly to the spread of disease because they have direct connections to some of the world's biggest airport hubs, they have long-range direct international flights, and they sit squarely between the global East and West.
For any given disease outbreak, identifying the 10 airports from this list that are the closest to the location of the outbreak, and focusing handwashing education at those 10 turned out to be the most effective way of limiting the disease spread, they found.
Read more at Science Daily
The findings, which deal with infectious diseases in general including the flu, were published in late December, just before the recent coronavirus outbreak in Wuhan, China, but the study's authors say that its results would apply to any such disease and are relevant to the current outbreak.
The study, which is based on epidemiological modeling and data-based simulations, appears in the journal Risk Analysis. The authors are Professor Christos Nicolaides PhD '14 of the University of Cyprus, who is also a fellow at the MIT Sloan School of Management; Professor Ruben Juanes of MIT's Department of Civil and Environmental Engineering; and three others.
People can be surprisingly casual about washing their hands, even in crowded locations like airports where people from many different locations are touching surfaces such as chair armrests, check-in kiosks, security checkpoint trays, and restroom doorknobs and faucets. Based on data from previous research by groups including the American Society for Microbiology, the team estimates that on average, only about 20 percent of people in airports have clean hands -- meaning that they have been washed with soap and water, for at least 15 seconds, within the last hour or so. The other 80 percent are potentially contaminating everything they touch with whatever germs they may be carrying, Nicolaides says.
"Seventy percent of the people who go to the toilet wash their hands afterwards," Nicolaides says, about findings from a previous ASM study. "The other 30 percent don't. And of those that do, only 50 percent do it right." Others just rinse briefly in some water, rather than using soap and water and spending the recommended 15 to 20 seconds washing, he says. That figure, combined with estimates of exposure to the many potentially contaminated surfaces that people come into contact with in an airport, leads to the team's estimate that about 20 percent of travelers in an airport have clean hands.
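The back-of-envelope chain behind the 20 percent figure can be sketched in a few lines. The washing rates below come from the ASM numbers quoted above; the recontamination fraction is not stated in the article, so the sketch instead solves for the value implied by the team's 20 percent estimate:

```python
# Rough sketch of the "20 percent clean hands" estimate.
# wash_rate and proper come from the ASM figures quoted in the article;
# the recontamination fraction is back-solved, not taken from the study.

wash_rate = 0.70   # fraction who wash hands after using the toilet
proper = 0.50      # of those, fraction who wash properly (soap, 15-20 s)

properly_washed = wash_rate * proper   # 0.35 leave the restroom with clean hands
clean_estimate = 0.20                  # the team's airport-wide figure

# Fraction of properly washed travelers who must have re-contaminated
# their hands (kiosks, trays, armrests...) for the 20% figure to hold.
implied_recontam = 1 - clean_estimate / properly_washed

print(f"properly washed: {properly_washed:.0%}")   # 35%
print(f"implied recontamination: {implied_recontam:.0%}")  # ~43%
```

So roughly four in ten of even the properly washed travelers would need to have touched a contaminated surface since washing, which is consistent with the exposure estimates the team describes.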
Improving handwashing at all of the world's airports to triple that rate, so that 60 percent of travelers have clean hands at any given time, would have the greatest impact, potentially slowing global disease spread by almost 70 percent, the researchers found. Deploying such measures at so many airports and reaching such a high level of compliance may be impractical, but the new study suggests that a significant reduction in disease spread could still be achieved by just picking the 10 most significant airports based on the initial location of a viral outbreak. Focusing handwashing messaging in those 10 airports could potentially slow the disease spread by as much as 37 percent, the researchers estimate.
They arrived at these estimates using detailed epidemiological simulations that involved data on worldwide flights including duration, distance, and interconnections; estimates of wait times at airports; and studies on typical rates of interactions of people with various elements of their surroundings and with other people.
Even small improvements in hygiene could make a noticeable dent. Increasing the prevalence of clean hands in all airports worldwide by just 10 percent, which the researchers think could potentially be accomplished through education, posters, public announcements, and perhaps improved access to handwashing facilities, could slow the global rate of the spread of a disease by about 24 percent, they found. Numerous studies have shown that such measures can increase rates of proper handwashing, Nicolaides says.
"Eliciting an increase in hand-hygiene is a challenge," he says, "but new approaches in education, awareness, and social-media nudges have proven to be effective in hand-washing engagement."
The researchers used data from previous studies on the effectiveness of handwashing in controlling transmission of disease, and Juanes says these data would have to be calibrated in the field to obtain refined estimates of the slowdown in the spread of a specific outbreak.
The findings are consistent with recommendations made by both the U.S. Centers for Disease Control and the World Health Organization. Both have indicated that hand hygiene is the most efficient and cost-effective way to control disease propagation. While both organizations say that other measures can also play a useful role in limiting disease spread, such as use of surgical face masks, airport closures, and travel restrictions, hand hygiene is still the first line of defense -- and an easy one for individuals to implement.
While the potential of better hand hygiene in controlling transmission of diseases between individuals has been extensively studied and proven, this study is one of the first to quantitatively assess the effectiveness of such measures as a way to mitigate the risk of a global epidemic or pandemic, the authors say.
The researchers identified 120 airports that are the most influential in spreading disease, and found that these are not necessarily the ones with the most overall traffic. For example, they cite the airports in Tokyo and Honolulu as having an outsized influence because of their locations. While they respectively rank 46th and 117th in terms of overall traffic, they can contribute significantly to the spread of disease because they have direct connections to some of the world's biggest airport hubs, they have long-range direct international flights, and they sit squarely between the global East and West.
For any given disease outbreak, identifying the 10 airports from this list that are the closest to the location of the outbreak, and focusing handwashing education at those 10 turned out to be the most effective way of limiting the disease spread, they found.
Read more at Science Daily
A happy partner leads to a healthier future
Science now supports the saying, "happy wife, happy life." Michigan State University research found that those who are optimistic contribute to the health of their partners, staving off the risk factors leading to Alzheimer's disease, dementia and cognitive decline as they grow old together.
"We spend a lot of time with our partners," said William Chopik, assistant professor of psychology and co-author of the study. "They might encourage us to exercise, eat healthier or remind us to take our medicine. When your partner is optimistic and healthy, it can translate to similar outcomes in your own life. You actually do experience a rosier future by living longer and staving off cognitive illnesses."
An optimistic partner may encourage eating a salad or working out together to develop healthier lifestyles. For example, if you quit smoking or start exercising, your partner is likely to follow suit within a few weeks or months.
"We found that when you look at the risk factors for what predicts things like Alzheimer's disease or dementia, a lot of them are things like living a healthy lifestyle," Chopik said. "Maintaining a healthy weight and physical activity are large predictors. There are some physiological markers as well. It looks like people who are married to optimists tend to score better on all of those metrics."
The study, published in the Journal of Personality and co-authored by MSU graduate student Jeewon Oh and Eric Kim, a research scientist in the Department of Social and Behavioral Sciences at the Harvard T.H. Chan School of Public Health, followed nearly 4,500 heterosexual couples from the Health and Retirement Study for up to eight years. The researchers found a potential link between being married to an optimistic person and preventing the onset of cognitive decline, thanks to a healthier environment at home.
"There's a sense where optimists lead by example, and their partners follow their lead," Chopik said. "While there's some research on people being jealous of their partner's good qualities or on having bad reactions to someone trying to control you, it is balanced with other research that shows being optimistic is associated with perceiving your relationship in a positive light."
The research also indicated that when couples recall shared experiences together, richer details from the memories emerge. A recent example, Chopik explained, was Google's tearjerker Super Bowl ad, "Loretta," in which an elderly man uses his Google Assistant to help him remember details about his late wife.
"The things he was recollecting were positive things about his partner," Chopik said. "There is science behind the Google ad. Part of the types of memories being recalled were positive aspects of their relationship and personalities."
With all of its benefits, is optimism something that can be prescribed? While there is a heritable component to optimism, Chopik says there is some evidence to suggest that it's a trainable quality.
"There are studies that show people have the power to change their personalities, as long as they engage in things that make them change," Chopik said. "Part of it is wanting to change. There are also intervention programs that suggest you can build up optimism."
Read more at Science Daily
Disease found in fossilized dinosaur tail afflicts humans to this day
The fossilized tail of a young dinosaur that lived on a prairie in southern Alberta, Canada, is home to the remains of a 60-million-year-old tumor.
Researchers at Tel Aviv University, led by Dr. Hila May of the Department of Anatomy and Anthropology at TAU's Sackler Faculty of Medicine and Dan David Center for Human Evolution and Biohistory Research, have identified this benign tumor as part of the pathology of LCH (Langerhans cell histiocytosis), a rare and sometimes painful disease that still afflicts humans, particularly children under the age of 10.
A study on the TAU discovery was published on February 10 in Scientific Reports. Prof. Bruce Rothschild of Indiana University, Prof. Frank Rühli of the University of Zurich and Mr. Darren Tanke of the Royal Museum of Paleontology also contributed to the research.
"Prof. Rothschild and Tanke spotted an unusual finding in the vertebrae of a tail of a young dinosaur of the grass-eating herbivore species, common in the world 66-80 million years ago," Dr. May explains. "There were large cavities in two of the vertebrae segments, which were unearthed at the Dinosaur Provincial Park in southern Alberta, Canada."
It was the specific shape of the cavities that attracted the attention of researchers.
"They were extremely similar to the cavities produced by tumors associated with the rare disease LCH that still exists today in humans," adds Dr. May. "Most of the LCH-related tumors, which can be very painful, suddenly appear in the bones of children aged 2-10 years. Thankfully, these tumors disappear without intervention in many cases."
The dinosaur tail vertebrae were sent for on-site advanced micro-CT scanning to the Shmunis Family Anthropology Institute at TAU's Dan David Center for Human Evolution and Biohistory Research, Sackler Faculty of Medicine, which is located at the Steinhardt Museum of Natural History.
"The micro-CT produces very high-resolution imaging, up to a few microns," Dr. May says. "We scanned the dinosaur vertebrae and created a computerized 3D reconstruction of the tumor and the blood vessels that fed it. The micro and macro analyses confirmed that it was, in fact, LCH. This is the first time this disease has been identified in a dinosaur."
According to Dr. May, the surprising findings indicate that the disease is not unique to humans, and that it has survived for more than 60 million years.
"These kinds of studies, which are now possible thanks to innovative technology, make an important and interesting contribution to evolutionary medicine, a relatively new field of research that investigates the development and behavior of diseases over time," notes Prof. Israel Hershkovitz of TAU's Department of Anatomy and Anthropology and Dan David Center for Human Evolution and Biohistory Research. "We are trying to understand why certain diseases survive evolution with an eye to deciphering what causes them in order to develop new and effective ways of treating them."
From Science Daily
Feb 10, 2020
New world map of fish genetic diversity
In a population of animals or plants, genetic diversity can decline much more quickly than species diversity in response to various stress factors: disease, changes to habitat or climate, and so on. Yet not much is known about fish genetic diversity around the world.
Help on that front is now on the way from an international team of scientists from French universities and ETH Zurich. They have produced the first global distribution map for genetic diversity among freshwater and marine fish. Furthermore, they identified the environmental factors that are instrumental in determining the distribution of genetic diversity. Their study was recently published in the journal Nature Communications.
Genetic diversity is unevenly distributed
To begin their study, the researchers analysed a database containing over 50,000 DNA sequences representing 3,815 species of marine fish and 1,611 species of freshwater fish. From this sequence data, the scientists estimated the average genetic diversity in sections of bodies of water, each section measuring 200 square kilometres.
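The aggregation step described above can be illustrated with a toy sketch: group sequences by geographic cell and average pairwise sequence differences within each cell. The cell IDs, sequences, and the simple mismatch metric here are all illustrative stand-ins, not the study's actual data or diversity estimator:

```python
# Toy sketch of per-cell genetic diversity: group DNA sequences by
# geographic section, then average pairwise per-site differences.
# Data and metric are illustrative, not from the study.
from itertools import combinations
from collections import defaultdict

def pairwise_diversity(seqs):
    """Mean fraction of differing sites over all pairs of equal-length sequences."""
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 0.0
    diffs = [sum(a != b for a, b in zip(s1, s2)) / len(s1) for s1, s2 in pairs]
    return sum(diffs) / len(diffs)

# hypothetical records: (grid cell ID, DNA sequence)
records = [
    ("cell_A", "ACGTACGT"),
    ("cell_A", "ACGTACGA"),
    ("cell_A", "ACGAACGT"),
    ("cell_B", "TTGTACGT"),
    ("cell_B", "TTGTACGT"),
]

by_cell = defaultdict(list)
for cell, seq in records:
    by_cell[cell].append(seq)

for cell, seqs in sorted(by_cell.items()):
    print(cell, round(pairwise_diversity(seqs), 3))  # cell_A 0.167, cell_B 0.0
```

Mapping each cell's value onto a world grid is then what produces a distribution map like the one the team published.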
Their analysis revealed that genetic diversity is unevenly distributed throughout marine and freshwater fish. The greatest genetic diversity was found among marine fish in the western Pacific Ocean, the northern Indian Ocean and the Caribbean. Among freshwater fish, genetic diversity was greatest in South America, but comparatively low in Europe.
In addition, the researchers determined that temperature is a key factor influencing genetic diversity among marine fish: as the temperature rises, so does diversity. By contrast, the key determinants of genetic diversity in freshwater fish were the complexity of their habitat structure and how their habitats have changed over time.
Impact on nature conservation strategies
The researchers see their study as a tool in efforts to improve conservation of genetic diversity and in turn biodiversity. Their map makes it easier to detect hotspots of species and genetic diversity and to plan appropriate protective action. Maintaining genetic diversity is crucial, say the researchers. "The more diverse a population's gene pool is, the higher the potential for adaptation to environmental changes," explains Loïc Pellissier, co-lead author of the study and Professor at ETH's Institute of Terrestrial Ecosystems.
Based on the findings, Pellissier predicts that fish populations will have potentially differing levels of adaptability in various areas of their range. "When setting up conservation areas, this characteristic has to be taken into account with respect to location, size and ecological connectivity," he says.
Protective measures thus far have concentrated primarily on maintaining species diversity. For example, several years ago Switzerland launched a programme for monitoring species diversity within its borders, but Pellissier believes this is not enough. "If we want to protect our biodiversity, we also have to monitor the genetic diversity of populations. This is the only way to ensure that the pool of varied genetic material is large enough to enable the survival of species under changing environmental conditions," he explains.
Read more at Science Daily
New research supports previous studies on global sea level rise
As a result of global warming, the world's oceans have risen by an average of around 3 mm a year since the early 1990s. But how much they have risen year on year has been a matter of some debate among experts, for instance in the UN's climate panel IPCC. Is the rise constant, or is it accelerating every year?
Now, in a new study, a Danish student has shown that the rise is accelerating. In other words, the oceans are rising faster every year. The new research supports previous studies and has been published in the scientific journal Advances in Space Research.
The calculations were done by Tadea Veng, who studies Earth and Space Physics and Engineering at DTU Space under the supervision of Professor Ole Baltazar Andersen.
"Using data from independent European satellites, our results show the same rate of acceleration in sea level rise used by the UN Climate Panel, which they based on data from American satellites," says Tadea Veng.
According to the new calculations, the average acceleration between 1991 and 2019 was 0.1 mm/year² (or to be more precise, 0.095 mm/year²). This means that if, for example, the oceans rose by 2 mm during the year 2000, they would rise by 3 mm during 2010, and by almost 4 mm around 2020.
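The example above is just a constant-acceleration rate model, and can be written out directly (the 2 mm/year starting rate in 2000 is the article's illustrative figure, not a measured value):

```python
# Annual sea level rise under constant acceleration, using the article's
# example numbers: rate 2 mm/year in 2000, acceleration 0.1 mm/year^2.
def rise_rate(year, base_year=2000, base_rate=2.0, accel=0.1):
    """Sea level rise rate (mm/year) in a given year."""
    return base_rate + accel * (year - base_year)

for y in (2000, 2010, 2020):
    print(y, rise_rate(y), "mm/year")
# 2000 -> 2.0, 2010 -> 3.0, 2020 -> 4.0 mm/year
```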
In December, 25-year-old Veng presented her research at the annual meeting of the American Geophysical Union, the world's largest conference on space and geophysics research.
The calculations are based on data from a number of European remote-sensing satellites in orbit around Earth (the European Space Agency's ERS1, ERS2, Envisat and Cryosat missions).
Tadea Veng compared her own results to calculations based on the US satellite data used in the climate change reports regularly published by the IPCC. (The US satellites are NASA's Topex/Poseidon, Jason-1, Jason-2 and Jason-3.) Based on data from the US satellites, the acceleration has been calculated as 0.084 mm/year².
But unlike the US satellites, European satellites also take measurements in the Arctic region. Therefore, the new research provides a more comprehensive picture of the global rise in sea levels.
"In recent years, there's been a good deal of debate about the acceleration due to inaccuracies in the satellite measurements from Topex/Poseidon, the oldest of the US satellites. That's why it's important that we now also have results using data from European satellites. Acceleration is an important factor in modelling future sea level rise," says Professor Ole Baltazar Andersen, who co-authored the article.
"Tadea has presented an important and very useful contribution to the research and to the UN Climate Panel. It's a solid piece of scientific work, which is why Advances in Space Research chose to publish it."
In other words, there is now no doubt that the world's oceans are rising, and that this has happened at an increasing rate over the past 30 years. Overall, the world's oceans are estimated to have risen by approximately 75 mm from 1991 to 2019.
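As a back-of-envelope consistency check, the quoted total of ~75 mm over 1991-2019 can be combined with the 0.095 mm/year² acceleration to recover the implied starting and ending rates (these implied rates are derived here, not stated in the article):

```python
# Consistency check: total rise = r0*T + 0.5*a*T^2, solved for the
# implied 1991 rate r0. Inputs are the article's quoted figures.
a = 0.095      # mm/year^2, acceleration (from the study)
total = 75.0   # mm, total rise 1991-2019 (from the article)
T = 28         # years

r0 = (total - 0.5 * a * T**2) / T   # implied rate in 1991
r_final = r0 + a * T                # implied rate in 2019
avg = total / T                     # average rate over the period

print(f"implied 1991 rate: {r0:.2f} mm/year")      # ~1.35
print(f"implied 2019 rate: {r_final:.2f} mm/year") # ~4.01
print(f"average rate: {avg:.2f} mm/year")          # ~2.68
```

The average of ~2.7 mm/year matches the "around 3 mm a year" figure at the top of the article, so the quoted numbers hang together.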
Satellites in orbit around Earth measure the distance to the oceans' surface over time and across very large areas using radar signals, among other things. This data can then be used, for example, to calculate the acceleration of sea level rise. The European Space Agency launched the CryoSat satellite in 2010. It measures sea levels and changes in ice cover in the Arctic and Antarctic.
Read more at Science Daily
The brain of migraine sufferers is hyper-excitable
Individuals who suffer from migraine headaches appear to have a hyper-excitable visual cortex, researchers at the Universities of Birmingham and Lancaster suggest.
Migraines are characterised as debilitating and persistent headaches, often accompanied by an increased sensitivity to visual or other sensory stimuli. The exact causes of these headaches are not well understood, although scientists believe they may be related to temporary changes in the chemicals, nerves, or blood vessels in the brain.
In a new study, published in the journal Neuroimage: Clinical, researchers set out to test a theory that at least part of the answer lies in the visual cortex -- the part of our brain that is responsible for vision.
Dr Terence Chun Yuen Fong, lead author on the study, explained: "Most migraineurs also report experiencing abnormal visual sensations in their everyday life, for example, elementary hallucinations, visual discomforts and extra light sensitivity. We believe this hints at a link between migraine experiences and abnormalities in the visual cortex. Our results provide the first evidence for this theory, by discovering a specific brain response pattern among migraineurs."
The study was carried out by researchers based in the Centre for Human Brain Health and School of Psychology at the University of Birmingham, and the Department of Psychology, Lancaster University. The team set up an experiment with a group of 60 volunteers, half of whom were 'migraineurs' -- regularly suffering from migraines. Participants were presented with a striped grating pattern, and asked to rate the pattern according to whether it was uncomfortable to look at, or any associated visual phenomena from viewing it.
In a further test, the participants underwent an electroencephalogram (EEG) test, in which the researchers were able to track and record brain wave patterns when the visual stimuli were presented.
In both tests, the researchers found a larger response in the visual cortex among the group of migraine sufferers when participants were presented with the gratings.
The study also took into account results from a subgroup of non-migraineurs -- participants who reported additional visual disturbances, a common feature of migraines. Surprisingly, it was found that these participants also showed hyperexcitability in the response of their visual cortex.
Dr Ali Mazaheri, the senior author on the paper, explains: "Our study provides evidence that there are likely specific anomalies present in the way the visual cortex of migraine sufferers processes information from the outside world. However, we suspect that is only part of the picture, since the same patterns of activity can also be seen in non-migraineurs who are sensitive to certain visual stimuli."
Read more at Science Daily
'Rule breaking' plants may be climate change survivors
Dr Annabel Smith, from UQ's School of Agriculture and Food Sciences, and Professor Yvonne Buckley, from UQ's School of Biological Sciences and Trinity College Dublin, Ireland, studied the humble plantain (Plantago lanceolata) to see how it became one of the world's most successfully distributed plant species.
"The plantain, a small plant native to Europe, has spread wildly across the globe -- we needed to know why it's been so incredibly successful, even in hot, dry climates," Dr Smith said.
The global team of 48 ecologists set up 53 monitoring sites in 21 countries, tagged thousands of individual plants, tracked plant deaths and new seedlings, counted flowers and seeds and looked at DNA to see how many individual plants have historically been introduced outside Europe.
What they discovered went against existing tenets of ecological science.
"We were a bit shocked to find that some of the 'rules of ecology' simply didn't apply to this species," Dr Smith said.
"Ecologists use different theories to understand how nature works -- developed and tested over decades with field research -- these are the so-called 'rules'.
"One of these theories describes how genetic diversity -- variation in the genes embedded in DNA -- is produced by changes in population size.
"Small populations tend to have little genetic diversity, while large populations with many offspring, such as those with lots of seeds, have more genetic diversity.
"Genetic diversity sounds boring, but actually it's the raw material on which evolution acts; more genetic diversity means plants are better able to adapt to environmental changes, like climate change.
"We discovered that, in their native range, the environment determined their levels of genetic diversity.
"But, in new environments, these rule breakers were adapting better than most other plants."
The team found the plantain's success was due to multiple introductions around the world.
Professor Buckley, who coordinates the global project from Trinity College Dublin, Ireland, said the DNA analysis revealed that ongoing introductions into Australia, New Zealand, North America, Japan and South Africa quickly boosted genetic diversity.
"It gave these 'expats' a higher capacity for adaptation," Professor Buckley said.
"In Europe, plantains played by the rules, but outside Europe they broke them: it didn't matter what kind of environment they were living in, the plantains almost always had high genetic diversity and high adaptability."
Dr Smith said the finding was both fascinating and critically important.
"It's important we now know that multiple introductions will mix genetic stock and make invasive plants more successful quite quickly -- an important finding given invasive species cause extinction and cost governments billions of dollars," she said.
Read more at Science Daily