In a world where the spread of non-native animal species is drastically disrupting whole ecosystems and causing economic harm and environmental change, it is becoming increasingly important to understand the features that allow these species to colonize new habitats.
A new study, published in Communications Biology, shows that the prolific comb jelly, a marine invertebrate invader from North America that now frequently washes up on Baltic shores, is able to expand its geographical range by using its own young as a nutrient store through long, nutrient-deprived winters.
Because comb jellies trace their lineage back to the beginning of all animal life, this work strengthens the view of cannibalism as a pervasive trait across the animal kingdom.
Mysterious success
With its translucent, gelatinous body, the comb jelly Mnemiopsis leidyi may not look like much, but its expansion from the east coasts of North and South America to Eurasian coastal waters has wreaked havoc on local environments.
Its success has remained something of a mystery, especially because, instead of storing resources before wintering, the species seemed to counterproductively invest in massive 'blooms' of offspring unable to survive long, nutrient-deprived winters.
It had been assumed that the jellies persisted thanks to a lack of native predators, but both this explanation and the best management strategies for this exotic species have remained hazy.
A handy floating reservoir of food
That was until an international team of researchers, including authors at the University of Southern Denmark and the Max Planck Institute for the Science of Human History, collected comb jellies throughout the year at the species' northernmost range in the Baltic Sea off northern Germany.
Lead author Jamileh Javidpour, assistant professor at the University of Southern Denmark, states: "We combined a study of the population dynamics of this species with experimental feeding and geochemical tracers to show, for the first time, that adult jellies were actually consuming the blooms of their own offspring."
This rather sinister realization makes perfect sense of the blooms' function. As a handy floating nutrient reservoir that lasted beyond the collapse of normal prey populations, the release of offspring provided adults with an additional 2-3 week window of growth, which, ecologically, can be the difference between life and death.
Tackling their colonization
"In some ways, the whole jelly population is acting as a single organism, with the younger groups supporting the adults through times of nutrient stress," says Thomas Larsen, a co-author of the study at the Max Planck Institute for the Science of Human History.
"Overall, it enables jellies to persist through extreme events and low food periods, colonizing further than climate systems and other conditions would usually allow," he continues.
The novel data produced by the team may allow conservationists and governments to better combat the spread of these jellies, which can disadvantage native species and devastate local fisheries. In their exotic ranges, the comb jellies have been particularly successful in seas affected by rapid warming, overfishing and excessive nutrient loads.
Tackling these problems could potentially reduce the food sources for these gelatinous invaders and restore the ecological balance of Eurasian seas.
The study also suggests that this jelly may become a problematic species in its native ranges, with possible rapid bloom-and-bust cycles under the right conditions.
Was cannibalism an early trait?
This study also speaks to wider questions of cannibalism in the animal kingdom. Cannibalism has been recorded among over 1,500 species, including humans, chimpanzees, squirrels, fish, and dragonfly larvae.
Although sometimes cannibalism occurs during periods of extreme shortage or disaster, it can also occur under regular conditions.
"Because comb jellies trace their ancestry back to the beginning of most animal life as we know it during the Cambrian Period, 525 Million Years Ago, it remains possible that it is a basic, unifying feature across the animal kingdom," Jamileh Javidpour concludes.
More research is certainly required to clarify the role of cannibalism among the earliest members of the animal kingdom and the evolutionary origins of cannibalism and the reasons why it is particularly prominent in aquatic ecosystems.
Read more at Science Daily
May 9, 2020
Inspired by cheetahs, researchers build fastest soft robots yet
Inspired by the biomechanics of cheetahs, researchers have developed a new type of soft robot that is capable of moving more quickly on solid surfaces or in the water than previous generations of soft robots. The new soft robots are also capable of grabbing objects delicately -- or with sufficient strength to lift heavy objects.
"Cheetahs are the fastest creatures on land, and they derive their speed and power from the flexing of their spines," says Jie Yin, an assistant professor of mechanical and aerospace engineering at North Carolina State University and corresponding author of a paper on the new soft robots.
"We were inspired by the cheetah to create a type of soft robot that has a spring-powered, 'bistable' spine, meaning that the robot has two stable states," Yin says. "We can switch between these stable states rapidly by pumping air into channels that line the soft, silicone robot. Switching between the two states releases a significant amount of energy, allowing the robot to quickly exert force against the ground. This enables the robot to gallop across the surface, meaning that its feet leave the ground.
"Previous soft robots were crawlers, remaining in contact with the ground at all times. This limits their speed."
The fastest soft robots until now could move at speeds of up to 0.8 body lengths per second on flat, solid surfaces. The new class of soft robots, called "Leveraging Elastic Instabilities for Amplified Performance" (LEAP) robots, can reach speeds of up to 2.7 body lengths per second -- more than three times faster -- at a low actuation frequency of about 3 Hz. These new robots are also capable of running up steep inclines, which can be challenging or impossible for soft robots that exert less force against the ground.
These "galloping" LEAP robots are approximately 7 centimeters long and weigh about 45 grams.
The researchers also demonstrated that the LEAP design could improve swimming speeds for soft robots. Fitted with a fin rather than feet, a LEAP robot was able to swim at a speed of 0.78 body lengths per second, compared with 0.7 body lengths per second for the previous fastest swimming soft robot.
"We also demonstrated the use of several soft robots working together, like pincers, to grab objects," Yin says. "By tuning the force exerted by the robots, we were able to lift objects as delicate as an egg, as well as objects weighing 10 kilograms or more."
The researchers note that this work serves as a proof of concept, and are optimistic that they can modify the design to make LEAP robots that are even faster and more powerful.
"Potential applications include search and rescue technologies, where speed is essential, and industrial manufacturing robotics," Yin says. "For example, imagine production line robotics that are faster, but still capable of handling fragile objects.
Read more at Science Daily
"Cheetahs are the fastest creatures on land, and they derive their speed and power from the flexing of their spines," says Jie Yin, an assistant professor of mechanical and aerospace engineering at North Carolina State University and corresponding author of a paper on the new soft robots.
"We were inspired by the cheetah to create a type of soft robot that has a spring-powered, 'bistable' spine, meaning that the robot has two stable states," Yin says. "We can switch between these stable states rapidly by pumping air into channels that line the soft, silicone robot. Switching between the two states releases a significant amount of energy, allowing the robot to quickly exert force against the ground. This enables the robot to gallop across the surface, meaning that its feet leave the ground.
"Previous soft robots were crawlers, remaining in contact with the ground at all times. This limits their speed."
The fastest soft robots until now could move at speeds of up to 0.8 body lengths per second on flat, solid surfaces. The new class of soft robots, which are called "Leveraging Elastic instabilities for Amplified Performance" (LEAP), are able to reach speeds of up to 2.7 body lengths per second -- more than three times faster -- at a low actuation frequency of about 3Hz. These new robots are also capable of running up steep inclines, which can be challenging or impossible for soft robots that exert less force against the ground.
These "galloping" LEAP robots are approximately 7 centimeters long and weigh about 45 grams.
The researchers also demonstrated that the LEAP design could improve swimming speeds for soft robots. Attaching a fin, rather than feet, a LEAP robot was able to swim at a speed of 0.78 body lengths per second, as compared to 0.7 body lengths per second for the previous fastest swimming soft robot.
"We also demonstrated the use of several soft robots working together, like pincers, to grab objects," Yin says. "By tuning the force exerted by the robots, we were able to lift objects as delicate as an egg, as well as objects weighing 10 kilograms or more."
The researchers note that this work serves as a proof of concept, and are optimistic that they can modify the design to make LEAP robots that are even faster and more powerful.
"Potential applications include search and rescue technologies, where speed is essential, and industrial manufacturing robotics," Yin says. "For example, imagine production line robotics that are faster, but still capable of handling fragile objects.
Read more at Science Daily
May 8, 2020
Neanderthals were choosy about making bone tools
Evidence continues to mount that the Neanderthals, who lived in Europe and Asia until about 40,000 years ago, were more sophisticated than once thought. A new study from UC Davis shows that Neanderthals chose to use bones from specific animals to make a tool for a specific purpose: working hides into leather.
Naomi Martisius, research associate in the Department of Anthropology, studied Neanderthal tools from sites in southern France for her doctoral research. The Neanderthals left behind a tool called a lissoir, a piece of animal rib with a smoothed tip used to rub animal hides to make them into leather. These lissoirs are often worn so smooth that it's impossible to tell which animal they came from just by looking at them.
Martisius and colleagues used highly sensitive mass spectrometry to look at residues of collagen protein from the bones. The method is called ZooMS, or Zooarchaeology by Mass Spectrometry. The technique breaks up samples into fragments that can be identified by their mass-to-charge ratio and used to reconstruct the original molecule.
Normally, this method would involve drilling a sample from the bone. To avoid damaging the precious specimens, Martisius and colleagues were able to lift samples from the plastic containers in which the bones had been stored and recover enough material to perform an analysis.
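As an illustration of the identification step only, the sketch below matches measured peptide masses against reference marker masses for candidate taxa. The marker values, taxon labels and tolerance are hypothetical placeholders, not the study's actual ZooMS reference data.

```python
# Illustrative sketch of the ZooMS identification idea: compare measured
# peptide masses (mass-to-charge values) against reference collagen markers.
# The marker masses below are hypothetical placeholders, NOT real ZooMS data.

REFERENCE_MARKERS = {
    "Bovinae (cattle/bison/aurochs)": [1427.7, 2131.1, 3093.5],
    "Cervidae (deer/reindeer)":       [1427.8, 2145.1, 3033.4],
}

TOLERANCE = 0.2  # allowed mass difference (placeholder value)


def score_taxon(measured_masses, marker_masses, tol=TOLERANCE):
    """Count how many reference markers are matched by the measured spectrum."""
    return sum(
        any(abs(m - marker) <= tol for m in measured_masses)
        for marker in marker_masses
    )


def identify(measured_masses):
    """Return the taxon whose markers best match the measured masses."""
    scores = {taxon: score_taxon(measured_masses, markers)
              for taxon, markers in REFERENCE_MARKERS.items()}
    return max(scores, key=scores.get), scores


if __name__ == "__main__":
    taxon, scores = identify([1427.65, 2131.15, 3093.48])
    print(scores)            # Bovinae matches 3 markers, Cervidae only 1
    print("Best match:", taxon)
```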
Favoring bovine ribs over deer
The results show that the bones used to make lissoirs mostly came from animals in the cattle family, such as bison or aurochs (a wild relative of modern cattle that is now extinct). But other animal bones from the same deposit show that reindeer were much more common and frequently hunted for food. So the Neanderthals were choosing to use only ribs from certain types of animals to make these tools.
"I think this shows that Neanderthals really knew what they were doing," Martisius said. "They were deliberately picking up these larger ribs when they happened to come across these animals while hunting and they may have even kept these rib tools for a long time, like we would with a favorite wrench or screwdriver."
Bovine ribs are bigger and more rigid than deer ribs, making them better suited for the hard work of rubbing skins without wearing out or breaking.
"Neanderthals knew that for a specific task, they needed a very particular tool. They found what worked best and sought it out when it was available," Martisius said.
From Science Daily
New technique delivers complete DNA sequences of chromosomes inherited from mother and father
An international team of scientists led by the University of Adelaide's Davies Research Centre has shown that it is possible to disentangle the DNA sequences of the chromosomes inherited from the mother and the father, to create true diploid genomes from a single individual.
In a report published in Nature Communications, drawing on research funded by the Davies Research Centre over the past 15 years, the researchers have shown that the genomes of two important modern-day cattle breeds, Angus (Bos taurus taurus) and Brahman (Bos taurus indicus), can be completely decoded from a single hybrid individual carrying the genetics of both breeds, using an innovative genome assembly strategy.
Although demonstrated in cattle, the approach is applicable to other species including humans.
Dr Lloyd Low, from the University of Adelaide's School of Animal and Veterinary Science, says the technique, called trio binning, gives the true genome sequence of each chromosome in an individual.
Obtaining a full genome from an organism that inherits half the chromosomes from the mother and the other half from the father is difficult due to the high similarity between the parental chromosomes.
"Back in 2018 we were able to demonstrate that with this method it was possible to identify large sections of the DNA from the parents. Now in 2020 we have used the same concept to create the sequence of full chromosomes," Lloyd said.
Professor John Williams added: "Disentangling maternal and paternal genomes is very difficult, but we have now been able to do this and create the best genome assemblies available for any livestock, and arguably any species."
"These high quality genome sequences will make it easier to more accurately study the genetics of cattle to improve production and welfare traits."
Brahman and Angus cattle subspecies were domesticated separately thousands of years ago and have been subjected to very different selection pressures since then: pests and humid environments in the case of Brahman cattle, and beef production in the case of Angus cattle. These different characteristics and histories are reflected in their genomes, which makes them ideal test subjects.
Indian breeds such as Brahman cattle are better able to regulate body temperature and are routinely crossed with European breeds such as Angus to produce cattle that are better adapted to tropical climates.
Considering the large differences in production and adaptation traits between taurine and indicine cattle, comparing the genomes helps us understand how the animals adapt to their environment, which is of substantial scientific and economic interest.
Professor Stefan Hiendleder said high-quality genomes of both cattle subspecies were needed to decipher the differences between taurine and indicine cattle.
"This technology will ultimately lead to breeding cattle which are more productive in harsh environments and also better suited from an animal welfare perspective," he said.
"Comparison between the Brahman and Angus revealed an indicus-specific extra copy of fatty acid enzyme which may be important for the regulation of the metabolism related to heat tolerance."
Read more at Science Daily
How does the brain link events to form a memory? Study reveals unexpected mental processes
A woman walking down the street hears a bang. Several moments later she discovers her boyfriend, who had been walking ahead of her, has been shot. A month later, the woman checks into the emergency room. The noises made by garbage trucks, she says, are causing panic attacks. Her brain had formed a deep, lasting connection between loud sounds and the devastating sight she witnessed.
This story, relayed by clinical psychiatrist and co-author of a new study Mohsin Ahmed, MD, PhD, is a striking example of the brain's powerful ability to remember and connect events separated in time. And now, in that new study in mice published today in Neuron, scientists at Columbia's Zuckerman Institute have shed light on how the brain can form such enduring links.
The scientists uncovered a surprising mechanism by which the hippocampus, a brain region critical for memory, builds bridges across time: by firing off bursts of activity that seem random, but in fact make up a complex pattern that, over time, helps the brain learn associations. By revealing the underlying circuitry behind associative learning, the findings lay the foundation for a better understanding of anxiety and trauma- and stressor-related disorders, such as panic and post-traumatic stress disorders, in which a seemingly neutral event can elicit a negative response.
"We know that the hippocampus is important in forms of learning that involve linking two events that happen even up to 10 to 30 seconds apart," said Attila Losonczy, MD, PhD, a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper's co-senior author. "This ability is a key to survival, but the mechanisms behind it have proven elusive. With today's study in mice, we have mapped the complex calculations the brain undertakes in order to link distinct events that are separated in time."
The hippocampus -- a small, seahorse-shaped region buried deep in the brain -- is an important headquarters for learning and memory. Previous experiments in mice showed that disruption to the hippocampus leaves the animals with trouble learning to associate two events separated by tens of seconds.
"The prevailing view has been that cells in the hippocampus keep up a level of persistent activity to associate such events," said Dr. Ahmed, an assistant professor of clinical psychiatry at Columbia's Vagelos College of Physicians and Surgeons, and co-first author of today's study. "Turning these cells off would thus disrupt learning."
To test this traditional view, the researchers imaged parts of the hippocampus of mice as the animals were exposed to two different stimuli: a neutral sound followed by a small but unpleasant puff of air. A fifteen-second delay separated the two events. The scientists repeated this experiment across several trials. Over time, the mice learned to associate the tone with the soon-to-follow puff of air. Using advanced two-photon microscopy and functional calcium imaging, they recorded the activity of thousands of neurons, a type of brain cell, in the animals' hippocampus simultaneously over the course of each trial for many days.
"With this approach, we could mimic, albeit in a simpler way, the process our own brains undergo when we learn to connect two events," said Dr. Losonczy, who is also a professor of neuroscience at Columbia's Vagelos College of Physicians and Surgeons.
To make sense of the information they collected, the researchers teamed up with computational neuroscientists who develop powerful mathematical tools to analyze vast amounts of experimental data.
"We expected to see repetitive, continuous neural activity that persisted during the fifteen-second gap, an indication of the hippocampus at work linking the auditory tone and the air puff," said computational neuroscientist Stefano Fusi, PhD, a principal investigator at Columbia's Zuckerman Institute and the paper's co-senior author. "But when we began to analyze the data, we saw no such activity."
Instead, the neural activity recorded during the fifteen-second time gap was sparse. Only a small number of neurons fired, and they did so seemingly at random. This sporadic activity looked distinctly different from the continuous activity that the brain displays during other learning and memory tasks, like memorizing a phone number.
"The activity appears to come in fits and bursts at intermittent and random time periods throughout the task," said James Priestley, a doctoral candidate co-mentored by Drs. Losonczy and Fusi at Columbia's Zuckerman Institute and the paper's co-first author. "To understand activity, we had to shift the way we analyzed data and use tools designed to make sense of random processes."
Ultimately, the researchers discovered a pattern in the randomness: a style of mental computing that seems to be a remarkably efficient way that neurons store information. Instead of communicating with each other constantly, the neurons save energy -- perhaps by encoding information in the connections between cells, called synapses, rather than through the electrical activity of the cells.
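To make the contrast concrete, here is a toy sketch, ours and not the study's analysis, with made-up numbers: it compares a "persistent" coding scheme with a "sparse, intermittent" one over a fifteen-second gap and counts how often cells are active in each case.

```python
# Toy contrast between "persistent" and "sparse" delay-period coding.
# All numbers are illustrative placeholders; this is not the study's analysis.

import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 1000        # neurons imaged (order of magnitude from the article)
FRAMES = 15 * 10      # fifteen-second gap sampled at 10 frames per second

# Persistent scheme: a dedicated subset of cells stays active the whole gap.
persistent = np.zeros((N_CELLS, FRAMES), dtype=bool)
persistent[:50, :] = True                       # 5% of cells always on

# Sparse scheme: each cell fires in rare, brief, seemingly random bursts.
sparse = rng.random((N_CELLS, FRAMES)) < 0.005  # ~0.5% chance per frame

for name, activity in [("persistent", persistent), ("sparse", sparse)]:
    active_per_frame = activity.sum(axis=0).mean()
    total_events = activity.sum()
    print(f"{name:10s}: ~{active_per_frame:.0f} cells active per frame, "
          f"{total_events} activation events over the gap")
# The sparse scheme involves far fewer activation events, consistent with the
# idea that such a memory bridge is metabolically cheaper.
```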
"We were happy to see that the brain doesn't maintain ongoing activity over all these seconds because, metabolically, that's not the most efficient way to store information," said Dr. Fusi, who is also a professor of neuroscience at Columbia's Vagelos College of Physicians and Surgeons. "The brain seems to have a more efficient way to build this bridge, which we suspect may involve changing the strength of the synapses."
In addition to helping to map the circuitry involved in associative learning, these findings also provide a starting point to more deeply explore disorders involving dysfunctions in associative memory, such as panic and post-traumatic stress disorder.
Read more at Science Daily
Telescopes and spacecraft join forces to probe deep into Jupiter's atmosphere
A team of researchers led by Michael Wong at the University of California, Berkeley, and including Amy Simon of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and Imke de Pater, also of UC Berkeley, is combining multiwavelength observations from Hubble and Gemini with close-up views from Juno's orbit about the monster planet, gaining new insights into turbulent weather on this distant world.
"We want to know how Jupiter's atmosphere works," said Wong. This is where the teamwork of Juno, Hubble and Gemini comes into play.
Radio 'Light Show'
Jupiter's constant storms are gigantic compared to those on Earth, with thunderheads reaching 40 miles from base to top -- five times taller than typical thunderheads on Earth -- and powerful lightning flashes up to three times more energetic than Earth's largest "superbolts."
Like lightning on Earth, Jupiter's lightning bolts act like radio transmitters, sending out radio waves as well as visible light when they flash across the sky.
Every 53 days, Juno races low over the storm systems detecting radio signals known as "sferics" and "whistlers," which can then be used to map lightning even on the day side of the planet or from deep clouds where flashes are not otherwise visible.
Coinciding with each pass, Hubble and Gemini watch from afar, capturing high-resolution global views of the planet that are key to interpreting Juno's close-up observations. "Juno's microwave radiometer probes deep into the planet's atmosphere by detecting high-frequency radio waves that can penetrate through the thick cloud layers. The data from Hubble and Gemini can tell us how thick the clouds are and how deep we are seeing into the clouds," Simon explained.
By mapping lightning flashes detected by Juno onto optical images captured of the planet by Hubble and thermal infrared images captured at the same time by Gemini, the research team has been able to show that lightning outbreaks are associated with a three-way combination of cloud structures: deep clouds made of water, large convective towers caused by upwelling of moist air -- essentially Jovian thunderheads -- and clear regions presumably caused by downwelling of drier air outside the convective towers.
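As a rough illustration of that mapping step, the sketch below tallies which cloud category each lightning detection falls into on a co-registered latitude-longitude grid. The grid, category labels and coordinates are invented placeholders, not mission data or the team's actual procedure.

```python
# Illustrative sketch: tally lightning detections against a co-registered
# cloud-classification map. Grid, labels and coordinates are placeholders.

from collections import Counter

# Hypothetical 4x6 cloud map (rows = latitude bins, cols = longitude bins),
# with each cell labelled from Hubble/Gemini-style imagery.
CLOUD_MAP = [
    ["clear", "deep_water", "tower", "tower", "clear", "deep_water"],
    ["deep_water", "tower", "tower", "clear", "deep_water", "clear"],
    ["clear", "clear", "deep_water", "tower", "tower", "deep_water"],
    ["deep_water", "clear", "clear", "deep_water", "tower", "tower"],
]

LAT_EDGES = [-20, -10, 0, 10, 20]              # degrees, placeholder bins
LON_EDGES = [0, 60, 120, 180, 240, 300, 360]


def bin_index(value, edges):
    """Index of the bin containing value, or None if out of range."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return None


def classify_strikes(strikes):
    """Map (lat, lon) lightning detections onto cloud categories."""
    counts = Counter()
    for lat, lon in strikes:
        r, c = bin_index(lat, LAT_EDGES), bin_index(lon, LON_EDGES)
        if r is not None and c is not None:
            counts[CLOUD_MAP[r][c]] += 1
    return counts


if __name__ == "__main__":
    detections = [(-15, 70), (5, 200), (12, 250), (-3, 130), (18, 330)]
    print(classify_strikes(detections))  # Counter({'tower': 4, 'deep_water': 1})
```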
The Hubble data show the height of the thick clouds in the convective towers, as well as the depth of deep water clouds. The Gemini data clearly reveal the clearings in the high-level clouds where it is possible to get a glimpse down to the deep water clouds.
Wong thinks that lightning is common in a type of turbulent area known as folded filamentary regions, which suggests that moist convection is occurring in them. "These cyclonic vortices could be internal energy smokestacks, helping release internal energy through convection," he said. "It doesn't happen everywhere, but something about these cyclones seems to facilitate convection."
The ability to correlate lightning with deep water clouds also gives researchers another tool for estimating the amount of water in Jupiter's atmosphere, which is important for understanding how Jupiter and the other gas and ice giants formed, and therefore how the solar system as a whole formed.
While much has been gleaned about Jupiter from previous space missions, many of the details -- including how much water is in the deep atmosphere, exactly how heat flows from the interior and what causes certain colors and patterns in the clouds -- remain a mystery. The combined result provides insight into the dynamics and three-dimensional structure of the atmosphere.
Seeing a 'Jack-O-Lantern' Red Spot
With Hubble and Gemini observing Jupiter more frequently during the Juno mission, scientists are also able to study short-term changes and short-lived features like those in the Great Red Spot.
Images from Juno as well as previous missions to Jupiter revealed dark features within the Great Red Spot that appear, disappear and change shape over time. It was not clear from individual images whether these are caused by some mysterious dark-colored material within the high cloud layer, or if they are instead holes in the high clouds -- windows into a deeper, darker layer below.
Now, with the ability to compare visible-light images from Hubble with thermal infrared images from Gemini captured within hours of each other, it is possible to answer the question. Regions that are dark in visible light are very bright in infrared, indicating that they are, in fact, holes in the cloud layer. In cloud-free regions, heat from Jupiter's interior that is emitted in the form of infrared light -- otherwise blocked by high-level clouds -- is free to escape into space and therefore appears bright in Gemini images.
"It's kind of like a jack-o-lantern," said Wong. "You see bright infrared light coming from cloud-free areas, but where there are clouds, it's really dark in the infrared."
Hubble and Gemini as Jovian Weather Trackers
The regular imaging of Jupiter by Hubble and Gemini in support of the Juno mission is proving valuable in studies of many other weather phenomena as well, including changes in wind patterns, characteristics of atmospheric waves and the circulation of various gases in the atmosphere.
Hubble and Gemini can monitor the planet as a whole, providing real-time base maps in multiple wavelengths for reference for Juno's measurements in the same way that Earth-observing weather satellites provide context for NOAA's high-flying Hurricane Hunters.
"Because we now routinely have these high-resolution views from a couple of different observatories and wavelengths, we are learning so much more about Jupiter's weather," explained Simon. "This is our equivalent of a weather satellite. We can finally start looking at weather cycles."
Because the Hubble and Gemini observations are so important for interpreting Juno data, Wong and his colleagues Simon and de Pater are making all of the processed data easily accessible to other researchers through the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute in Baltimore, Maryland.
"What's important is that we've managed to collect this huge data set that supports the Juno mission. There are so many applications of the data set that we may not even anticipate. So, we're going to enable other people to do science without that barrier of having to figure out on their own how to process the data," Wong said.
Read more at Science Daily
Labels: Atmosphere, Hubble, Jupiter, NASA, Science, Solar System, Storms
May 7, 2020
Fossil reveals evidence of 200-million-year-old 'squid' attack
Scientists have discovered the world's oldest known example of a squid-like creature attacking its prey, in a fossil dating back almost 200 million years.
The fossil was found on the Jurassic coast of southern England in the 19th century and is currently housed within the collections of the British Geological Survey in Nottingham.
In a new analysis, researchers say it appears to show a creature -- which they have identified as Clarkeiteuthis montefiorei -- with a herring-like fish (Dorsetichthys bechei) in its jaws.
They say the position of the arms, alongside the body of the fish, suggests this is not a fortuitous quirk of fossilization but that it is recording an actual palaeobiological event.
They also believe it dates from the Sinemurian period (between 190 and 199 million years ago), which would predate any previously recorded similar sample by more than 10 million years.
The research was led by the University of Plymouth, in conjunction with the University of Kansas and Dorset-based company, The Forge Fossils.
It has been accepted for publication in Proceedings of the Geologists' Association and will also be presented as part of Sharing Geoscience Online, a virtual alternative to the traditional General Assembly held annually by the European Geosciences Union (EGU).
Professor Malcolm Hart, Emeritus Professor in Plymouth and the study's lead author, said: "Since the 19th century, the Blue Lias and Charmouth Mudstone formations of the Dorset coast have provided large numbers of important body fossils that inform our knowledge of coleoid palaeontology. In many of these mudstones, specimens of palaeobiological significance have been found, especially those with the arms and hooks with which the living animals caught their prey.
"This, however, is a most unusual if not extraordinary fossil as predation events are only very occasionally found in the geological record. It points to a particularly violent attack which ultimately appears to have caused the death, and subsequent preservation, of both animals."
In their analysis, the researchers say the fossilised remains indicate a brutal incident in which the head bones of the fish were apparently crushed by its attacker.
They also suggest two potential hypotheses for how the two animals ultimately came to be preserved together for eternity.
Firstly, they suggest that the fish was too large for its attacker or became stuck in its jaws so that the pair -- already dead -- settled to the seafloor where they were preserved.
Alternatively, the Clarkeiteuthis took its prey to the seafloor in a display of 'distraction sinking' to avoid the possibility of being attacked by another predator. However, in doing so it entered waters low in oxygen and suffocated.
From Science Daily
Ancient Andes, analyzed
An international research team has conducted the first in-depth, wide-scale study of the genomic history of ancient civilizations in the central Andes mountains and coast before European contact.
The findings, published online May 7 in Cell, reveal early genetic distinctions between groups in nearby regions, population mixing within and beyond the Andes, surprising genetic continuity amid cultural upheaval, and ancestral cosmopolitanism among some of the region's most well-known ancient civilizations.
Led by researchers at Harvard Medical School and the University of California, Santa Cruz, the team analyzed genome-wide data from 89 individuals who lived between 500 and 9,000 years ago. Of these, 64 genomes, ranging from 500 to 4,500 years old, were newly sequenced -- more than doubling the number of ancient individuals with genome-wide data from South America.
The analysis included representatives of iconic civilizations in the Andes from whom no genome-wide data had been reported before, including the Moche, Nasca, Wari, Tiwanaku and Inca.
"This was a fascinating and unique project," said Nathan Nakatsuka, first author of the paper and an MD/PhD student in the lab of David Reich in the Blavatnik Institute at HMS.
"It represents the first detailed study of Andean population history informed by pre-Colonial genomes with wide-ranging temporal and geographic coverage," said Lars Fehren-Schmitz, associate professor at UC Santa Cruz and co-senior author of the paper with Reich.
"This study also takes a major step toward redressing the global imbalance in ancient DNA data," said Reich, professor of genetics at HMS and associate member of the Broad Institute of MIT and Harvard.
"The great majority of published ancient DNA studies to date have focused on western Eurasia," he said. "This study in South America allows us to begin to discern at high resolution the detailed history of human movements in this extraordinarily important part of the world."
Attention on the Andes
The central Andes, surrounding present-day Peru, is one of the few places in the world where farming was invented rather than being adopted from elsewhere and where the earliest presence of complex civilizations in South America has been documented so far. While the region has been a major focus of archaeological research, there had been no systematic characterization with genome-wide ancient DNA until now, the authors said.
Geneticists, including several of the current team members, previously studied the deep genetic history of South America as a whole, including analysis of several individuals from the Andean highlands from many thousands of years ago. There have also been analyses of present-day residents of the Andes and a limited number of mitochondrial or Y-chromosome DNA analyses from individual ancient Andean sites.
The new study, however, expands on these findings to provide a far more comprehensive portrait. Now, Nakatsuka said, researchers are "finally able to see how the genetic structure of the Andes evolved over time."
By focusing on what is often called pre-Columbian history, the study demonstrates how large ancient DNA studies can reveal more about ancient cultures than studying present-day groups alone, said Reich.
"In the Andes, reconstruction of population history based on DNA analysis of present-day people has been challenging because there has so been much demographic change since contact with Europeans," Reich explained. "With ancient DNA data, we can carry out a detailed reconstruction of movements of people and how those relate to changes known from the archaeological record."
'Extraordinary' ancient population structure
The analyses revealed that by 9,000 years ago, groups living in the Andean highlands became genetically distinct from those that eventually came to live along the Pacific coast. The effects of this early differentiation are still seen today.
The genetic fingerprints distinguishing people living in the highlands from those in nearby regions are "remarkably ancient," said Nakatsuka, who will receive his PhD in systems, synthetic and quantitative biology in May.
"It is extraordinary, given the small geographic distance," added Reich.
By 5,800 years ago, the population of the north also developed distinct genetic signatures from populations that became prevalent in the south, the team found. Again, these differences can be observed today.
After that time, gene flow occurred among all regions in the Andes, although it dramatically slowed after 2,000 years ago, the team found.
"It is exciting that we were actually able to determine relatively fine-grained population structure in the Andes, allowing us to differentiate between coastal, northern, southern and highland groups as well as individuals living in the Titicaca Basin," said Fehren-Schmitz.
"This is significant for the archaeology of the Andes and will now allow us to ask more specific questions with regards to local demographies and cultural networks," said study co-author Jose Capriles of Pennsylvania State University.
Genetic intermingling
The team discovered genetic exchanges both within the Andes and between Andean and non-Andean populations.
Ancient people moved between south Peru and the Argentine plains and between the north Peru coast and the Amazon, largely bypassing the highlands, the researchers found.
Fehren-Schmitz was especially interested to uncover signs of long-range mobility in the Inca period. Specifically, he was surprised to detect ancient North Coast ancestry not only around Cusco, Peru, but also in a child sacrifice from the Argentinian southern Andes.
"This could be seen as genetic evidence for relocations of individuals under Inca rule, a practice we know of from ethnohistorical, historical and archaeological sources," he said.
Although the findings of genetic intermingling throughout the Andes correlate with known archaeological connections, they will likely prompt additional archaeological research to understand the cultural contexts underlying the migrations, said Nakatsuka.
"Now we have more evidence demonstrating important migrations and some constraints on when they happened, but further work needs to be done to know why exactly these migrations occurred," he said.
Long-term continuity
The analyses revealed that multiple regions maintained genetic continuity over the past 2,000 years despite clear cultural transformations.
The finding contrasts with many other world regions, where ancient DNA studies often document substantial genetic turnover during this period, said Reich.
The population structures that arose early on persisted through major social changes and on into modern societies, the authors said. The discoveries offer new evidence that can be incorporated alongside archaeological and other records to inform theories on the ancient history of different groups in the region.
"To our surprise, we observed strong genetic continuity during the rise and fall of many of the large-scale Andean cultures, such as the Moche, Wari and Nasca," said Nakatsuka. "Our results suggest that the fall of these cultures was not due to massive migration into the region, e.g., from an invading military force, a scenario which had been documented in some other regions of the world."
Two exceptions to the continuity trend were the vast urban centers that the Tiwanaku and Inca cultures called home. Rather than being fairly genetically homogeneous, the capital regions of these civilizations were cosmopolitan, hosting people from many genetic backgrounds, the team found.
"It was interesting to start to see these glimpses of ancestral heterogeneity," said Nakatsuka. "These regions have some similarity to what we see now in places like New York City and other major cities where people of very different ancestries are living side by side."
Cooperative authorship
The study included authors from many disciplines and many countries, including Argentina, Australia, Bolivia, Chile, Germany, Peru, the United Kingdom and the United States.
"This is an impressive interdisciplinary but, just as importantly, international collaboration," said study co-author Bastien Llamas of the University of Adelaide. "All worked very closely to draft this manuscript under the leadership of Fehren-Schmitz and Reich."
It was important to team up with local scientists who belong to communities that descend from the individuals analyzed in the study, Fehren-Schmitz said, and to obtain permission from and continually engage with indigenous and other local groups as well as local governments.
The analysis of DNA from ancient individuals can have significant implications for present-day communities. One concerns the physical handling of the skeletal materials, which might be sensitive to the groups involved.
The work provided opportunities to heal past wounds. In one case, a sample from Cusco, previously housed in the U.S., was repatriated to Peru. Other remains that had long ago been taken improperly from burial sites were able to be carbon-dated and reburied.
In the absence of pre-Columbian written histories, archaeology has been the main source of information available to reconstruct the complex history of the continent, said study co-author Chiara Barbieri of the University of Zurich.
"With the study of ancient DNA, we can read the demographic history of ancient groups and understand how ancient and present-day groups are related," she said. "The link with the genetic study of living populations opens a direct dialogue with the past and an occasion to involve local communities."
The researchers sought to deeply involve communities with the help of archaeologists from each area, said Nakatsuka. Their efforts included giving public talks about the study and translating materials into Spanish.
Read more at Science Daily
The findings, published online May 7 in Cell, reveal early genetic distinctions between groups in nearby regions, population mixing within and beyond the Andes, surprising genetic continuity amid cultural upheaval, and ancestral cosmopolitanism among some of the region's most well-known ancient civilizations.
Led by researchers at Harvard Medical School and the University of California, Santa Cruz, the team analyzed genome-wide data from 89 individuals who lived between 500 and 9,000 years ago. Of these, 64 genomes, ranging from 500 to 4,500 years old, were newly sequenced -- more than doubling the number of ancient individuals with genome-wide data from South America.
The analysis included representatives of iconic civilizations in the Andes from whom no genome-wide data had been reported before, including the Moche, Nasca, Wari, Tiwanaku and Inca.
"This was a fascinating and unique project," said Nathan Nakatsuka, first author of the paper and an MD/PhD student in the lab of David Reich in the Blavatnik Institute at HMS.
"It represents the first detailed study of Andean population history informed by pre-Colonial genomes with wide-ranging temporal and geographic coverage," said Lars Fehren-Schmitz, associate professor at UC Santa Cruz and co-senior author of the paper with Reich.
"This study also takes a major step toward redressing the global imbalance in ancient DNA data," said Reich, professor of genetics at HMS and associate member of the Broad Institute of MIT and Harvard.
"The great majority of published ancient DNA studies to date have focused on western Eurasia," he said. "This study in South America allows us to begin to discern at high resolution the detailed history of human movements in this extraordinarily important part of the world."
Attention on the Andes
The central Andes, surrounding present-day Peru, is one of the few places in the world where farming was invented rather than being adopted from elsewhere and where the earliest presence of complex civilizations in South America has been documented so far. While the region has been a major focus of archaeological research, there had been no systematic characterization with genome-wide ancient DNA until now, the authors said.
Geneticists, including several of the current team members, previously studied the deep genetic history of South America as a whole, including analysis of several individuals from the Andean highlands from many thousands of years ago. There have also been analyses of present-day residents of the Andes and a limited number of mitochondrial or Y-chromosome DNA analyses from individual ancient Andean sites.
The new study, however, expands on these findings to provide a far more comprehensive portrait. Now, Nakatsuka said, researchers are "finally able to see how the genetic structure of the Andes evolved over time."
By focusing on what is often called pre-Columbian history, the study demonstrates how large ancient DNA studies can reveal more about ancient cultures than studying present-day groups alone, said Reich.
"In the Andes, reconstruction of population history based on DNA analysis of present-day people has been challenging because there has so been much demographic change since contact with Europeans," Reich explained. "With ancient DNA data, we can carry out a detailed reconstruction of movements of people and how those relate to changes known from the archaeological record."
'Extraordinary' ancient population structure
The analyses revealed that by 9,000 years ago, groups living in the Andean highlands became genetically distinct from those that eventually came to live along the Pacific coast. The effects of this early differentiation are still seen today.
The genetic fingerprints distinguishing people living in the highlands from those in nearby regions are "remarkably ancient," said Nakatsuka, who will receive his PhD in systems, synthetic and quantitative biology in May.
"It is extraordinary, given the small geographic distance," added Reich.
By 5,800 years ago, the population of the north also developed distinct genetic signatures from populations that became prevalent in the south, the team found. Again, these differences can be observed today.
After that time, gene flow occurred among all regions in the Andes, although it dramatically slowed after 2,000 years ago, the team found.
"It is exciting that we were actually able to determine relatively fine-grained population structure in the Andes, allowing us to differentiate between coastal, northern, southern and highland groups as well as individuals living in the Titicaca Basin," said Fehren-Schmitz.
"This is significant for the archaeology of the Andes and will now allow us to ask more specific questions with regards to local demographies and cultural networks," said study co-author Jose Capriles of Pennsylvania State University.
Genetic intermingling
The team discovered genetic exchanges both within the Andes and between Andean and non-Andean populations.
Ancient people moved between southern Peru and the Argentine plains, and between the north coast of Peru and the Amazon, largely bypassing the highlands, the researchers found.
Fehren-Schmitz was especially interested to uncover signs of long-range mobility in the Inca period. Specifically, he was surprised to detect ancient North Coast ancestry not only around Cusco, Peru, but also in a child sacrifice from the Argentinian southern Andes.
"This could be seen as genetic evidence for relocations of individuals under Inca rule, a practice we know of from ethnohistorical, historical and archaeological sources," he said.
Although the findings of genetic intermingling throughout the Andes correlate with known archaeological connections, they will likely prompt additional archaeological research to understand the cultural contexts underlying the migrations, said Nakatsuka.
"Now we have more evidence demonstrating important migrations and some constraints on when they happened, but further work needs to be done to know why exactly these migrations occurred," he said.
Long-term continuity
The analyses revealed that multiple regions maintained genetic continuity over the past 2,000 years despite clear cultural transformations.
The finding contrasts with many other world regions, where ancient DNA studies often document substantial genetic turnover during this period, said Reich.
The population structures that arose early on persisted through major social changes and on into modern societies, the authors said. The discoveries offer new evidence that can be incorporated alongside archaeological and other records to inform theories on the ancient history of different groups in the region.
"To our surprise, we observed strong genetic continuity during the rise and fall of many of the large-scale Andean cultures, such as the Moche, Wari and Nasca," said Nakatsuka. "Our results suggest that the fall of these cultures was not due to massive migration into the region, e.g., from an invading military force, a scenario which had been documented in some other regions of the world."
Two exceptions to the continuity trend were the vast urban centers that the Tiwanaku and Inca cultures called home. Rather than being fairly genetically homogeneous, the capital regions of these civilizations were cosmopolitan, hosting people from many genetic backgrounds, the team found.
"It was interesting to start to see these glimpses of ancestral heterogeneity," said Nakatsuka. "These regions have some similarity to what we see now in places like New York City and other major cities where people of very different ancestries are living side by side."
Cooperative authorship
The study included authors from many disciplines and many countries, including Argentina, Australia, Bolivia, Chile, Germany, Peru, the United Kingdom and the United States.
"This is an impressive interdisciplinary but, just as importantly, international collaboration," said study co-author Bastien Llamas of the University of Adelaide. "All worked very closely to draft this manuscript under the leadership of Fehren-Schmitz and Reich."
It was important to team up with local scientists who belong to communities that descend from the individuals analyzed in the study, Fehren-Schmitz said, and to obtain permission from and continually engage with indigenous and other local groups as well as local governments.
The analysis of DNA from ancient individuals can have significant implications for present-day communities. One such implication concerns the physical handling of the skeletal materials, which can be a sensitive matter for the groups involved.
The work provided opportunities to heal past wounds. In one case, a sample from Cusco, previously housed in the U.S., was repatriated to Peru. Other remains that had long ago been taken improperly from burial sites were able to be carbon-dated and reburied.
In the absence of pre-Columbian written histories, archaeology has been the main source of information available to reconstruct the complex history of the continent, said study co-author Chiara Barbieri of the University of Zurich.
"With the study of ancient DNA, we can read the demographic history of ancient groups and understand how ancient and present-day groups are related," she said. "The link with the genetic study of living populations opens a direct dialogue with the past and an occasion to involve local communities."
The researchers sought to deeply involve communities with the help of archaeologists from each area, said Nakatsuka. Their efforts included giving public talks about the study and translating materials into Spanish.
Read more at Science Daily
Middle age may be much more stressful now than in the '90s
If life feels more stressful now than it did a few decades ago, you're not alone. Even before the novel coronavirus started sweeping the globe, a new study finds, daily life was already more stressful than it was in the 1990s.
A team of researchers led by Penn State found that across all ages, there was a slight increase in daily stress in the 2010s compared to the 1990s. But when researchers restricted the sample to people between the ages of 45 and 64, there was a sharp increase in daily stress.
"On average, people reported about 2 percent more stressors in the 2010s compared to people in the past," said David M. Almeida, professor of human development and family studies at Penn State. "That's around an additional week of stress a year. But what really surprised us is that people at mid-life reported a lot more stressors, about 19 percent more stress in 2010 than in 1990. And that translates to 64 more days of stress a year."
Almeida said the findings were part of a larger project aiming to discover whether health during the middle of Americans' lives has been changing over time.
"Certainly, when you talk to people, they seem to think that daily life is more hectic and less certain these days," Almeida said. "And so we wanted to actually collect that data and run the analyses to test some of those ideas."
For the study, the researchers used data collected from 1,499 adults in 1995 and 782 different adults in 2012. Almeida said the goal was to study two cohorts of people who were the same age at the time the data was collected but born in different decades. All study participants were interviewed daily for eight consecutive days.
During each daily interview, the researchers asked the participants about their stressful experiences over the previous 24 hours -- for example, arguments with family or friends, or feeling overwhelmed at home or at work. The participants were also asked how severe their stress was and whether those stressors were likely to affect other areas of their lives.
"We were able to estimate not only how frequently people experienced stress, but also what those stressors mean to them," Almeida said. "For example, did this stress affect their finances or their plans for the future. And by having these two cohorts of people, we were able to compare daily stress processes in 1990 with daily stress processes in 2010."
After analyzing the data, the researchers found that participants reported significantly more daily stress and lower well-being in the 2010s compared to the 1990s. Additionally, participants reported a 27 percent increase in the belief that stress would affect their finances and a 17 percent increase in the belief that stress would affect their future plans.
Almeida said he was surprised not by the finding that people are more stressed now than in the '90s, but by the age group that was most affected.
"We thought that with the economic uncertainty, life might be more stressful for younger adults," Almeida said. "But we didn't see that. We saw more stress for people at mid-life. And maybe that's because they have children who are facing an uncertain job market while also responsible for their own parents. So it's this generational squeeze that's making stress more prevalent for people at mid-life."
Almeida said that while there used to be a stereotype about people experiencing a midlife crisis because of a fear of death and getting older, he suspects the study findings -- recently published in the journal American Psychologist -- suggest midlife distress may be due to different reasons.
"It may have to do with people at mid-life being responsible for a lot of people," Almeida said. "They're responsible for their children, oftentimes they're responsible for their parents, and they may also be responsible for employees at work. And with that responsibility comes more daily stress, and maybe that's happening more so now than in the past."
Additionally, Almeida said the added stress could partly reflect life "speeding up" because of technological advances. This could be particularly true during stressful times like the coronavirus pandemic, when tuning out the news can seem impossible.
"With people always on their smartphones, they have access to constant news and information that could be overwhelming," Almeida said.
Read more at Science Daily
Vitamin D levels appear to play role in COVID-19 mortality rates
A research team led by Northwestern University conducted a statistical analysis of data from hospitals and clinics across China, France, Germany, Italy, Iran, South Korea, Spain, Switzerland, the United Kingdom (UK) and the United States.
The researchers noted that patients from countries with high COVID-19 mortality rates, such as Italy, Spain and the UK, had lower levels of vitamin D compared to patients in countries that were not as severely affected.
This does not mean that everyone -- especially those without a known deficiency -- needs to start hoarding supplements, the researchers caution.
"While I think it is important for people to know that vitamin D deficiency might play a role in mortality, we don't need to push vitamin D on everybody," said Northwestern's Vadim Backman, who led the research. "This needs further study, and I hope our work will stimulate interest in this area. The data also may illuminate the mechanism of mortality, which, if proven, could lead to new therapeutic targets."
The research is available on medRxiv, a preprint server for health sciences.
Backman is the Walter Dill Scott Professor of Biomedical Engineering at Northwestern's McCormick School of Engineering. Ali Daneshkhah, a postdoctoral research associate in Backman's laboratory, is the paper's first author.
Backman and his team were inspired to examine vitamin D levels after noticing unexplained differences in COVID-19 mortality rates from country to country. Some people hypothesized that differences in healthcare quality, age distributions in the population, testing rates or different strains of the coronavirus might be responsible. But Backman remained skeptical.
"None of these factors appears to play a significant role," Backman said. "The healthcare system in northern Italy is one of the best in the world. Differences in mortality exist even if one looks across the same age group. And, while the restrictions on testing do indeed vary, the disparities in mortality still exist even when we looked at countries or populations for which similar testing rates apply.
"Instead, we saw a significant correlation with vitamin D deficiency," he said.
By analyzing publicly available patient data from around the globe, Backman and his team discovered a strong correlation between vitamin D levels and cytokine storm -- a hyperinflammatory condition caused by an overactive immune system -- as well as a correlation between vitamin D deficiency and mortality.
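The kind of country-level comparison described here can be pictured with a simple correlation calculation. The sketch below uses entirely hypothetical deficiency rates and case fatality rates, not the team's data, just to show the shape of the analysis.

```python
# Illustrative only: Pearson correlation between hypothetical country-level
# vitamin D deficiency rates and COVID-19 case fatality rates. The numbers
# below are placeholders, not the study's data.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

deficiency_rate = [0.28, 0.40, 0.35, 0.20, 0.15]   # hypothetical fractions deficient
case_fatality   = [0.09, 0.13, 0.11, 0.05, 0.04]   # hypothetical case fatality rates

print(round(pearson(deficiency_rate, case_fatality), 2))
```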
"Cytokine storm can severely damage lungs and lead to acute respiratory distress syndrome and death in patients," Daneshkhah said. "This is what seems to kill a majority of COVID-19 patients, not the destruction of the lungs by the virus itself. It is the complications from the misdirected fire from the immune system."
This is exactly where Backman believes vitamin D plays a major role. Not only does vitamin D enhance our innate immune systems, it also prevents our immune systems from becoming dangerously overactive. This means that having healthy levels of vitamin D could protect patients against severe complications, including death, from COVID-19.
"Our analysis shows that it might be as high as cutting the mortality rate in half," Backman said. "It will not prevent a patient from contracting the virus, but it may reduce complications and prevent death in those who are infected."
Backman said this correlation might help explain the many mysteries surrounding COVID-19, such as why children are less likely to die. Children do not yet have a fully developed acquired immune system, which is the immune system's second line of defense and more likely to overreact.
"Children primarily rely on their innate immune system," Backman said. "This may explain why their mortality rate is lower."
Backman is careful to note that people should not take excessive doses of vitamin D, which might come with negative side effects. He said the subject needs much more research to know how vitamin D could be used most effectively to protect against COVID-19 complications.
"It is hard to say which dose is most beneficial for COVID-19," Backman said. "However, it is clear that vitamin D deficiency is harmful, and it can be easily addressed with appropriate supplementation. This might be another key to helping protect vulnerable populations, such as African-American and elderly patients, who have a prevalence of vitamin D deficiency."
Read more at Science Daily
May 6, 2020
New ancient plant captures snapshot of evolution
In a brilliant dance, a cornucopia of flowers, pinecones and acorns connected by wind, rain, insects and animals ensure the reproductive future of seed plants. But before plants achieved these elaborate specializations for sex, they went through millions of years of evolution. Now, researchers have captured a glimpse of that evolutionary process with the discovery of a new ancient plant species.
The fossilized specimen likely belongs to the herbaceous barinophytes, an unusual extinct group of plants that may be related to clubmosses, and is one of the most comprehensive examples of a seemingly intermediate stage of plant reproductive biology. The new species, which is about 400 million years old and from the Early Devonian period, produced a spectrum of spore sizes -- a precursor to the specialized strategies of land plants that span the world's habitats. The research was published in Current Biology May 4.
"Usually when we see heterosporous plants appear in the fossil record, they just sort of pop into existence," said the study's senior author, Andrew Leslie, an assistant professor of geological sciences at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "We think this may be kind of a snapshot of this very rarely witnessed transition period in evolutionary history where you see high variation amongst spores in the reproductive structure."
A major shift
One of the most important time periods for the evolution of land plants, the Devonian witnessed diversification from small mosses to towering complex forests. The development of different spore sizes, or heterospory, represents a major modification to control reproduction -- a feature that later evolved into small and large versions of these reproductive units.
"Think of all the different types of sexual systems that are in flowers -- all of that is predicated on having separate small spores, or pollen, and big spores, which are inside the seeds," Leslie said. "With two discrete size classes, it's a more efficient way of packaging resources because the big spores can't move as easily as the little ones, but can better nourish offspring."
The earliest plants, from between 475 million to 400 million years ago, lacked reproductive specialization in the sense that they made the same types of spores, which would then grow into little plantlets that actually transferred reproductive cells. By partitioning reproductive resources, plants assumed more control over reproduction, according to the researchers.
The new species, together with the previously described plant group Chaleuria of the same age, represents the first evidence of more advanced reproductive biology in land plants. The next example doesn't appear in the fossil record until about 20 million years later.
"These kinds of fossils help us locate when and how exactly plants achieved that kind of partitioning of their reproductive resources," Leslie said. "The very end of that evolutionary history of specialization is something like a flower."
A fortuitous find
The researchers began analyses of the fossils after they had been stored in the collections at the Smithsonian National Museum of Natural History for decades. From about 30 small chips of rock originally excavated from the Campbellton Formation of New Brunswick, Canada, by late paleobotanist and study co-author Francis Hueber, they identified more than 80 reproductive structures, or sporangia. The spores themselves range from about 70 to 200 microns in diameter -- roughly the width of one to two strands of human hair. While some of the structures contained exclusively large or small spores, others held only intermediate-sized spores, and still others held the entire range of spore sizes -- possibly with some producing sperm and others eggs.
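One way to picture the classification of sporangia by their spore sizes is a simple size-class check like the sketch below; the thresholds and example diameters are illustrative assumptions, not the cut-offs used in the study.

```python
# A hypothetical way to classify a sporangium by the spread of its spore
# diameters (in microns). The thresholds below are illustrative assumptions,
# not values from the study.

SMALL_MAX = 110   # assumed upper bound for "small" spores (microns)
LARGE_MIN = 160   # assumed lower bound for "large" spores (microns)

def classify_sporangium(diameters):
    if all(d <= SMALL_MAX for d in diameters):
        return "small spores only"
    if all(d >= LARGE_MIN for d in diameters):
        return "large spores only"
    if all(SMALL_MAX < d < LARGE_MIN for d in diameters):
        return "intermediate spores only"
    return "mixed sizes"

print(classify_sporangium([75, 82, 90]))          # small spores only
print(classify_sporangium([165, 180, 200]))       # large spores only
print(classify_sporangium([70, 120, 150, 195]))   # mixed sizes
```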
"It's rare to get this many sporangia with well-preserved spores that you can measure," Leslie said. "We just kind of got lucky in how they were preserved."
Fossil and modern heterosporous plants primarily live in wetland environments, such as floodplains and swamps, where fertilization of large spores is most effective. The ancient species, which will be formally described in a follow-up paper, has a medley of spores that is not like anything living today, Leslie said.
Read more at Science Daily
More berries, apples and tea may have protective benefits against Alzheimer's
Older adults who consumed small amounts of flavonoid-rich foods, such as berries, apples and tea, were two to four times more likely to develop Alzheimer's disease and related dementias over 20 years compared with people whose intake was higher, according to a new study led by scientists at the Jean Mayer USDA Human Nutrition Research Center on Aging (USDA HNRCA) at Tufts University.
The epidemiological study of 2,800 people aged 50 and older examined the long-term relationship between eating foods containing flavonoids and risk of Alzheimer's disease (AD) and Alzheimer's disease and related dementias (ADRD). While many studies have looked at associations between nutrition and dementias over short periods of time, the study published today in the American Journal of Clinical Nutrition looked at exposure over 20 years.
Flavonoids are natural substances found in plants, including fruits and vegetables such as pears, apples, berries, onions, and plant-based beverages like tea and wine. Flavonoids are associated with various health benefits, including reduced inflammation. Dark chocolate is another source of flavonoids.
The research team determined that low intake of three flavonoid types was linked to higher risk of dementia when compared to the highest intake. Specifically:
- Low intake of flavonols (apples, pears and tea) was associated with twice the risk of developing ADRD.
- Low intake of anthocyanins (blueberries, strawberries and red wine) was associated with a four-fold risk of developing ADRD.
- Low intake of flavonoid polymers (apples, pears and tea) was associated with twice the risk of developing ADRD.
The results were similar for AD.
"Our study gives us a picture of how diet over time might be related to a person's cognitive decline, as we were able to look at flavonoid intake over many years prior to participants' dementia diagnoses," said Paul Jacques, senior author and nutritional epidemiologist at the USDA HNRCA. "With no effective drugs currently available for the treatment of Alzheimer's disease, preventing disease through a healthy diet is an important consideration."
The researchers analyzed six types of flavonoids and compared long-term intake levels with the number of AD and ADRD diagnoses later in life. They found that low intake (15th percentile or lower) of three flavonoid types was linked to higher risk of dementia when compared to the highest intake (greater than 60th percentile). Examples of the levels studied included:
- Low intake (15th percentile or lower) was equal to no berries (anthocyanins) per month, roughly one-and-a-half apples per month (flavonols), and no tea (flavonoid polymers).
- High intake (60th percentile or higher) was equal to roughly 7.5 cups of blueberries or strawberries (anthocyanins) per month, 8 apples and pears per month (flavonols), and 19 cups of tea per month (flavonoid polymers).
"Tea, specifically green tea, and berries are good sources of flavonoids," said first author Esra Shishtar, who at the time of the study was a doctoral student at the Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy at Tufts University in the Nutritional Epidemiology Program at the USDA HNRCA. "When we look at the study results, we see that the people who may benefit the most from consuming more flavonoids are people at the lowest levels of intake, and it doesn't take much to improve levels. A cup of tea a day or some berries two or three times a week would be adequate," she said.
Jacques also said 50, the approximate age at which data was first analyzed for participants, is not too late to make positive dietary changes. "The risk of dementia really starts to increase over age 70, and the take home message is, when you are approaching 50 or just beyond, you should start thinking about a healthier diet if you haven't already," he said.
Methodology
To measure long-term flavonoid intake, the research team used dietary questionnaires, filled out at medical exams approximately every four years by participants in the Framingham Heart Study, a largely Caucasian group of people who have been studied over several generations for risk factors of heart disease.
To increase the likelihood that dietary information was accurate, the researchers excluded questionnaires from the years leading up to the dementia diagnosis, based on the assumption that, as cognitive status declined, dietary behavior may have changed, and food questionnaires were more likely to be inaccurate.
The participants were from the Offspring Cohort (children of the original participants), and the data came from exams 5 through 9. At the start of the study, the participants were free of AD and ADRD, with a valid food frequency questionnaire at baseline. Flavonoid intakes were updated at each exam to represent cumulative average intake across the five exam cycles.
Researchers categorized flavonoids into six types and created four intake levels based on percentiles: less than or equal to the 15th percentile, 15th-30th percentile, 30th-60th percentile, and greater than 60th percentile. They then compared flavonoid intake types and levels with new diagnoses of AD and ADRD.
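A minimal sketch of the percentile binning described above might look like the following; the intake values are hypothetical, and the nearest-rank percentile convention is one simple choice among several, not necessarily the one the researchers used.

```python
# Sketch of binning cumulative average intakes for one flavonoid type into the
# four percentile categories described above. Intake values are hypothetical.

def percentile(sorted_values, p):
    """Nearest-rank percentile (one simple convention among several)."""
    k = max(0, int(round(p / 100 * len(sorted_values))) - 1)
    return sorted_values[k]

def intake_category(value, values):
    s = sorted(values)
    p15, p30, p60 = (percentile(s, p) for p in (15, 30, 60))
    if value <= p15:
        return "<=15th percentile (lowest intake)"
    if value <= p30:
        return "15th-30th percentile"
    if value <= p60:
        return "30th-60th percentile"
    return ">60th percentile (highest intake)"

intakes = [0, 2, 3, 5, 8, 10, 12, 15, 19, 25]  # hypothetical monthly servings
print(intake_category(2, intakes))    # lowest-intake group
print(intake_category(19, intakes))   # highest-intake group
```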
Read more at Science Daily
Mutations in SARS-CoV-2 offer insights into virus evolution
The study, led by the UCL Genetics Institute, identified close to 200 recurrent genetic mutations in the virus, highlighting how it may be adapting and evolving to its human hosts.
Researchers found that a large proportion of the global genetic diversity of SARS-CoV-2 is present in all of the hardest-hit countries, suggesting extensive global transmission from early in the epidemic and the absence of a single 'Patient Zero' in most countries.
The findings, published today in Infection, Genetics and Evolution, further establish that the virus emerged only recently, in late 2019, before quickly spreading across the globe. The scientists analysed the emergence of genomic diversity in SARS-CoV-2, the new coronavirus that causes Covid-19, by screening the genomes of more than 7,500 viruses from infected patients around the globe. They identified 198 mutations that appear to have occurred independently more than once, which may hold clues to how the virus is adapting.
Co-lead author Professor Francois Balloux (UCL Genetics Institute) said: "All viruses naturally mutate. Mutations in themselves are not a bad thing and there is nothing to suggest SARS-CoV-2 is mutating faster or slower than expected. So far we cannot say whether SARS-CoV-2 is becoming more or less lethal and contagious."
The small genetic changes, or mutations, identified were not evenly distributed across the virus genome. As some parts of the genome had very few mutations, the researchers say those invariant parts of the virus could be better targets for drug and vaccine development.
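The general idea of flagging repeatedly mutated versus invariant genome positions can be sketched as below. The study inferred independent recurrences on a phylogenetic tree; this toy example simply counts how many of a handful of made-up sequences differ from a reference at each position.

```python
# Illustrative sketch: tally substitutions per aligned position against a
# reference sequence and flag positions that change repeatedly versus those
# that never change. Toy sequences, not real SARS-CoV-2 genomes.

reference = "ATGCTAGCTA"
genomes = [
    "ATGCTAGCTA",
    "ATGATAGCTA",   # substitution at position 3
    "ATGATAGCTT",   # substitutions at positions 3 and 9
    "ATGCTAGCTT",   # substitution at position 9
]

counts = [0] * len(reference)
for g in genomes:
    for i, (ref_base, base) in enumerate(zip(reference, g)):
        if base != ref_base:
            counts[i] += 1

recurrent = [i for i, c in enumerate(counts) if c >= 2]   # mutated in 2+ genomes
invariant = [i for i, c in enumerate(counts) if c == 0]   # never mutated

print("recurrent positions:", recurrent)   # [3, 9]
print("invariant positions:", invariant)   # candidate stable drug/vaccine targets
```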
"A major challenge to defeating viruses is that a vaccine or drug might no longer be effective if the virus has mutated. If we focus our efforts on parts of the virus that are less likely to mutate, we have a better chance of developing drugs that will be effective in the long run," Professor Balloux explained.
"We need to develop drugs and vaccines that cannot be easily evaded by the virus."
Co-lead author Dr Lucy van Dorp (UCL Genetics Institute) added: "There are still very few genetic differences or mutations between viruses. We found that some of these differences have occurred multiple times, independently of one another during the course of the pandemic -- we need to continue to monitor these as more genomes become available and conduct research to understand exactly what they do."
The results add to a growing body of evidence that SARS-CoV-2 viruses share a common ancestor from late 2019, suggesting that this was when the virus jumped from a previous animal host into people. This means it is very unlikely that the virus causing Covid-19 was circulating in humans for long before it was first detected.
In many countries, including the UK, the diversity of viruses sampled was almost as great as that seen across the whole world, meaning the virus entered the UK numerous times independently, rather than via any one index case.
The research team have developed a new interactive, open-source online application so that researchers across the globe can also review the virus genomes and apply similar approaches to better understand its evolution.
Dr van Dorp said: "Being able to analyse such an extraordinary number of virus genomes within the first few months of the pandemic could be invaluable to drug development efforts, and showcases how far genomic research has come even within the last decade. We are all benefiting from a tremendous effort by hundreds of researchers globally who have been sequencing virus genomes and making them available online."
Read more at Science Daily
ESO instrument finds closest black hole to Earth
"We were totally surprised when we realised that this is the first stellar system with a black hole that can be seen with the unaided eye," says Petr Hadrava, Emeritus Scientist at the Academy of Sciences of the Czech Republic in Prague and co-author of the research. Located in the constellation of Telescopium, the system is so close to us that its stars can be viewed from the southern hemisphere on a dark, clear night without binoculars or a telescope. "This system contains the nearest black hole to Earth that we know of," says ESO scientist Thomas Rivinius, who led the study published today in Astronomy & Astrophysics.
The team originally observed the system, called HR 6819, as part of a study of double-star systems. However, as they analysed their observations, they were stunned when they revealed a third, previously undiscovered body in HR 6819: a black hole. The observations with the FEROS spectrograph on the MPG/ESO 2.2-metre telescope at La Silla showed that one of the two visible stars orbits an unseen object every 40 days, while the second star is at a large distance from this inner pair.
Dietrich Baade, Emeritus Astronomer at ESO in Garching and co-author of the study, says: "The observations needed to determine the period of 40 days had to be spread over several months. This was only possible thanks to ESO's pioneering service-observing scheme under which observations are made by ESO staff on behalf of the scientists needing them."
The hidden black hole in HR 6819 is one of the very first stellar-mass black holes found that do not interact violently with their environment and, therefore, appear truly black. But the team could spot its presence and calculate its mass by studying the orbit of the star in the inner pair. "An invisible object with a mass at least 4 times that of the Sun can only be a black hole," concludes Rivinius, who is based in Chile.
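The "at least 4 times the mass of the Sun" argument follows from the binary mass function of the 40-day spectroscopic orbit. The sketch below shows that calculation with an assumed radial-velocity amplitude and an assumed mass for the visible star; these illustrative inputs yield a minimum companion mass of a few solar masses, but they are not the published measurements.

```python
# Hedged illustration: estimating an unseen companion's minimum mass from a
# spectroscopic orbit via the binary mass function
#   f(M) = P * K^3 / (2*pi*G) = M2^3 * sin(i)^3 / (M1 + M2)^2.
# The 40-day period is from the article; K and M1 below are assumed
# illustrative values, not the published ones.

from math import pi

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
P     = 40.0 * 86400     # orbital period in seconds (40 days, from the article)
K     = 60e3             # assumed RV semi-amplitude of the visible star, m/s
M1    = 6.0 * M_SUN      # assumed mass of the visible star, kg

f = P * K**3 / (2 * pi * G)          # mass function, in kg

# Minimum companion mass: set sin(i) = 1 and solve M2^3 / (M1 + M2)^2 = f
# by bisection (the left-hand side increases monotonically with M2).
lo, hi = 0.1 * M_SUN, 50 * M_SUN
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mid**3 / (M1 + mid)**2 < f:
        lo = mid
    else:
        hi = mid

print(f"mass function ~ {f / M_SUN:.2f} Msun")
print(f"minimum companion mass ~ {lo / M_SUN:.1f} Msun")
```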
Astronomers have spotted only a couple of dozen black holes in our galaxy to date, nearly all of which strongly interact with their environment and make their presence known by releasing powerful X-rays in this interaction. But scientists estimate that, over the Milky Way's lifetime, many more stars collapsed into black holes as they ended their lives. The discovery of a silent, invisible black hole in HR 6819 provides clues about where the many hidden black holes in the Milky Way might be. "There must be hundreds of millions of black holes out there, but we know about only very few. Knowing what to look for should put us in a better position to find them," says Rivinius. Baade adds that finding a black hole in a triple system so close by indicates that we are seeing just "the tip of an exciting iceberg."
Already, astronomers believe their discovery could shine some light on a second system. "We realised that another system, called LB-1, may also be such a triple, though we'd need more observations to say for sure," says Marianne Heida, a postdoctoral fellow at ESO and co-author of the paper. "LB-1 is a bit further away from Earth but still pretty close in astronomical terms, so that means that probably many more of these systems exist. By finding and studying them we can learn a lot about the formation and evolution of those rare stars that begin their lives with more than about 8 times the mass of the Sun and end them in a supernova explosion that leaves behind a black hole."
The discoveries of these triple systems with an inner pair and a distant star could also provide clues about the violent cosmic mergers that release gravitational waves powerful enough to be detected on Earth. Some astronomers believe that the mergers can happen in systems with a similar configuration to HR 6819 or LB-1, but where the inner pair is made up of two black holes or of a black hole and a neutron star. The distant outer object can gravitationally impact the inner pair in such a way that it triggers a merger and the release of gravitational waves. Although HR 6819 and LB-1 have only one black hole and no neutron stars, these systems could help scientists understand how stellar collisions can happen in triple star systems.
Read more at Science Daily
May 5, 2020
Exoplanets: How we'll search for signs of life
Whether there is life elsewhere in the universe is a question people have pondered for millennia. Within the last few decades, great strides have been made in the search for signs of life outside our solar system.
NASA missions like the space telescope Kepler have helped us document thousands of exoplanets -- planets that orbit around other stars. And current NASA missions like Transiting Exoplanet Survey Satellite (TESS) are expected to vastly increase the current number of known exoplanets. It is expected that dozens will be Earth-sized rocky planets orbiting in their stars' habitable zones, at distances where water could exist as a liquid on their surfaces. These are promising places to look for life.
That search will be advanced by missions like the soon-to-be-launched James Webb Space Telescope, which will complement and extend the discoveries of the Hubble Space Telescope by observing at infrared wavelengths. Expected to launch in 2021, it will allow scientists to determine whether rocky exoplanets have oxygen in their atmospheres. Oxygen in Earth's atmosphere is produced by photosynthesis in microbes and plants, so to the extent that exoplanets resemble Earth, oxygen in their atmospheres may also be a sign of life.
Not all exoplanets will be Earth-like, though. Some will be, but others will differ enough from Earth that oxygen in their atmospheres doesn't necessarily come from life. So with all of these current and future exoplanets to study, how do scientists narrow down the field to those for which oxygen is most indicative of life?
To answer this question, an interdisciplinary team of researchers, led by Arizona State University (ASU), has provided a framework, called a "detectability index," which may help prioritize exoplanets that warrant additional study. The details of this index were recently published in The Astrophysical Journal, a publication of the American Astronomical Society.
"The goal of the index is to provide scientists with a tool to select the very best targets for observation and to maximize the chances of detecting life," says lead author Donald Glaser of ASU's School of Molecular Sciences.
The oxygen detectability index for a planet like Earth is high, meaning that oxygen in Earth's atmosphere is definitely due to life and nothing else. Seeing oxygen means life. A surprising finding by the team is that the detectability index plummets for exoplanets not-too-different from Earth.
Although Earth's surface is largely covered in water, Earth's oceans are only a small percentage (0.025%) of Earth's mass. By comparison, moons in the outer solar system are typically close to 50% water ice.
"It's easy to imagine that in another solar system like ours, an Earth-like planet could be just 0.2% water," says co-author Steven Desch of ASU's School of Earth and Space Exploration. "And that would be enough to change the detectability index. Oxygen would not be indicative of life on such planets, even if it were observed. That's because an Earth-like planet that was 0.2% water -- about eight times what Earth has -- would have no exposed continents or land."
Without land, rain would not weather rock and release important nutrients like phosphorus. Photosynthetic life could not produce oxygen at rates comparable to other non-biological sources.
"The detectability index tells us it's not enough to observe oxygen in an exoplanet's atmosphere. We must also observe oceans and land," says Desch. "That changes how we approach the search for life on exoplanets. It helps us interpret observations we've made of exoplanets. It helps us pick the best target exoplanets to look for life on. And it helps us design the next generation of space telescopes so that we get all the information we need to make a positive identification of life."
Scientists from diverse fields were brought together to create this index. The formation of the team was facilitated by NASA's Nexus for Exoplanetary System Science (NExSS) program, which funds interdisciplinary research to develop strategies for looking for life on exoplanets. Their disciplines include theoretical and observational astrophysics, geophysics, geochemistry, astrobiology, oceanography, and ecology.
"This kind of research needs diverse teams, we can't do it as individual scientists" says co-author Hilairy Hartnett who holds joint appointments at ASU's School of Earth and Space Exploration and School of Molecular Sciences.
In addition to lead author Glaser and co-authors Hartnett and Desch, the team includes co-authors Cayman Unterborn, Ariel Anbar, Steffen Buessecker, Theresa Fisher, Steven Glaser, Susanne Neuer, Camerian Millsaps, Joseph O'Rourke, Sara Imari Walker, and Mikhail Zolotov, who collectively represent ASU's School of Molecular Sciences, School of Earth and Space Exploration, and School of Life Sciences. Additional scientists on the team include researchers from the University of California, Riverside, Johns Hopkins University and the University of Porto (Portugal).
Read more at Science Daily
Genetic study ties higher alcohol consumption to increased stroke and PAD risk
Higher alcohol consumption was shown to be associated with an increased risk of having a stroke or developing peripheral artery disease, according to new research published today in Circulation: Genomic and Precision Medicine, an American Heart Association journal.
While observational studies have consistently shown that heavy alcohol consumption is associated with an increased risk of certain cardiovascular diseases, they often rely on self-reported data and cannot establish cause. Researchers in this study instead used a technique called Mendelian randomization, which uses genetic variants with a known association to a potential risk factor to estimate how strongly that factor influences disease risk.
"Since genetic variants are determined at conception and cannot be affected by subsequent environmental factors, this technique allows us to better determine whether a risk factor -- in this case, heavy alcohol consumption -- is the cause of a disease, or if it is simply associated," said Susanna Larsson, Ph.D., senior researcher and associate professor of cardiovascular and nutritional epidemiology at Karolinska Institutet in Stockholm, Sweden. "To our knowledge, this is the first Mendelian randomization study on alcohol consumption and several cardiovascular diseases."
Researchers analyzed the genetic data from several large-scale consortia and the UK Biobank, which follows the health and well-being of 500,000 United Kingdom residents. Results indicate that higher alcohol consumption was associated with:
- a three-fold increase in the risk of peripheral artery disease, a narrowing of the arteries that results in reduced blood flow, usually to the legs;
- a 27% increase in stroke incidence; and
- some evidence for a positive association with coronary artery disease, atrial fibrillation and aortic aneurysm.
"Higher alcohol consumption is a known cause of death and disability, yet it was previously unclear if alcohol consumption is also a cause of cardiovascular disease. Considering that many people consume alcohol regularly, it is important to disentangle any risks or benefits," Larsson said.
Researchers noted that this study suggested the mechanism by which higher consumption was associated with the risk of stroke and PAD may be blood pressure.
According to a statement on dietary health, the American Heart Association believes that alcohol intake can be a component of a healthy diet if consumed in moderation (no more than one alcoholic drink per day for women and two alcoholic drinks per day for men) and only by nonpregnant women and adults when there is no risk to existing health conditions, medication-alcohol interaction, or personal safety and work situations. One drink is equivalent to 12 ounces of beer (5% alcohol); 5 ounces of wine (12% alcohol); or 1.5 ounces of 80-proof distilled spirits (40% alcohol).
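Those equivalences line up because each serving contains roughly the same amount of pure alcohol, about 0.6 fluid ounces (around 14 grams). A quick back-of-the-envelope check, written here in Python and not taken from the article:

# Quick arithmetic check (not from the article) that the three drink
# equivalents each contain roughly the same amount of pure alcohol.
ETHANOL_DENSITY_G_PER_ML = 0.789   # approximate density of ethanol
ML_PER_FL_OZ = 29.5735

drinks = {
    "12 oz beer (5%)": (12.0, 0.05),
    "5 oz wine (12%)": (5.0, 0.12),
    "1.5 oz spirits (40%)": (1.5, 0.40),
}

for name, (volume_oz, abv) in drinks.items():
    ethanol_oz = volume_oz * abv
    ethanol_g = ethanol_oz * ML_PER_FL_OZ * ETHANOL_DENSITY_G_PER_ML
    print(f"{name}: {ethanol_oz:.2f} fl oz (~{ethanol_g:.0f} g) pure alcohol")
# Each works out to ~0.6 fl oz, i.e. roughly 14 g of ethanol per standard drink.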
Read more at Science Daily
Evidence that human brains replay our waking experiences while we sleep
When we fall asleep, our brains are not merely offline; they're busy organizing new memories -- and now, scientists have gotten a glimpse of the process. Researchers report in the journal Cell Reports on May 5 the first direct evidence that human brains replay waking experiences while asleep, seen in two participants with intracortical microelectrode arrays placed in their brains as part of a brain-computer interface pilot clinical trial.
During sleep, the brain replays neural firing patterns experienced while awake, also known as "offline replay." Replay is thought to underlie memory consolidation, the process by which recent memories acquire more permanence in their neural representation. Scientists have previously observed replay in animals, but the study led by Jean-Baptiste Eichenlaub of Massachusetts General Hospital and Beata Jarosiewicz, formerly Research Assistant Professor at BrainGate, and now Senior Research Scientist at NeuroPace, tested whether the phenomenon happens in human brains as well.
The team asked the two participants to take a nap before and after playing a sequence-copying game, which is similar to the 80s hit game Simon. The video game had four color panels that lit up in different sequences for the players to repeat. But instead of moving their arms, the participants played the game with their minds -- imagining moving the cursor with their hands to different targets one by one, hitting the correct colors in the correct order as quickly as possible. While the participants rested, played the game, and then rested again, the researchers recorded the spiking activity of large groups of individual neurons in their brains through an implanted multi-electrode array.
"There aren't a lot of scenarios in which a person would have a multi-electrode array placed in their brain, where the electrodes are tiny enough to be able to detect the firing activity of individual neurons," says co-first author Jarosiewicz. Electrodes approved for medical indications, like those for treating Parkinson's disease or epilepsy, are too big to track the spiking activity of single neurons. But the electrode arrays used in the BrainGate pilot clinical trials are the first to allow for such detailed neural recordings in the human brain. "That's why this study is unprecedented," she says.
BrainGate is an academic research consortium spanning Brown University, Massachusetts General Hospital, Case Western Reserve University, and Stanford University. Researchers at BrainGate are working to develop chronically implanted brain-computer interfaces to help people with severe motor disabilities regain communication and control by using their brain signals to move computer cursors, robotic arms, and other assistive devices.
In this study, the team observed the same neuronal firing patterns during both the gaming period and the post-game rest period. In other words, it's as though the participants kept playing the Simon game after they were asleep, replaying the same patterns in their brain at a neuronal level. The findings provided direct evidence of learning-related replay in the human brain.
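As a rough illustration of what "the same neuronal firing patterns" can mean quantitatively, here is a toy Python sketch, not the study's actual analysis pipeline, in which the pairwise co-firing structure of a neural population during the task is compared with the co-firing structure during pre- and post-task rest. All spike counts below are simulated.

# Toy illustration (not the study's analysis) of quantifying "replay":
# correlate the pattern of pairwise neuron co-firing seen during the task
# with the co-firing pattern seen during pre- and post-task rest.
# Data here are randomly generated spike-count matrices (neurons x time bins).

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 20, 500

task = rng.poisson(2.0, size=(n_neurons, n_bins)).astype(float)
pre_rest = rng.poisson(2.0, size=(n_neurons, n_bins)).astype(float)
# Simulate post-task rest as partly preserving the task's co-firing structure
post_rest = 0.5 * task[:, rng.permutation(n_bins)] + 0.5 * rng.poisson(2.0, size=(n_neurons, n_bins))

def pairwise_corr(x):
    """Upper-triangle pairwise correlations between neurons."""
    c = np.corrcoef(x)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

task_pattern = pairwise_corr(task)
for label, rest in [("pre-task rest", pre_rest), ("post-task rest", post_rest)]:
    similarity = np.corrcoef(task_pattern, pairwise_corr(rest))[0, 1]
    print(f"Similarity of {label} to task pattern: {similarity:.2f}")
# Greater similarity after the task than before is the signature of replay.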
"This is the first piece of direct evidence that in humans, we also see replay during rest following learning that might help to consolidate those memories," said Jarosiewicz. "All the replay-related memory consolidation mechanisms that we've studied in animals for all these decades might actually generalize to humans as well."
The findings also open up more questions and future topics of study for researchers who want to understand the underlying mechanism by which replay enables memory consolidation. The next step is to find evidence that replay actually has a causal role in the memory consolidation process. One way to do that would be to test whether there's a relationship between the strength of the replay and the strength of post-nap memory recall.
Read more at Science Daily
Antibody blocks infection by SARS-CoV-2 in cells, scientists discover
The COVID-19 pandemic has spread rapidly across the globe, infecting more than 3.3 million people worldwide and killing more than 235,000 so far. Now, researchers from Utrecht University, Erasmus Medical Center and Harbour BioMed report in Nature Communications that they have identified a fully human antibody that blocks infection of cultured cells by SARS-CoV-2.
"This research builds on the work our groups have done in the past on antibodies targeting the SARS-CoV that emerged in 2002/2003," said Berend-Jan Bosch, Associate Professor, Research leader at Utrecht University, and co-lead author of the Nature Communications study. "Using this collection of SARS-CoV antibodies, we identified an antibody that also neutralizes infection of SARS-CoV-2 in cultured cells. Such a neutralizing antibody has potential to alter the course of infection in the infected host, support virus clearance or protect an uninfected individual that is exposed to the virus."
Dr. Bosch noted that the antibody binds to a domain that is conserved in both SARS-CoV and SARS-CoV-2, explaining its ability to neutralize both viruses. "This cross-neutralizing feature of the antibody is very interesting and suggests it may have potential in mitigation of diseases caused by future-emerging related coronaviruses."
"This discovery provides a strong foundation for additional research to characterize this antibody and begin development as a potential COVID-19 treatment," said Frank Grosveld, PhD. co-lead author on the study, Academy Professor of Cell Biology, Erasmus Medical Center, Rotterdam and Founding Chief Scientific Officer at Harbour BioMed. "The antibody used in this work is 'fully human,' allowing development to proceed more rapidly and reducing the potential for immune-related side effects." Conventional therapeutic antibodies are first developed in other species and then must undergo additional work to 'humanize' them. The antibody was generated using Harbour BioMed's H2L2 transgenic mouse technology.
"This is groundbreaking research," said Dr. Jingsong Wang, Founder, Chairman & Chief Executive Officer of HBM. "Much more work is needed to assess whether this antibody can protect or reduce the severity of disease in humans. We expect to advance development of the antibody with partners. We believe our technology can contribute to addressing this most urgent public health need and we are pursuing several other research avenues."
Read more at Science Daily
May 4, 2020
Arctic 'shorefast' sea ice threatened by climate change
For people who live in the Arctic, sea ice that forms along shorelines is a vital resource that connects isolated communities and provides access to hunting and fishing grounds. A new study by Brown University researchers found that climate change could significantly reduce this "shorefast ice" in communities across Northern Canada and Western Greenland.
The study, published in Nature Climate Change, used weather data and near-daily satellite observations of 28 Arctic communities to determine the timing of shorefast ice breakup in each location over the past 19 years. The analysis enabled the researchers to determine the conditions that drive springtime ice breakup. They then used climate models to predict how that timing might change in each community as the planet warms.
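The paper's own models are far more detailed, but a purely illustrative Python sketch of the general idea, relating springtime temperatures to breakup timing and then shifting the temperatures to mimic warming, might look like this. The temperature series, degree-day threshold, and warming offsets below are all hypothetical.

# Purely illustrative sketch (not the study's model): declare breakup when
# cumulative thawing degree-days cross a threshold calibrated from past
# observations, then see how the date shifts under a uniform warming offset.

import numpy as np

rng = np.random.default_rng(1)
days = np.arange(1, 366)
# Hypothetical daily mean temperature for an Arctic site (deg C): seasonal cycle plus noise
temps = -15.0 + 20.0 * np.sin(2 * np.pi * (days - 105) / 365) + rng.normal(0, 2, days.size)

def breakup_day(daily_temps, threshold_degree_days):
    """Day of year when cumulative degree-days above 0 C first exceed the threshold."""
    thaw = np.clip(daily_temps, 0, None)
    cumulative = np.cumsum(thaw)
    idx = np.argmax(cumulative >= threshold_degree_days)
    return int(days[idx])

THRESHOLD = 100.0  # hypothetical calibration from observed breakup dates
baseline = breakup_day(temps, THRESHOLD)
for warming in (1.0, 2.0, 4.0):
    shifted = breakup_day(temps + warming, THRESHOLD)
    print(f"+{warming:.0f} C warming: breakup ~{baseline - shifted} days earlier (day {shifted} vs {baseline})")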
The analysis found that by 2100, communities could see shorefast ice seasons reduced by anywhere from five to 44 days, with the coldest communities in the study seeing the largest reductions. The wide range of potential outcomes was a surprise, the researchers say, and underscores the need to take local factors into account when making policy to prepare for future climate change.
"One of the key takeaways for me is that even though the whole Arctic is going to warm and lose ice, we see very different outcomes from one community to another," said Sarah Cooley, lead author of the study and a Ph.D. student in the Institute at Brown for Environment and Society (IBES). "When you combine that wide range of outcomes with the fact that different communities have lots of social, cultural and economic differences, it means that some communities may experience much larger impacts than others."
For example, the northern Canadian communities of Clyde River and Taloyoak, which are particularly dependent upon shorefast ice for subsistence hunting and fishing, will see some of the most substantial declines in sea ice. On average, these two communities can expect ice to break up 23 and 44 days earlier, respectively, by 2100. That could mean "economically and culturally significant activities on the ice will be harder to maintain in the future," the researchers write.
That the coldest regions in the study could see the largest reductions in ice is cause for concern, says study co-author Johnny Ryan, a postdoctoral researcher at IBES.
"Some of these places are considered to be the last remnants of truly polar ecosystems and people talk a lot about preserving these areas in particular," Ryan said. "Yet these are the areas that we find will lose the most ice."
The research is part of a larger research effort aimed at better understanding how climate change in the Arctic will impact the people who live there. In addition to gathering satellite and scientific data, the research team conducted fieldwork in the community of Uummannaq in western Greenland to learn more about how the local population utilizes the ice.
"Shorefast ice is something that's most important from the standpoint of the people who use it," Cooley said. "It has some implications in terms of global climate, but those are fairly small. This is really all about how it affects the people who actually live in the Arctic, and that's why we're studying it."
The fieldwork also provided a first-hand perspective of how things have been changing over the years.
"One of the most powerful things that came out of the field study for me was listening to a hunter talk about how the ice is breaking up earlier than it ever has in his lifetime," Ryan said. "We're only observing this 20-year satellite record. But to be able to learn from locals about what things were like 50 or 60 years ago, it really emphasized how climate change has already impacted the community."
Moving forward, the research team is hopeful that mapping the local effects of regional and global climate patterns will be useful for policy-makers.
Read more at Science Daily
When natural disasters strike locally, urban networks spread the damage globally
When cyclones and other natural disasters strike a city or town, the social and economic impacts locally can be devastating. But these events also have ripple effects that can be felt in distant cities and regions -- even globally -- due to the interconnectedness of the world's urban trade networks.
In fact, a new study by researchers at the Yale School of Forestry & Environmental Studies finds that local economic impacts -- such as damage to factories and production facilities -- can trigger secondary impacts across the city's production and trade network. For the largest storms, they report, these impacts can account for as much as three-fourths of the total damage.
According to their findings, published in the journal Nature Sustainability, the extent of these secondary costs depends more on the structure of the production and supply networks for a particular city than on its geographic location. Regional cities that are dependent on their urban network for industrial supplies -- and that have access to relatively few suppliers -- are most vulnerable to these secondary impacts. Larger, global cities such as New York and Beijing, meanwhile, are more insulated from risks.
"Cities are strongly connected by flows of people, of energy, and ideas -- but also by the flows of trade and materials," said Chris Shughrue '18 Ph.D., lead author of the study which is based on his dissertation work at Yale. He is now a data scientist at StreetCred Labs in New York. "These connections have implications for vulnerability, particularly as we anticipate cyclones and other natural hazards to become more intense and frequent as a result of climate change over the coming decades."
The paper was co-authored by Karen Seto, a professor of geography and urbanization science at F&ES, and B.T. Werner, a professor from the Scripps Institution of Oceanography.
"This study is especially important in the context of climate impacts on urban areas," Seto said. "Whereas we tend to consider a city's vulnerability to climate change as limited to local events, this study shows that we need to rethink this conceptualization. It shows that disasters have a domino effect through urban networks."
Using a simulation coupled with a global urban trade network model -- which maps the interdependencies of cities worldwide -- the researchers show how simulated disasters in one location can trigger a catastrophic domino effect.
The global spread of damage was particularly acute when cyclones occurred in cities of North America and East Asia, largely because of their outsize role in global trade networks -- as purchasers and suppliers, respectively -- and because these regions are particularly susceptible to cyclone events.
Often, adverse impacts are primarily caused by a spike in material prices, followed by production losses to purchasers. These production losses eventually can cause industrial shortages, which can then induce additional cycles of price spikes and shortages throughout the production chain.
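That mechanism can be made concrete with a toy cascade model, sketched below in Python. This is not the study's coupled trade-network simulation; the city names, dependency shares, and damping factor are invented purely to show how a local production loss can propagate downstream through supplier relationships.

# Toy cascade (not the study's simulation) showing how a local production
# shock propagates through a directed supplier -> purchaser network.
# City names and dependency weights below are hypothetical.

# suppliers[city] = {supplier_city: share of city's inputs sourced from supplier_city}
suppliers = {
    "CityB": {"CityA": 0.6, "CityC": 0.4},
    "CityC": {"CityA": 0.3},
    "CityD": {"CityB": 0.8, "CityC": 0.2},
}

def propagate(initial_losses, suppliers, rounds=10, substitution=0.5):
    """Iteratively pass production losses downstream.

    A city's loss is the dependency-weighted loss of its suppliers, damped by a
    substitution factor standing in for the ability to find alternative suppliers.
    """
    losses = dict(initial_losses)
    for _ in range(rounds):
        updated = dict(losses)
        for city, deps in suppliers.items():
            inherited = sum(share * losses.get(sup, 0.0) for sup, share in deps.items())
            updated[city] = max(losses.get(city, 0.0), (1 - substitution) * inherited)
        if updated == losses:
            break
        losses = updated
    return losses

# A cyclone knocks out 50% of production in CityA; watch the loss spread.
final = propagate({"CityA": 0.5}, suppliers)
for city, loss in sorted(final.items()):
    print(f"{city}: {loss:.0%} production loss")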
Similar outcomes have been borne out following real-world disasters. For instance, when catastrophic flooding occurred in Queensland, Australia, the disruption to coking coal production prompted a 25 percent spike in global prices for the commodity. And the economic impacts of Hurricane Katrina extended far beyond New Orleans for several years after the historic storm.
While the example of cyclones can act as a proxy for other isolated disasters -- such as the 2011 tsunami in Japan which caused global economic disruptions, particularly in the auto sector -- the researchers say the findings are particularly relevant in terms of climate-related natural events.
Read more at Science Daily
Predators help prey adapt to an uncertain future
What effect does the extinction of species have on the evolution of surviving species? Evolutionary biologists have investigated this question by conducting a field experiment with a leaf-galling fly and its predatory enemies. They found that the loss of its natural enemies could make it more difficult for the prey species to adapt to future environments.
According to many experts, the Earth is at the beginning of its sixth mass extinction, which is already having dire consequences for the functioning of natural ecosystems. What remains unclear is how these extinctions will alter the future ability of remaining species to adapt.
Researchers from the University of Zurich have now pursued this question with a field experiment in California. They investigated how the traits of a tiny fly changed when a group of its natural enemies was removed. From their observations, they drew conclusions about changes in the genetic diversity of the flies.
Specific elimination of parasitoids
The fly Iteomyia salicisverruca lives on willow leaves in tooth-shaped growths called galls, which it induces in its larval stage. The natural enemies of this fly include several species of parasitic wasps. These wasps lay their eggs inside the fly larva within the gall, where they then develop into parasitic predators known as parasitoids. Before the adult wasp leaves the gall, it devours its host, the fly.
Some species of these parasitoids attack before the gall is formed, while others parasitize fly larvae later in their development and pierce through the gall. The researchers specifically eliminated the latter group of natural enemies by attaching fine-meshed nets over leaves with galls before they were attacked.
After three months, the biologists collected about 600 galls and checked whether the fly larvae had survived. They also measured three traits that influence a fly's survival of parasitoid attack: the size of the gall; the number of flies within a gall; and the fly's preference for creating galls on particular genetic varieties of willow trees. Using these data, they then created "fitness landscapes" with computer models, which visualize the adaptability of a species.
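As a loose illustration of what building a "fitness landscape" from such data can involve (not the authors' actual model), the Python sketch below fits survival probability as a smooth function of a single hypothetical trait, gall size, and then evaluates that fitted function across a grid of trait values.

# Minimal sketch (not the authors' model) of a one-trait "fitness landscape":
# fit survival probability as a function of hypothetical gall sizes, then
# evaluate the fitted curve over a grid of trait values.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: gall size (mm) and whether the larva survived parasitoid attack
gall_size = rng.normal(5.0, 1.5, 300)
true_prob = 1 / (1 + np.exp(-(gall_size - 5.0)))          # larger galls survive more often
survived = (rng.random(300) < true_prob).astype(float)

# Fit a one-predictor logistic regression by gradient descent
X = np.column_stack([np.ones_like(gall_size), gall_size])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - survived) / len(survived)

# The "fitness landscape": predicted survival across a grid of gall sizes
grid = np.linspace(2, 8, 7)
fitness = 1 / (1 + np.exp(-(w[0] + w[1] * grid)))
for size, fit in zip(grid, fitness):
    print(f"gall size {size:.1f} mm -> predicted survival {fit:.2f}")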
Fewer enemies, less variability
It turned out that different combinations of these three traits helped flies survive when all of the fly's natural enemies were present. "So there are several equally good solutions that ensure the survival of the fly," says Matt Barbour, the study's lead author. In contrast, after some natural enemies were removed, only one specific combination of traits helped flies survive. "This suggests that the extinction of natural enemies constrains fly evolution toward only one optimal solution." Genetic variations that lead to a different development of the traits could thus be permanently lost in the flies' genome.
This loss of diversity might be of consequence: "The diversity of potential solutions for survival acts to preserve genetic variability in the gall's traits," says Barbour. And since genetic variation provides the raw material for evolution, the findings suggest that the extinction of this fly's natural enemies may make it more difficult for it to adapt to a changing environment.
Read more at Science Daily
Why smartphones are digital truth serum
Researchers from the University of Pennsylvania have published a new paper in the Journal of Marketing showing that the device people use to communicate can affect the extent to which they are willing to disclose intimate or personal information about themselves.
The study forthcoming in the Journal of Marketing is titled "Full Disclosure: How Smartphones Enhance Consumer Self-disclosure" and is authored by Shiri Melumad and Robert Meyer.
Do smartphones alter what people are willing to disclose about themselves to others? A new study in the Journal of Marketing suggests that they might. The research indicates that people are more willing to reveal personal information about themselves online using their smartphones compared to desktop computers. For example, Tweets and reviews composed on smartphones are more likely to be written from the perspective of the first person, to disclose negative emotions, and to discuss the writer's private family and personal friends. Likewise, when consumers receive an online ad that requests personal information (such as phone number and income), they are more likely to provide it when the request is received on their smartphone compared to their desktop or laptop computer.
Why do smartphones have this effect on behavior? Melumad explains that "Writing on one's smartphone often lowers the barriers to revealing certain types of sensitive information for two reasons; one stemming from the unique form characteristics of phones and the second from the emotional associations that consumers tend to hold with their device." First, one of the most distinguishing features of phones is the small size; something that makes viewing and creating content generally more difficult compared with desktop computers. Because of this difficulty, when writing or responding on a smartphone, a person tends to narrowly focus on completing the task and become less cognizant of external factors that would normally inhibit self-disclosure, such as concerns about what others would do with the information. Smartphone users know this effect well -- when using their phones in public places, they often fixate so intently on its content that they become oblivious to what is going on around them.
The second reason people tend to be more self-disclosing on their phones lies in the feelings of comfort and familiarity people associate with their phones. Melumad adds, "Because our smartphones are with us all of the time and perform so many vital functions in our lives, they often serve as 'adult pacifiers' that bring feelings of comfort to their owners." The downstream effect of those feelings shows itself when people are more willing to disclose feelings to a close friend compared to a stranger or open up to a therapist in a comfortable rather than uncomfortable setting. As Meyer says, "Similarly, when writing on our phones, we tend to feel that we are in a comfortable 'safe zone.' As a consequence, we are more willing to open up about ourselves."
The data to support these ideas is far-ranging and includes analyses of thousands of social media posts and online reviews, responses to web ads, and controlled laboratory studies. For example, initial evidence comes from analyses of the depth of self-disclosure revealed in 369,161 Tweets and 10,185 restaurant reviews posted on TripAdvisor.com, with some posted on PCs and some on smartphones. Using both automated natural-language processing tools and human judgements of self-disclosure, the researchers find robust evidence that smartphone-generated content is indeed more self-disclosing. Perhaps even more compelling is evidence from an analysis of 19,962 "call to action" web ads, where consumers are asked to provide private information. Consistent with the tendency for smartphones to facilitate greater self-disclosure, compliance was systematically higher for ads targeted at smartphones versus PCs.
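For readers curious what an automated self-disclosure measure can look like in its simplest form, here is a toy Python sketch, not the authors' NLP pipeline, that scores texts by their rate of first-person pronouns and simple emotion words and compares phone-composed with PC-composed examples. The example posts and word lists are invented.

# Toy proxy for self-disclosure (not the authors' method): the rate of
# first-person pronouns and simple emotion words in phone-composed versus
# PC-composed text. The example posts below are invented.

import re

FIRST_PERSON = {"i", "me", "my", "mine", "we", "our", "us"}
EMOTION_WORDS = {"love", "hate", "angry", "sad", "happy", "worried", "afraid"}

posts = [
    ("phone", "I was so worried about my mom, but the staff made me feel safe."),
    ("phone", "Honestly I hate waiting, but my family was happy with the food."),
    ("pc", "The restaurant offers a varied menu and reasonable prices."),
    ("pc", "Service was prompt; the venue is conveniently located downtown."),
]

def disclosure_rate(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(t in FIRST_PERSON or t in EMOTION_WORDS for t in tokens)
    return hits / len(tokens) if tokens else 0.0

for device in ("phone", "pc"):
    rates = [disclosure_rate(text) for d, text in posts if d == device]
    print(f"{device}: mean disclosure-proxy rate = {sum(rates) / len(rates):.2f}")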
Read more at Science Daily