Research conducted at Swansea University and the University of Milan has shown that students who use digital technology excessively are less motivated to engage with their studies, and are more anxious about tests. This effect was made worse by the increased feelings of loneliness that use of digital technology produced.
Two hundred and eighty-five university students, enrolled on a range of health-related degree courses, participated in the study. They were assessed for their use of digital technology, their study skills and motivation, anxiety, and loneliness. The study found a negative relationship between internet addiction and motivation to study. Students reporting more internet addiction also found it harder to organise their learning productively, and were more anxious about their upcoming tests. The study also found that internet addiction was associated with loneliness, and that this loneliness made study harder.
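The pattern described here -- internet addiction linked to loneliness, and loneliness in turn linked to poorer study outcomes -- is the classic structure of a statistical mediation analysis. As a purely illustrative sketch (not the authors' actual analysis; all variable names and coefficients below are hypothetical, and only the sample size comes from the article), the logic can be checked on survey-style scores like this:

```python
# Toy mediation check in the style of Baron & Kenny. All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 285                                            # sample size from the study
addiction = rng.normal(size=n)                     # internet-addiction score
loneliness = 0.5 * addiction + rng.normal(size=n)  # proposed mediator
motivation = -0.4 * loneliness - 0.2 * addiction + rng.normal(size=n)

# Total effect of addiction on motivation (ignoring the mediator):
total = sm.OLS(motivation, sm.add_constant(addiction)).fit()

# Direct effect once loneliness is controlled for:
X = sm.add_constant(np.column_stack([addiction, loneliness]))
direct = sm.OLS(motivation, X).fit()

print(f"total effect:  {total.params[1]:.2f}")
print(f"direct effect: {direct.params[1]:.2f}")
# If the direct effect is noticeably weaker than the total effect,
# part of the addiction-motivation link runs through loneliness.
```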
Professor Phil Reed of Swansea University said: "These results suggest that students with high levels of internet addiction may be particularly at risk from lower motivations to study, and, hence, lower actual academic performance."
About 25% of the students reported that they spent over four hours a day online, with the rest indicating that they spent between one and three hours a day. The main uses of the internet for the student sample were social networking (40%) and information seeking (30%).
Professor Truzoli of Milan University said: "Internet addiction has been shown to impair a range of abilities such as impulse control, planning, and sensitivity to rewards. A lack of ability in these areas could well make study harder."
In addition to the links between levels of internet addiction and poor study motivation and ability, internet addiction was found to be associated with increased loneliness. The results indicated that loneliness, in turn, made studying harder for the students.
The study suggests that loneliness plays a large role in positive feelings about academic life in higher education. The poorer social interactions that are known to be associated with internet addiction make loneliness worse, and, in turn, impact on motivation to engage in a highly social educational environment such as a university.
Read more at Science Daily
Jan 18, 2020
Mosquitoes engineered to repel dengue virus
Image: Aedes aegypti mosquito
Led by biologists at the University of California San Diego, the research team describes how it engineered dengue resistance in Aedes aegypti mosquitoes, the insects that spread dengue in humans, in a paper published January 16 in the journal PLOS Pathogens.
Researchers in UC San Diego Associate Professor Omar Akbari's lab worked with colleagues at Vanderbilt University Medical Center to identify a broad-spectrum human antibody for dengue suppression. The development marks the first engineered approach in mosquitoes that targets all four known types of dengue, improving upon previous designs that addressed single strains.
They then designed the antibody "cargo" to be synthetically expressed in female A. aegypti mosquitoes, which spread the dengue virus.
"Once the female mosquito takes in blood, the antibody is activated and expressed -- that's the trigger," said Akbari, of the Division of Biological Sciences and a member of the Tata Institute for Genetics and Society. "The antibody is able to hinder the replication of the virus and prevent its dissemination throughout the mosquito, which then prevents its transmission to humans. It's a powerful approach."
Akbari said the engineered mosquitoes could easily be paired with a dissemination system, such as a gene drive based on CRISPR/Cas9 technology, capable of spreading the antibody throughout wild disease-transmitting mosquito populations.
"It is fascinating that we now can transfer genes from the human immune system to confer immunity to mosquitoes. This work opens up a whole new field of biotechnology possibilities to interrupt mosquito-borne diseases of man," said coauthor James Crowe, Jr., M.D., director of the Vanderbilt Vaccine Center at Vanderbilt University Medical Center in Nashville, Tenn.
According to the World Health Organization, dengue virus threatens millions of people in tropical and sub-tropical climates. Severe dengue is a leading cause of serious illness and death among children in many Asian and Latin American countries. The Pan American Health Organization recently reported the highest number of dengue cases ever recorded in the Americas. Dengue causes flu-like symptoms, including severe fevers and rashes, and is particularly dangerous for people with compromised immune systems. Serious cases can include life-threatening bleeding. Currently no specific treatment exists, and thus prevention and control depend on measures that stop the spread of the virus.
"This development means that in the foreseeable future there may be viable genetic approaches to controlling dengue virus in the field, which could limit human suffering and mortality," said Akbari, whose lab is now in the early stages of testing methods to simultaneously neutralize mosquitoes against dengue and a suite of other viruses such as Zika, yellow fever and chikungunya.
"Mosquitoes have been given the bad rap of being the deadliest killers on the planet because they are the messengers that transmit diseases like malaria, dengue, chikungunya, Zika and yellow fever that collectively put 6.5 billion people at risk globally," said Suresh Subramani, professor emeritus of molecular biology at UC San Diego and global director of the Tata Institute for Genetics and Society (TIGS). "Until recently, the world has focused on shooting (killing) this messenger. Work from the Akbari lab and at TIGS is aimed at disarming the mosquito instead by preventing it from transmitting diseases, without killing the messenger. This paper shows that it is possible to immunize mosquitoes and prevent their ability to transmit dengue virus, and potentially other mosquito-borne pathogens."
Read more at Science Daily
Jan 17, 2020
The cores of massive dying galaxies already formed 1.5 billion years after the Big Bang
The most distant dying galaxy discovered so far, more massive than our Milky Way -- with more than a trillion stars -- has revealed that the 'cores' of these systems had already formed 1.5 billion years after the Big Bang, about 1 billion years earlier than previous measurements indicated. The discovery will add to our knowledge of the formation of the Universe more generally, and may require the computer models astronomers use, one of their most fundamental tools, to be revised. The result was obtained in close collaboration with Masayuki Tanaka and his colleagues at the National Observatory of Japan and is now published in two works in the Astrophysical Journal Letters and the Astrophysical Journal.
What is a "dead" galaxy?
Galaxies are broadly categorized as dead or alive: dead galaxies are no longer forming stars, while alive galaxies are still bright with star formation activity. A 'quenching' galaxy is a galaxy in the process of dying -- meaning its star formation is significantly suppressed. Quenching galaxies are not as bright as fully alive galaxies, but they are not as dark as dead galaxies. Researchers use this spectrum of brightness as the first line of identification when observing galaxies in the Universe.
The farthest dying galaxy discovered so far reveals remarkable maturity
A team of researchers from the Cosmic Dawn Center at the Niels Bohr Institute and the National Observatory of Japan recently discovered a massive galaxy that was already dying 1.5 billion years after the Big Bang, the most distant of its kind. "Moreover, we found that its core seems already fully formed at that time," says Masayuki Tanaka, the author of the letter. "This result pairs up with the fact that, when these dying gigantic systems were still alive and forming stars, they might not have been that extreme compared with the average population of galaxies," adds Francesco Valentino, assistant professor at the Cosmic Dawn Center at the Niels Bohr Institute and author of an article on the past history of dead galaxies that appeared in the Astrophysical Journal.
Why do galaxies die? -- One of the biggest and still unanswered questions in astrophysics
"The suppressed star formation tells us that a galaxy is dying, sadly, but that is exactly the kind of galaxy we want to study in detail to understand why it dies," continues Valentino. One of the biggest questions that astrophysics still has not answered is how a galaxy goes from being star-forming to being dead. For instance, the Milky Way is still alive and slowly forming new stars, but not too far away (in astronomical terms), the central galaxy of the Virgo cluster -- M87 -- is dead and completely different. Why is that? "It might have to do with the presence of gigantic and active black hole at the center of galaxies like M87" Valentino says.
Earth-based telescopes find extremes -- but astronomers look for normality
One of the problems in observing galaxies in this much detail is that the telescopes available on Earth today are generally able to find only the most extreme systems. However, the key to describing the history of the Universe is held by the vastly more numerous population of normal objects. "Since we are trying hard to discover this normality, the current observational limitations are an obstacle that has to be overcome," Valentino says.
The James Webb Space Telescope (JWST) offers hope of better data in the near future
The new James Webb Space Telescope, scheduled for launch in 2021, will be able to provide astronomers with data at a level of detail that should make it possible to map exactly this "normality." The methods developed in close collaboration between the Japanese team and the team at the Niels Bohr Institute have already proven successful, given the recent result. "This is significant, because it will enable us to look for the most promising galaxies from the start, when JWST gives us access to much higher quality data," Francesco Valentino explains.
Combining observations with a fundamental tool -- computer models of the Universe
What has been found observationally is not too far from what the most recent models predict. "Until very recently, we did not have many observations to compare with the models. However, the situation is evolving rapidly, and with JWST we will have valuable larger samples of 'normal' galaxies in a few years. The more galaxies we can study, the better we are able to understand the properties or situations leading to a certain state -- whether the galaxy is alive, quenching or dead. It is basically a question of writing the history of the Universe correctly, and in greater and greater detail. At the same time, we are tuning the computer models to take our observations into account, which will be a huge improvement, not just for our branch of work, but for astronomy in general," Francesco Valentino explains.
Read more at Science Daily
What is a "dead" galaxy?
Galaxies are broadly categorized as dead or alive: dead galaxies are no longer forming stars, while alive galaxies are still bright with star formation activity. A 'quenching' galaxy is a galaxy in the process of dying -- meaning its star formation is significantly suppressed. Quenching galaxies are not as bright as fully alive galaxies, but they are not as dark as dead galaxies. Researchers use this spectrum of brightness as the first line of identification when observing galaxies in the Universe.
The farthest dying galaxy discovered so far reveals remarkable maturity
A team of researchers of the Cosmic Dawn Center at the Niels Bohr Institute and the National Observatory of Japan recently discovered a massive galaxy dying already 1.5 billion years after the Big Bang, the most distant of its kind. "Moreover, we found that its core seems already fully formed at that time," says Masayuki Tanaka, the author of the letter. "This result pairs up with the fact that, when these dying gigantic systems were still alive and forming stars, they might have not been that extreme compared with the average population of galaxies," adds Francesco Valentino, assistant professor at the Cosmic Dawn Center at the Niels Bohr Institute and author of an article on the past history of dead galaxies appeared in the Astrophysical Journal.
Why do galaxies die? -- One of the biggest and still unanswered questions in astrophysics
"The suppressed star formation tells us that a galaxy is dying, sadly, but that is exactly the kind of galaxy we want to study in detail to understand why it dies," continues Valentino. One of the biggest questions that astrophysics still has not answered is how a galaxy goes from being star-forming to being dead. For instance, the Milky Way is still alive and slowly forming new stars, but not too far away (in astronomical terms), the central galaxy of the Virgo cluster -- M87 -- is dead and completely different. Why is that? "It might have to do with the presence of gigantic and active black hole at the center of galaxies like M87" Valentino says.
Earth based telescopes find extremes -- but astronomers look for normality
One of the problems in observing galaxies in this much detail is that the telescopes available now on Earth are generally able to find only the most extreme systems. However, the key to describe the history of the Universe is held by the vastly more numerous population of normal objects. "Since we are trying hard to discover this normality, the current observational limitations are an obstacle that has to be overcome."
The James Webb Telescope (JWST) represents hope for better data material in the near future
The new James Webb Space Telescope, scheduled for launch in 2021, will be able to provide the astronomers with data in a level of detail that should be able to map exactly this "normality." The methods developed in close collaboration between the Japanese team and the team at the Niels Bohr Institute have already proven to be successful, given the recent result. "This is significant, because it will enable us to look for the most promising galaxies from the start, when JWST gives us access to much higher quality data" Francesco Valentino explains.
Combining observations with the tool -- the computer models of the Universe
What has been found observationally is not too far away from what the most recent models predict. "Until very recently, we did not have many observations to compare with the models. However, the situation is in rapid evolution, and with JWST we will have valuable larger samples of ``normal'' galaxies in a few years. The more galaxies we can study, the better we are able to understand the properties or situations leading to a certain state -- if the galaxy is alive, quenching or dead. It is basically a question of writing the history of the Universe correctly, and in greater and greater detail. At the same time, we are tuning the computer models to take our observations into account, which will be a huge improvement, not just for our branch of work, but for astronomy in general" Francesco Valentino explains.
Read more at Science Daily
Fossil is the oldest-known scorpion
Image: Modern-day scorpion
The discovery provides new information about how animals transitioned from living in the sea to living entirely on land: The scorpion's respiratory and circulatory systems are almost identical to those of our modern-day scorpions -- which spend their lives exclusively on land -- and operate similarly to those of a horseshoe crab, which lives mostly in the water, but which is capable of forays onto land for short periods of time.
The researchers named the new scorpion Parioscorpio venator. The genus name means "progenitor scorpion," and the species name means "hunter." They outlined their findings in a study published today in the journal Scientific Reports.
"We're looking at the oldest known scorpion -- the oldest known member of the arachnid lineage, which has been one of the most successful land-going creatures in all of Earth history," said Loren Babcock, an author of the study and a professor of earth sciences at The Ohio State University.
"And beyond that, what is of even greater significance is that we've identified a mechanism by which animals made that critical transition from a marine habitat to a terrestrial habitat. It provides a model for other kinds of animals that have made that transition including, potentially, vertebrate animals. It's a groundbreaking discovery."
The "hunter scorpion" fossils were unearthed in 1985 from a site in Wisconsin that was once a small pool at the base of an island cliff face. They had remained unstudied in a museum at the University of Wisconsin for more than 30 years when one of Babcock's doctoral students, Andrew Wendruff -- now an adjunct professor at Otterbein University in Westerville -- decided to examine the fossils in detail.
Wendruff and Babcock knew almost immediately that the fossils were scorpions. But, initially, they were not sure how close these fossils were to the roots of arachnid evolutionary history. The earliest known scorpion to that point had been found in Scotland and dated to about 434 million years ago. Scorpions, paleontologists knew, were one of the first animals to live on land full-time.
The Wisconsin fossils, the researchers ultimately determined, are between 1 million and 3 million years older than the fossil from Scotland. They figured out how old this scorpion was from other fossils in the same formation. Those fossils came from creatures that scientists think lived between 436.5 and 437.5 million years ago, during the early part of the Silurian period, the third period in the Paleozoic era.
"People often think we use carbon dating to determine the age of fossils, but that doesn't work for something this old," Wendruff said. "But we date things with ash beds -- and when we don't have volcanic ash beds, we use these microfossils and correlate the years when those creatures were on Earth. It's a little bit of comparative dating."
The Wisconsin fossils -- from a formation that contains fossils known as the Waukesha Biota -- show features typical of a scorpion, but detailed analysis showed some characteristics that were not previously known in any scorpion, such as additional body segments and a short "tail" region, all of which shed light on the ancestry of this group.
Wendruff examined the fossils under a microscope, and took detailed, high-resolution photographs of the fossils from different angles. Bits of the animal's internal organs, preserved in the rock, began to emerge. He identified the appendages, a chamber where the animal would have stored its venom, and -- most importantly -- the remains of its respiratory and circulatory systems.
This scorpion is about 2.5 centimeters long -- about the same size as many scorpions in the world today. And, Babcock said, it shows a crucial evolutionary link between the way ancient ancestors of scorpions respired under water, and the way modern-day scorpions breathe on land. Internally, the respiratory-circulatory system has a structure just like that found in today's scorpions.
"The inner workings of the respiratory-circulatory system in this animal are, shape-wise, identical to those of the arachnids and scorpions that breathe air exclusively," Babcock said. "But it also is incredibly similar to what we recognize in marine arthropods like horseshoe crabs. So, it looks like this scorpion, this lineage, must have been pre-adapted to life on land, meaning they had the morphologic capability to make that transition, even before they first stepped onto land."
Paleontologists have for years debated how animals moved from sea to land. Some fossils show walking traces in the sand that may be as old as 560 million years, but these traces may have been made in prehistoric surf -- meaning it is difficult to know whether animals were living on land or darting out from their homes in the ancient ocean.
Read more at Science Daily
'Living fossil' may upend basic tenet of evolutionary theory
Image: Illustration of Cryptococcus neoformans
But now, a UC San Francisco-led research team has discovered the first conclusive evidence that selection may also occur at the level of the epigenome -- a term that refers to an assortment of chemical "annotations" to the genome that determine whether, when and to what extent genes are activated -- and has done so for tens of millions of years. This unprecedented finding subverts the widely accepted notion that over geologic timescales, natural selection acts exclusively on variation in the genome sequence itself.
In a study published Jan. 16, 2020 in the journal Cell, the researchers show that Cryptococcus neoformans -- a pathogenic yeast that infects people with weakened immune systems and is responsible for about 20 percent of all HIV/AIDS-related deaths -- contains a particular epigenetic "mark" on its DNA sequence, which, based on their lab experiments and statistical models, should have disappeared from the species sometime during the age of the dinosaurs.
But the study shows that this methylation mark -- so named because it's created through a process that attaches a molecular tag called a methyl group to the genome -- has managed to stick around for at least 50 million years -- maybe as long as 150 million years -- past its predicted expiration date. This amazing feat of evolutionary tenacity is made possible by an unusual enzyme and a hefty dose of natural selection.
"What we've seen is that methylation can undergo natural variation and can be selected for over million-year time scales to drive evolution," explained Hiten Madhani, MD, PhD, professor of biochemistry and biophysics at UCSF and senior author of the new study. "This is a previously unappreciated mode of evolution that's not based on changes in the organism's DNA sequence."
Though not seen in all life forms, DNA methylation isn't uncommon either. It's found in all vertebrates and plants, as well as many fungi and insects. In some species, however, methylation is nowhere to be found.
"Methylation has a patchy evolutionary presence," said Madhani, who is also a member of the UCSF Helen Diller Family Comprehensive Cancer Center and a Chan-Zuckerberg Biohub investigator. "Depending on what branch of the evolutionary tree you look at, different epigenetic mechanisms have been maintained or not maintained."
Many model organisms that are staples of the modern molecular biology lab -- including the baker's yeast S. cerevisiae, the roundworm C. elegans, and the fruit fly D. melanogaster -- lack DNA methylation entirely. These species are descended from ancient ancestors that lost enzymes that were, until this study was published, thought to be essential for propagating methylation for generation upon generation. How C. neoformans managed to avoid the same fate was a mystery up to now.
In the new study, Madhani and his collaborators show that hundreds of millions of years ago, the ancestor of C. neoformans had two enzymes that controlled DNA methylation. One was what's known as a "de novo methyltransferase," which was responsible for adding methylation marks to "naked" DNA that had none. The other was a "maintenance methyltransferase" that functioned a bit like a molecular Xerox. This enzyme copied existing methylation marks, which had been put in place by the de novo methyltransferase, onto unmethylated DNA during DNA replication. And like every other species with an epigenome that includes methylation, the ancestor of C. neoformans had both types of methyltransferase.
But then, sometime during the age of the dinosaurs, the ancestor of C. neoformans lost its de novo enzyme. Its descendants have been living without one since then, making C. neoformans and its closest relatives the only species alive today known to have DNA methylation without a de novo methyltransferase. "We didn't understand how methylation could still be in place since the Cretaceous period without a de novo enzyme," said Madhani.
Though the maintenance methyltransferase was still available to copy any existing methylation marks -- and the new study clearly demonstrates that this enzyme is unique among such enzymes for a number of reasons, including its ability to propagate existing methylation marks with exceptionally high fidelity -- the study also shows that unless natural selection were acting to preserve methylation, the ancient loss of the de novo methyltransferase should have resulted in the rapid demise and eventual disappearance of DNA methylation in C. neoformans.
That's because methylation marks can be randomly lost, which means that no matter how exquisitely a maintenance methyltransferase copies existing marks onto new strands of DNA, the accumulated loss of methylation would eventually leave the maintenance enzyme with no template to work from. Though it's conceivable that these loss events might occur at a sluggish pace, experimental observations allowed the researchers to determine that each methylation mark in C. neoformans was likely to disappear from half of the population after just 7,500 generations. Even assuming that for some reason C. neoformans might reproduce 100 times more slowly in the wild than in the lab, this would still be the equivalent of only 130 years.
The rare and random acquisition of new methylation marks can't account for the persistence of methylation in C. neoformans either. The researchers' lab experiments demonstrated that new methylation marks arise by chance at a rate 20 times slower than methylation losses. Over evolutionary timescales, the losses would clearly predominate, and without a de novo enzyme to compensate, methylation would have vanished from C. neoformans around the time when dinosaurs disappeared had it not been for selection pressures favoring the marks.
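The arithmetic behind those two rates is simple enough to reproduce. A minimal sketch, assuming exponential loss of marks: the 7,500-generation half-life and the 20-fold loss/gain asymmetry come from the article, while the ~1.5-hour lab doubling time is an illustrative assumption chosen to match the quoted ~130-year figure:

```python
# A mark lost from half the population after ~7,500 generations implies a
# per-generation loss probability r satisfying (1 - r) ** 7500 = 0.5:
r_loss = 1 - 0.5 ** (1 / 7500)
print(f"per-generation loss rate ~ {r_loss:.1e}")      # ~9.2e-05

# New marks appear ~20x more slowly, so without selection losses win out:
r_gain = r_loss / 20
print(f"per-generation gain rate ~ {r_gain:.1e}")      # ~4.6e-06

# Converting generations to calendar time: assume ~1.5 h per lab generation
# (hypothetical figure), slowed 100-fold in the wild as the article allows.
wild_generation_hours = 100 * 1.5
years = 7500 * wild_generation_hours / (24 * 365)
print(f"half-life in the wild ~ {years:.0f} years")    # ~128, i.e. roughly 130
```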
In fact, when the researchers compared a variety of C. neoformans strains that were known to have diverged from one another nearly 5 million years ago, they found that not only did all the strains still have DNA methylation, but the methylation marks were coating analogous regions of the genome, a finding which suggests that methylation marks at specific genomic sites confer some sort of survival advantage that's being selected for.
"Natural selection is maintaining methylation at much higher levels than would be expected from a neutral process of random gains and losses. This is the epigenetic equivalent of Darwinian evolution," said Madhani.
Asked why evolution would select for these particular marks, Madhani explained that "one of methylation's major functions is genome defense. In this case we think it's for silencing transposons."
Transposons, also known as jumping genes, are stretches of DNA that are able to extract themselves from one part of the genome and insert themselves into another. If a transposon were to insert itself into the middle of a gene needed for survival, that gene may no longer function and the cell would die. Therefore, transposon-silencing methylation provides an obvious survival advantage, which is exactly what's needed to drive evolution.
However, it remains to be seen how common this unappreciated form of natural selection is in other species.
"Previously, there was no evidence of this kind of selection happening over these time scales. This is an entirely novel concept," Madhani said. "But now the big question is 'Is this happening outside of this exceptional circumstance, and if so, how do we find it?'"
Read more at Science Daily
In death of dinosaurs, it was all about the asteroid -- not volcanoes
Image: Illustrated scene of dinosaurs and asteroid
In a break from a number of other recent studies, Yale assistant professor of geology & geophysics Pincelli Hull and her colleagues argue in a new research paper in Science that environmental impacts from massive volcanic eruptions in India in the region known as the Deccan Traps happened well before the Cretaceous-Paleogene extinction event 66 million years ago and therefore did not contribute to the mass extinction.
Most scientists acknowledge that the mass extinction event, also known as K-Pg, occurred after an asteroid slammed into Earth. Some researchers also have focused on the role of volcanoes in K-Pg due to indications that volcanic activity happened around the same time.
"Volcanoes can drive mass extinctions because they release lots of gases, like SO2 and CO2, that can alter the climate and acidify the world," said Hull, lead author of the new study. "But recent work has focused on the timing of lava eruption rather than gas release."
To pinpoint the timing of volcanic gas emission, Hull and her colleagues compared global temperature change and carbon isotopes (isotopes are atoms of the same element with different numbers of neutrons) from marine fossils with models of the climatic effect of CO2 release. They concluded that most of the gas release happened well before the asteroid impact -- and that the asteroid was the sole driver of extinction.
"Volcanic activity in the late Cretaceous caused a gradual global warming event of about two degrees, but not mass extinction," said former Yale researcher Michael Henehan, who compiled the temperature records for the study. "A number of species moved toward the North and South poles but moved back well before the asteroid impact."
Added Hull, "A lot of people have speculated that volcanoes mattered to K-Pg, and we're saying, 'No, they didn't.'"
Recent work on the Deccan Traps, in India, has also pointed to massive eruptions in the immediate aftermath of the K-Pg mass extinction. These results have puzzled scientists because there is no warming event to match. The new study suggests an answer to this puzzle, as well.
Read more at Science Daily
Jan 16, 2020
Neanderthals went underwater for their tools
Image: Clam shells
Neanderthals are known to have used tools, but the extent to which they were able to exploit coastal resources has been questioned. In this study, Villa and colleagues explored artifacts from the Neanderthal archaeological cave site of Grotta dei Moscerini in Italy, one of two Neanderthal sites in the country with an abundance of hand-modified clam shells, dating back to around 100,000 years ago.
The authors examined 171 modified shells, most of which had to be retouched to be used as scrapers. All of these shells belonged to the Mediterranean smooth clam species Callista chione. Based on the state of preservation of the shells, including shell damage and encrustation on the shells by marine organisms, the authors inferred that nearly a quarter of the shells had been collected underwater from the sea floor, as live animals, as opposed to being washed up on the beach. In the same cave sediments, the authors also found abundant pumice stones likely used as abrading tools, which apparently drifted via sea currents from erupting volcanoes in the Gulf of Naples (70 km south) onto the Moscerini beach, where they were collected by Neanderthals.
These findings join a growing list of evidence that Neanderthals in Western Europe were in the practice of wading or diving into coastal waters to collect resources long before Homo sapiens brought these habits to the region. The authors also note that shell tools were abundant in sediment layers that had few stone tools, suggesting Neanderthals might have turned to making shell tools during times where more typical stone materials were scarce (though it's also possible that clam shells were used because they have a thin and sharp cutting edge, which can be maintained through re-sharpening, unlike flint tools).
Read more at Science Daily
Scientists unexpectedly witness wolf puppies play fetch
Image: Wolf pups
The findings were made serendipitously when researchers tested 13 wolf puppies from three different litters in a behavioral test battery designed to assess various behaviors in young dog puppies. During this series of tests, three 8-week-old wolf puppies spontaneously showed interest in a ball and returned it to a perfect stranger upon encouragement. The discovery comes as a surprise because it had been hypothesized that the cognitive abilities necessary to understand cues given by a human, such as those required for a game of fetch, arose in dogs only after humans domesticated them at least 15,000 years ago.
"When I saw the first wolf puppy retrieving the ball I literally got goose bumps," says Christina Hansen Wheat of Stockholm University, Sweden. "It was so unexpected, and I immediately knew that this meant that if variation in human-directed play behavior exists in wolves, this behavior could have been a potential target for early selective pressures exerted during dog domestication."
Hansen Wheat is interested in understanding how domestication affects behavior. To study this, she and her team raise wolf and dog puppies from the age of 10 days and put them through various behavioral tests. In one of those tests, a person the pup does not know throws a tennis ball across the room and, without the benefit of any prior experience or training, encourages the puppy to get it and bring it back.
The researchers never really expected wolf pups to catch on. In fact, the first two wolf litters they worked with showed little to no interest in balls, let alone retrieving one. They thought little of it at the time; it was what they would have expected, after all. That is, until they tested the third wolf litter and some of the puppies not only went for the ball, but also responded to the social cues given by the unfamiliar person and brought it back.
"It was very surprising that we had wolves actually retrieving the ball," says Hansen Wheat. "I did not expect that. I do not think any of us did. It was especially surprising that the wolves retrieved the ball for a person they had never met before."
Hansen Wheat adds that similarities between dogs and wolves can tell us something about where the behavior we see in our dogs comes from. And, while it was a surprise to see a wolf puppy playing fetch and connecting with a person in that way, she says, in retrospect, it also makes sense.
"Wolf puppies showing human-directed behavior could have had a selective advantage in early stages of dog domestication," she says.
Read more at Science Daily
Taking the temperature of dark matter
Image: Distant galaxies
We have very little idea of what dark matter is, and physicists have yet to detect a dark matter particle. But we do know that the gravity of clumps of dark matter can distort light from distant objects. Chris Fassnacht, a physics professor at UC Davis, and colleagues are using this distortion, called gravitational lensing, to learn more about the properties of dark matter.
The standard model for dark matter is that it is 'cold,' meaning that the particles move slowly compared to the speed of light, Fassnacht said. This is also tied to the mass of dark matter particles. The lower the mass of the particle, the 'warmer' it is and the faster it will move.
The model of cold (more massive) dark matter holds at very large scales, Fassnacht said, but doesn't work so well on the scale of individual galaxies. That's led to other models including 'warm' dark matter with lighter, faster-moving particles. 'Hot' dark matter with particles moving close to the speed of light has been ruled out by observations.
Former UC Davis graduate student Jen-Wei Hsueh, Fassnacht and colleagues used gravitational lensing to put a limit on the warmth and therefore the mass of dark matter. They measured the brightness of seven distant gravitationally lensed quasars to look for changes caused by additional intervening blobs of dark matter and used these results to measure the size of these dark matter lenses.
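To make the logic concrete (a toy sketch only; the numbers are invented and this is not the team's actual pipeline): each lensed quasar appears as multiple images, and if the observed image brightnesses deviate systematically from what a smooth lens model predicts, unseen small clumps of dark matter are the suspect.

```python
import numpy as np

# Hypothetical fluxes of the four images of one lensed quasar (arbitrary units)
observed     = np.array([1.00, 0.62, 0.55, 0.21])
smooth_model = np.array([1.00, 0.70, 0.48, 0.24])   # no-substructure prediction

# One simple anomaly statistic: rms deviation of the log flux ratios.
anomaly = np.sqrt(np.mean(np.log(observed / smooth_model) ** 2))
print(f"flux-ratio anomaly = {anomaly:.3f}")

# Across a sample of lenses, the size and frequency of such anomalies trace
# how much sub-galactic structure exists -- and since warmer (lighter) dark
# matter erases small structures, weak anomalies would favor warmer models.
```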
If dark matter particles are lighter, warmer and more rapidly moving, then they will not form structures below a certain size, Fassnacht said.
"Below a certain size, they would just get smeared out," he said.
The results put a lower limit on the mass of a potential dark matter particle while not ruling out cold dark matter, he said. The team's results represent a major improvement over a previous analysis, from 2002, and are comparable to recent results from a team at UCLA.
Fassnacht hopes to continue adding lensed objects to the survey to improve the statistical accuracy.
"We need to look at about 50 objects to get a good constraint on how warm dark matter can be," he said.
Read more at Science Daily
The mysterious, legendary giant squid's genome is revealed
How did the monstrous giant squid -- reaching school-bus size, with eyes as big as dinner plates and tentacles that can snatch prey 10 yards away -- get so scarily big?
Today, important clues about the anatomy and evolution of the mysterious giant squid (Architeuthis dux) are revealed through publication of its full genome sequence by a University of Copenhagen-led team that includes scientist Caroline Albertin of the Marine Biological Laboratory (MBL), Woods Hole.
Giant squid are rarely sighted and have never been caught and kept alive, meaning their biology (even how they reproduce) is still largely a mystery. The genome sequence can provide important insight.
"In terms of their genes, we found the giant squid look a lot like other animals. This means we can study these truly bizarre animals to learn more about ourselves," says Albertin, who in 2015 led the team that sequenced the first genome of a cephalopod (the group that includes squid, octopus, cuttlefish, and nautilus).
Led by Rute da Fonseca at the University of Copenhagen, the team discovered that the giant squid genome is big: with an estimated 2.7 billion DNA base pairs, it's about 90 percent the size of the human genome.
Albertin analyzed several ancient, well-known gene families in the giant squid, drawing comparisons with the four other cephalopod species that have been sequenced and with the human genome.
She found that important developmental genes in almost all animals (Hox and Wnt) were present in single copies only in the giant squid genome. That means this gigantic, invertebrate creature -- long a source of sea-monster lore -- did NOT get so big through whole-genome duplication, a strategy that evolution took long ago to increase the size of vertebrates.
So the question of how this squid species got so giant awaits further probing of its genome.
"A genome is a first step for answering a lot of questions about the biology of these very weird animals," Albertin said, such as how they acquired the largest brain among the invertebrates, their sophisticated behaviors and agility, and their incredible skill at instantaneous camouflage.
"While cephalopods have many complex and elaborate features, they are thought to have evolved independently of the vertebrates. By comparing their genomes we can ask, 'Are cephalopods and vertebrates built the same way or are they built differently?'" Albertin says.
Albertin also identified more than 100 genes in the protocadherin family -- typically not found in abundance in invertebrates -- in the giant squid genome.
"Protocadherins are thought to be important in wiring up a complicated brain correctly," she says. "They were thought they were a vertebrate innovation, so we were really surprised when we found more than 100 of them in the octopus genome (in 2015). That seemed like a smoking gun to how you make a complicated brain. And we have found a similar expansion of protocadherins in the giant squid, as well."
Lastly, she analyzed a gene family that (so far) is unique to cephalopods, called reflectins. "Reflectins encode a protein that is involved in making iridescence. Color is an important part of camouflage, so we are trying to understand what this gene family is doing and how it works," Albertin says.
Read more at Science Daily
Jan 15, 2020
Astronomers discover class of strange objects near our galaxy's enormous black hole
Astronomers from UCLA's Galactic Center Orbits Initiative have discovered a new class of bizarre objects at the center of our galaxy, not far from the supermassive black hole called Sagittarius A*. They published their research today in the journal Nature.
"These objects look like gas and behave like stars," said co-author Andrea Ghez, UCLA's Lauren B. Leichtman and Arthur E. Levine Professor of Astrophysics and director of the UCLA Galactic Center Group.
The new objects look compact most of the time and stretch out when their orbits bring them closest to the black hole. Their orbital periods range from about 100 to 1,000 years, said lead author Anna Ciurlo, a UCLA postdoctoral researcher.
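Those periods pin down how close the G objects are to the black hole. A quick consistency check with Kepler's third law (a sketch; the ~4 million solar mass value for Sagittarius A* is an assumed round number, not from the article):

```python
# Kepler's third law in convenient units: a^3 = M * T^2,
# with a in AU, T in years, and M in solar masses.
M_SGR_A = 4.0e6          # assumed mass of Sagittarius A* (solar masses)
AU_PER_PARSEC = 206_265

for period_years in (100, 1_000):
    a_au = (M_SGR_A * period_years ** 2) ** (1 / 3)
    print(f"P = {period_years:>5} yr -> a ~ {a_au:,.0f} AU "
          f"({a_au / AU_PER_PARSEC:.3f} pc)")

# ~3,400 AU for a 100-year orbit and ~16,000 AU (~0.08 pc) for a 1,000-year
# orbit -- the G objects live deep inside the black hole's sphere of influence.
```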
Ghez's research group identified an unusual object at the center of our galaxy in 2005, which was later named G1. In 2012, astronomers in Germany made a puzzling discovery of a bizarre object named G2 in the center of the Milky Way that made a close approach to the supermassive black hole in 2014. Ghez and her research team believe that G2 is most likely two stars that had been orbiting the black hole in tandem and merged into an extremely large star, cloaked in unusually thick gas and dust.
"At the time of closest approach, G2 had a really strange signature," Ghez said. "We had seen it before, but it didn't look too peculiar until it got close to the black hole and became elongated, and much of its gas was torn apart. It went from being a pretty innocuous object when it was far from the black hole to one that was really stretched out and distorted at its closest approach and lost its outer shell, and now it's getting more compact again."
"One of the things that has gotten everyone excited about the G objects is that the stuff that gets pulled off of them by tidal forces as they sweep by the central black hole must inevitably fall into the black hole," said co-author Mark Morris, UCLA professor of physics and astronomy. "When that happens, it might be able to produce an impressive fireworks show since the material eaten by the black hole will heat up and emit copious radiation before it disappears across the event horizon."
But are G2 and G1 outliers, or are they part of a larger class of objects? In answer to that question, Ghez's research group reports the existence of four more objects they are calling G3, G4, G5 and G6. The researchers have determined each of their orbits. While G1 and G2 have similar orbits, the four new objects have very different orbits.
Ghez believes all six objects were binary stars -- a system of two stars orbiting each other -- that merged because of the strong gravitational force of the supermassive black hole. The merging of two stars takes more than 1 million years to complete, Ghez said.
"Mergers of stars may be happening in the universe more often than we thought, and likely are quite common," Ghez said. "Black holes may be driving binary stars to merge. It's possible that many of the stars we've been watching and not understanding may be the end product of mergers that are calm now. We are learning how galaxies and black holes evolve. The way binary stars interact with each other and with the black hole is very different from how single stars interact with other single stars and with the black hole."
Ciurlo noted that while the gas from G2's outer shell got stretched dramatically, its dust inside the gas did not get stretched much. "Something must have kept it compact and enabled it to survive its encounter with the black hole," Ciurlo said. "This is evidence for a stellar object inside G2."
"The unique dataset that Professor Ghez's group has gathered during more than 20 years is what allowed us to make this discovery," Ciurlo said. "We now have a population of 'G' objects, so it is not a matter of explaining a 'one-time event' like G2."
The researchers made observations from the W.M. Keck Observatory in Hawaii and used a powerful technology that Ghez helped pioneer, called adaptive optics, which corrects the distorting effects of the Earth's atmosphere in real time. They conducted a new analysis of 13 years of their UCLA Galactic Center Orbits Initiative data.
In September 2019, Ghez's team reported that the black hole is getting hungrier and it is unclear why. The stretching of G2 in 2014 appeared to pull off gas that may recently have been swallowed by the black hole, said co-author Tuan Do, a UCLA research scientist and deputy director of the Galactic Center Group. The mergers of stars could feed the black hole.
The team has already identified a few other candidates that may be part of this new class of objects and is continuing to analyze them.
Ghez noted the center of the Milky Way galaxy is an extreme environment, unlike our less hectic corner of the universe.
"The Earth is in the suburbs compared to the center of the galaxy, which is some 26,000 light-years away," Ghez said. "The center of our galaxy has a density of stars 1 billion times higher than our part of the galaxy. The gravitational pull is so much stronger. The magnetic fields are more extreme. The center of the galaxy is where extreme astrophysics occurs -- the X-sports of astrophysics."
Ghez said this research will help to teach us what is happening in the majority of galaxies.
Other co-authors include Randall Campbell, an astronomer with the W.M. Keck Observatory in Hawaii; Aurelien Hees, a former UCLA postdoctoral scholar, now a researcher at the Paris Observatory in France; and Smadar Naoz, a UCLA assistant professor of physics and astronomy.
The research is funded by the National Science Foundation, W.M. Keck Foundation and Keck Visiting Scholars Program, the Gordon and Betty Moore Foundation, the Heising-Simons Foundation, Lauren Leichtman and Arthur Levine, Jim and Lori Keir, and Howard and Astrid Preston.
Read more at Science Daily
"These objects look like gas and behave like stars," said co-author Andrea Ghez, UCLA's Lauren B. Leichtman and Arthur E. Levine Professor of Astrophysics and director of the UCLA Galactic Center Group.
The new objects look compact most of the time and stretch out when their orbits bring them closest to the black hole. Their orbits range from about 100 to 1,000 years, said lead author Anna Ciurlo, a UCLA postdoctoral researcher.
Ghez's research group identified an unusual object at the center of our galaxy in 2005, which was later named G1. In 2012, astronomers in Germany made a puzzling discovery of a bizarre object named G2 in the center of the Milky Way that made a close approach to the supermassive black hole in 2014. Ghez and her research team believe that G2 is most likely two stars that had been orbiting the black hole in tandem and merged into an extremely large star, cloaked in unusually thick gas and dust.
"At the time of closest approach, G2 had a really strange signature," Ghez said. "We had seen it before, but it didn't look too peculiar until it got close to the black hole and became elongated, and much of its gas was torn apart. It went from being a pretty innocuous object when it was far from the black hole to one that was really stretched out and distorted at its closest approach and lost its outer shell, and now it's getting more compact again."
"One of the things that has gotten everyone excited about the G objects is that the stuff that gets pulled off of them by tidal forces as they sweep by the central black hole must inevitably fall into the black hole," said co-author Mark Morris, UCLA professor of physics and astronomy. "When that happens, it might be able to produce an impressive fireworks show since the material eaten by the black hole will heat up and emit copious radiation before it disappears across the event horizon."
But are G2 and G1 outliers, or are they part of a larger class of objects? In answer to that question, Ghez's research group reports the existence of four more objects they are calling G3, G4, G5 and G6. The researchers have determined each of their orbits. While G1 and G2 have similar orbits, the four new objects have very different orbits.
Ghez believes all six objects were binary stars -- a system of two stars orbiting each other -- that merged because of the strong gravitational force of the supermassive black hole. The merging of two stars takes more than 1 million years to complete, Ghez said.
"Mergers of stars may be happening in the universe more often than we thought, and likely are quite common," Ghez said. "Black holes may be driving binary stars to merge. It's possible that many of the stars we've been watching and not understanding may be the end product of mergers that are calm now. We are learning how galaxies and black holes evolve. The way binary stars interact with each other and with the black hole is very different from how single stars interact with other single stars and with the black hole."
Ciurlo noted that while the gas from G2's outer shell got stretched dramatically, its dust inside the gas did not get stretched much. "Something must have kept it compact and enabled it to survive its encounter with the black hole," Ciurlo said. "This is evidence for a stellar object inside G2."
"The unique dataset that Professor Ghez's group has gathered during more than 20 years is what allowed us to make this discovery," Ciurlo said. "We now have a population of 'G' objects, so it is not a matter of explaining a 'one-time event' like G2."
The researchers made observations from the W.M. Keck Observatory in Hawaii and used a powerful technology that Ghez helped pioneer, called adaptive optics, which corrects the distorting effects of the Earth's atmosphere in real time. They conducted a new analysis of 13 years of their UCLA Galactic Center Orbits Initiative data.
In September 2019, Ghez's team reported that the black hole is getting hungrier and it is unclear why. The stretching of G2 in 2014 appeared to pull off gas that may recently have been swallowed by the black hole, said co-author Tuan Do, a UCLA research scientist and deputy director of the Galactic Center Group. The mergers of stars could feed the black hole.
The team has already identified a few other candidates that may be part of this new class of objects, and are continuing to analyze them.
Ghez noted the center of the Milky Way galaxy is an extreme environment, unlike our less hectic corner of the universe.
"The Earth is in the suburbs compared to the center of the galaxy, which is some 26,000 light-years away," Ghez said. "The center of our galaxy has a density of stars 1 billion times higher than our part of the galaxy. The gravitational pull is so much stronger. The magnetic fields are more extreme. The center of the galaxy is where extreme astrophysics occurs -- the X-sports of astrophysics."
Ghez said this research will help to teach us what is happening in the majority of galaxies.
Other co-authors include Randall Campbell, an astronomer with the W.M. Keck Observatory in Hawaii; Aurelien Hees, a former UCLA postdoctoral scholar, now a researcher at the Paris Observatory in France; and Smadar Naoz, a UCLA assistant professor of physics and astronomy.
The research is funded by the National Science Foundation, W.M. Keck Foundation and Keck Visiting Scholars Program, the Gordon and Betty Moore Foundation, the Heising-Simons Foundation, Lauren Leichtman and Arthur Levine, Jim and Lori Keir, and Howard and Astrid Preston.
Read more at Science Daily
Air pollution from oil and gas production sites visible from space
Oil and gas production has doubled in some parts of the United States in the last two years, and scientists can use satellites to see impacts of that trend: a significant increase in the release of the lung-irritating air pollutant nitrogen dioxide, for example, and a more-than-doubling of the amount of gas flared into the atmosphere.
"We see the industry's growing impact from space," said Barbara Dix, a scientist at the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado Boulder and lead author of the new assessment published in the AGU journal Geophysical Research Letters. "We really are at the point where we can use satellite data to give feedback to companies and regulators, and see if they are successful in regulating emissions."
Dix and a team of U.S. and Dutch researchers set out to see if a suite of satellite-based instruments could help scientists understand more about nitrogen oxides pollution (including nitrogen dioxide) coming from engines in U.S. oil and gas fields. Combustion engines produce nitrogen oxides, which are respiratory irritants and can lead to the formation of other types of harmful air pollutants, such as ground-level ozone.
On oil and gas drilling and production sites, there may be several small and large combustion engines that drill wells, compress gas, separate liquids and gases, and move gas and oil through pipes and storage containers, said co-author Joost de Gouw, a CIRES Fellow and chemistry professor at CU Boulder. The emissions of those engines are not controlled. "Cars have catalytic converters, big industrial stacks may have emissions reduction equipment..." de Gouw said. "Not so with these engines."
Conventional "inventories" meant to account for nitrogen oxides pollution from oil and gas sites are often very uncertain, underestimating or overestimating the pollutants, de Gouw said. And there are few sustained measurements of nitrogen oxides in many of the rural areas where oil and gas development often takes place, Dix said.
So she, de Gouw and their colleagues turned to nitrogen dioxide data from the Ozone Monitoring Instrument (OMI) on board a NASA satellite and the TROPOspheric Monitoring Instrument (TROPOMI) on a European Space Agency satellite. They also looked at gas flaring data from an instrument on the NOAA/NASA Suomi satellite system.
Between 2007 and 2019, across much of the United States, nitrogen dioxide pollution levels dropped because of cleaner cars and power plants, the team found, confirming findings reported previously. The clean air trend in satellite data was most obvious in urban areas of California, Washington and Oregon and in the eastern half of the continental United States. "We've cleaned up our act a lot," Dix said.
However, several areas stood out with increased emissions of nitrogen dioxide: the Permian, Bakken and Eagle Ford oil and gas basins, in Texas and New Mexico, North Dakota, and Texas, respectively.
In those areas, the scientists used a type of time-series analysis to figure out where the pollutant was coming from: Drilling of new wells vs. longer-term production. They could do this kind of analysis because drilling activity swings up and down quickly in response to market forces while production changes far more slowly (once a well is drilled, it may produce oil and natural gas for years or even decades).
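The separation rests on those different timescales. As a rough sketch of the idea -- not the authors' actual method, and with made-up numbers -- one can regress the observed nitrogen dioxide signal on a fast-varying drilling proxy (rig count) and a slow-varying production series:

```python
import numpy as np

# Hypothetical monthly series for one basin -- illustration only, not real data.
rng = np.random.default_rng(0)
months = np.arange(48)
rig_count = 100 + 40 * np.sin(months / 6.0)     # drilling swings quickly
production = 50 + 2.0 * months                  # production changes slowly
no2 = 0.02 * rig_count + 0.01 * production + rng.normal(0, 0.05, months.size)

# Least-squares attribution: no2 ~ a * rig_count + b * production
X = np.column_stack([rig_count, production])
(a, b), *_ = np.linalg.lstsq(X, no2, rcond=None)

print(f"drilling share ~ {a * rig_count.mean() / no2.mean():.0%}")
print(f"production share ~ {b * production.mean() / no2.mean():.0%}")
```

Because the two predictor series vary on such different timescales, the fit can tell their contributions apart, which is the intuition behind the published analysis.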
Before a downturn in drilling in 2015, drilling generated about 80 percent of nitrogen dioxide from oil and gas sites, the team reported. After 2015, drilling and production produced roughly equal amounts of the pollutant. Flaring is estimated to contribute up to 10 percent in both time frames.
The researchers also developed a new oil and gas emissions inventory, using data on fuel use by the industry, the location of drilling rigs, and well-level production data. The inventory confirmed the satellite trends, said co-author Brian McDonald, a CIRES scientist working in NOAA's Chemical Sciences Division: "It is a promising development that what we observe from space can be explained by expected trends in emissions from the oil and gas industry."
Read more at Science Daily
"We see the industry's growing impact from space," said Barbara Dix, a scientist at the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado Boulder and lead author of the new assessment published in the AGU journal Geophysical Research Letters. "We really are at the point where we can use satellite data to give feedback to companies and regulators, and see if they are successful in regulating emissions."
Dix and a team of U.S. and Dutch researchers set out to see if a suite of satellite-based instruments could help scientists understand more about nitrogen oxides pollution (including nitrogen dioxide) coming from engines in U.S. oil and gas fields. Combustion engines produce nitrogen oxides, which is a respiratory irritant and can lead to the formation of other types of harmful air pollutants, such as ground-level ozone.
On oil and gas drilling and production sites, there may be several small and large combustion engines, drilling, compressing gas, separating liquids and gases, and moving gas and oil through pipes and storage containers, said co-author Joost de Gouw, a CIRES Fellow and chemistry professor at CU Boulder. The emissions of those engines are not controlled. "Cars have catalytic converters, big industrial stacks may have emissions reduction equipment..." de Gouw said. "Not so with these engines."
Conventional "inventories" meant to account for nitrogen oxides pollution from oil and gas sites are often very uncertain, underestimating or overestimating the pollutants, de Gouw said. And there are few sustained measurements of nitrogen oxides in many of the rural areas where oil and gas development often takes place, Dix said.
So she, de Gouw and their colleagues turned to nitrogen dioxide data from the Ozone Monitoring Instrument (OMI) on board a NASA satellite and the Tropospheric Monitoring Instrument (TropOMI) on a European Space Agency satellite. They also looked at gas flaring data from an instrument on the NOAA/NASA Suomi satellite system.
Between 2007 and 2019, across much of the United States, nitrogen dioxide pollution levels dropped because of cleaner cars and power plants, the team found, confirming findings reported previously. The clean air trend in satellite data was most obvious in urban areas of California, Washington and Oregon and in the eastern half of the continental United States. "We've cleaned up our act a lot," Dix said.
However, several areas stuck out with increased emissions of nitrogen dioxide: The Permian, Bakken and Eagle Ford oil and gas basins, in Texas and New Mexico, North Dakota, and Texas, respectively.
In those areas, the scientists used a type of time-series analysis to figure out where the pollutant was coming from: Drilling of new wells vs. longer-term production. They could do this kind of analysis because drilling activity swings up and down quickly in response to market forces while production changes far more slowly (once a well is drilled, it may produce oil and natural gas for years or even decades).
Before a downturn in drilling in 2015, drilling generated about 80 percent of nitrogen dioxide from oil and gas sites, the team reported. After 2015, drilling and production produced roughly equal amounts of the pollutant. Flaring is estimated to contribute up to 10 percent in both time frames.
The researchers also developed a new oil and gas emissions inventory, using data on fuel use by the industry, the location of drilling rigs, and well-level production data. The inventory confirmed the satellite trends, said co-author Brian McDonald, a CIRES scientist working in NOAA's Chemical Sciences Division, "It is a promising development that what we observe from space can be explained by expected trends in emissions from the oil and gas industry."
Read more at Science Daily
Beauty sleep could be real, say body clock biologists
Biologists from The University of Manchester have explained for the first time why having a good night's sleep really could prepare us for the rigours of the day ahead.
The study, conducted in mice and published in Nature Cell Biology, shows how the body clock mechanism boosts our ability to maintain our bodies when we are most active.
And because we know the body clock is less precise as we age, the discovery, argues lead author Professor Karl Kadler, may one day help unlock some of the mysteries of aging.
The discovery throws fascinating light on the body's extracellular matrix -- which provides structural and biochemical support to cells in the form of connective tissue such as bone, skin, tendon and cartilage.
Over half our body weight is matrix, and half of this is collagen -- and scientists have long understood it is fully formed by the time we reach the age of 17.
But now the researchers have discovered there are two types of fibrils -- the rope-like structures of collagen that are woven by the cells to form tissues.
Thicker fibrils measuring about 200 nanometres in diameter -- thousands of times thinner than a pinhead -- are permanent and stay with us throughout our lives, unchanged from the age of 17.
But thinner fibrils measuring 50 nanometres, they find, are sacrificial, breaking as we subject the body to the rigours of the day but replenishing when we rest at night.
The collagen was observed by mass spectrometry, and the mouse fibrils were observed using state-of-the-art volumetric electron microscopy -- funded by the Wellcome Trust -- every four hours over two days.
When the body clock genes were knocked out in mice, the thin and thick fibrils were amalgamated randomly.
"Collagen provides the body with structure and is our most abundant protein, ensuring the integrity, elasticity and strength of the body's connective tissue," said Professor Kadler.
"It's intuitive to think our matrix should be worn down by wear and tear, but it isn't and now we know why: our body clock makes an element which is sacrificial and can be replenished, protecting the permanent parts of the matrix.
He added: "So if you imagine the bricks in the walls of a room as the permanent part, the paint on the walls could be seen as the sacrificial part which needs to be replenished every so often.
"And just like you need to oil a car and keep its radiator topped up with water, these thin fibrils help maintain the body's matrix."
Read more at Science Daily
Researchers identify gene with functional role in aging of eye
A gene with a lengthy name -- Elongation of Very Long Chain Fatty Acids Protein 2, or ELOVL2 -- is an established biomarker of age. In a new paper, published online January 14, 2020 in the journal Aging Cell, researchers at University of California San Diego School of Medicine say the gene appears to play a key role in age-associated functional and anatomical aging in vivo in mouse retinas, a finding that has direct relevance to age-related eye diseases.
Specifically, the research team, led by senior author Dorota Skowronska-Krawczyk, PhD, assistant professor in the Viterbi Family Department of Ophthalmology at UC San Diego Shiley Eye Institute, found that an age-related decrease in ELOVL2 gene expression was associated with increased DNA methylation of its promoter. Methylation is a biochemical process in which a methyl group -- one carbon atom bonded to three hydrogen atoms -- is attached to another molecule. In the case of DNA, methylation of regulatory regions suppresses expression of the gene.
When researchers reversed hypermethylation in vivo, they boosted ELOVL2 expression and rescued age-related decline in visual function in mice. "These findings indicate that ELOVL2 actively regulates aging in mouse retina, provides a molecular link between polyunsaturated fatty acids elongation and visual functions, and suggests novel therapeutic strategies for treatment of age-related eye diseases," wrote the authors.
ELOVL2 is involved in production of long-chain omega-3 and omega-6 polyunsaturated fatty acids, which are used in several crucial biological functions, such as energy production, inflammation response and maintenance of cell membrane integrity. The gene is found in humans as well as mice.
In particular, ELOVL2 regulates levels of docosahexaenoic acid or DHA, a polyunsaturated omega-3 fatty acid abundantly found in the brain and retina. DHA is associated with a number of beneficial effects. Notably, its presence in photoreceptors in eyes promotes healthy retinal function, protects against damage from bright light or oxidative stress and has been linked to improving a variety of vision conditions, from age-related macular degeneration (AMD) to diabetic eye disease and dry eyes.
Skowronska-Krawczyk said the work demonstrated for the first time that a "methylation clock" gene had a functional role in the aging of an organ -- in this case, the eye. DNA methylation is used throughout the human body, essentially turning biological switches on and off to maximize efficient operation. It has key regulatory roles in the body's cardiovascular, neurological, reproductive and detoxification systems.
In recent years, there has been much work and progress in identifying possible biomarkers that predict the biological age (not chronological) of individuals. Such biomarkers would be useful in identifying risk and status of age-related diseases. ELOVL2 is among the genes attracting greatest interest.
"I have been asked whether I think ELOVL2 is the aging gene," said Skowronska-Krawczyk. "After thinking about it, it is not unreasonable to think that lower ELOVL2 expression might be at the basis for many age-related conditions. Future work in our lab will address that question."
Read more at Science Daily
Jan 14, 2020
Connecting the dots in the sky could shed new light on dark matter
Astrophysicists have come a step closer to understanding the origin of a faint glow of gamma rays covering the night sky. They found that this light is brighter in regions that contain a lot of matter and dimmer where matter is sparser -- a correlation that could help them narrow down the properties of exotic astrophysical objects and invisible dark matter.
The glow, known as unresolved gamma-ray background, stems from sources that are so faint and far away that researchers can't identify them individually. Yet, the fact that the locations where these gamma rays originate match up with where mass is found in the distant universe could be a key puzzle piece in identifying those sources.
"The background is the sum of a lot of things 'out there' that produce gamma rays. Having been able to measure for the first time its correlation with gravitational lensing -- tiny distortions of images of far galaxies produced by the distribution of matter -- helps us disentangle them," said Simone Ammazzalorso from the University of Turin and the National Institute for Nuclear Physics (INFN) in Italy, who co-led the analysis.
The study used one year of data from the Dark Energy Survey (DES), which takes optical images of the sky, and nine years of data from the Fermi Gamma-ray Space Telescope, which observes cosmic gamma rays while it orbits the Earth.
"What's really intriguing is that the correlation we measured doesn't completely match our expectations," said Panofsky fellow Daniel Gruen from the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at the Department of Energy's SLAC National Accelerator Laboratory and Stanford University, who led the analysis for the DES collaboration. "This could mean that we either need to adjust our existing models for objects that emit gamma rays, or it could hint at other sources, such as dark matter."
The study was accepted today for publication in Physical Review Letters.
Two sensitive 'eyes' on the sky
Gamma radiation, the most energetic form of light, is produced in a wide range of cosmic phenomena -- often extremely violent ones, such as exploding stars, dense neutron stars rotating at high speeds and powerful beams of particles shooting out of active galaxies whose central supermassive black holes gobble up matter.
Another potential source is invisible dark matter, which is believed to make up 85 percent of all matter in the universe. It could produce gamma rays when dark matter particles meet and destroy each other in space.
The Large Area Telescope (LAT) onboard the Fermi spacecraft is a highly sensitive "eye" for gamma radiation, and its data provide a detailed map of gamma-ray sources in the sky.
But when scientists subtract all the sources they already know, their map is far from empty; it still contains a gamma-ray background whose brightness varies from region to region.
"Unfortunately gamma rays don't have a label that would tell us where they came from," Gruen said. "That's why we need additional information to unravel their origin."
That's where DES comes in. With its 570-megapixel Dark Energy Camera, mounted on the Victor M. Blanco 4-meter Telescope at the Cerro Tololo Inter-American Observatory in Chile, it snaps images of hundreds of millions of galaxies. Their exact shapes tell researchers how the gravitational pull of matter bends light in the universe -- an effect that shows itself as tiny distortions in galaxy images, known as weak gravitational lensing. Based on these data, the DES researchers create the most detailed maps yet of matter in the cosmos.
In the new study, the scientists superimposed the Fermi and DES maps, which revealed that the two aren't independent. The unresolved gamma-ray background is more intense in regions with more matter and less intense in regions with less matter.
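At its simplest, testing whether two pixelized sky maps are independent amounts to a cross-correlation. Below is a toy version of that idea, assuming two same-shape arrays of gamma-ray intensity and lensing-derived matter density; the collaborations' real pipeline works with angular power spectra, masks and the survey geometry, none of which this sketch attempts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy maps on the same pixel grid: a shared "matter" field plus noise.
matter = rng.normal(size=(256, 256))
gamma_map = 0.3 * matter + rng.normal(size=matter.shape)   # gamma-ray background
lensing_map = matter + 0.5 * rng.normal(size=matter.shape) # weak-lensing mass map

def cross_corr(a, b):
    """Pearson correlation of two maps' pixel values."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

print(f"gamma x lensing correlation: {cross_corr(gamma_map, lensing_map):.3f}")
```

A positive value, as in the study, means the two maps brighten and dim together more often than chance would allow.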
"The result itself is not surprising. We expect that there are more gamma ray producing processes in regions that contain more matter, and we've been predicting this correlation for a while," said Nicolao Fornengo, one of Ammazzalorso's supervisors in Turin. "But now we've succeeded in actually detecting this correlation for the first time, and we can use it to understand what causes the gamma ray background."
Potential hint at dark matter
One of the most likely sources for the gamma-ray glow is very distant blazars -- active galaxies with supermassive black holes at their centers. As the black holes swallow surrounding matter, they spew high-speed jets of plasma and gamma rays that, if the jets point at us, are detected by the Fermi spacecraft.
Blazars would be the simplest assumption, but the new data suggest that a simple population of blazars might not be enough to explain the observed correlation between gamma rays and mass distribution, the researchers said.
"In fact, our models for emissions from blazars can fairly well explain the low-energy part of the correlation, but we see deviations for high-energy gamma rays," Gruen said. "This can mean several things: It could indicate that we need to improve our models for blazars or that the gamma rays could come from other sources."
One of these other sources could be dark matter. A leading theory predicts the mysterious stuff is made of weakly interacting massive particles, or WIMPs, which could annihilate each other in a flash of gamma rays when they collide. Gamma rays from certain matter-rich cosmic regions could therefore stem from these particle interactions.
The idea to look for gamma-ray signatures of annihilating WIMPs is not a new one. Over the past years, scientists have searched for them in various locations believed to contain a lot of dark matter, including the center of the Milky Way and the Milky Way's companion galaxies. However, these searches haven't produced identifiable dark matter signals yet. The new results could be used for additional searches that test the WIMP hypothesis.
Planning next steps
Although the probability that the measured correlation is just a random effect is only about one in a thousand, the researchers need more data for a conclusive analysis.
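For scale, a chance probability of about one in a thousand corresponds to roughly a 3-sigma result -- suggestive, but short of the 5-sigma standard usually demanded for a discovery in physics. A quick conversion (the one-sided convention here is our assumption, not stated in the paper):

```python
from scipy.stats import norm

p_chance = 1e-3                 # probability the correlation is a random effect
sigma = norm.isf(p_chance)      # convert a one-sided tail probability to sigma
print(f"p = {p_chance} is about {sigma:.1f} sigma")        # ~3.1 sigma

# The conventional discovery threshold, for comparison:
print(f"5 sigma corresponds to p = {norm.sf(5.0):.1e}")    # ~2.9e-07
```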
"These results, connecting for the first time our maps of gamma rays and matter, are very interesting and have a lot of potential, but at the moment the connection is still relatively weak, and one has to interpret the data carefully," said KIPAC Director Risa Wechsler, who was not involved in the study.
One of the main limitations of the current analysis is the amount of available lensing data, Gruen said. "With data from 40 million galaxies, DES has already pushed this to a new level, and that's why we were able to do the analysis in the first place. But we need even better measurements," he said.
With its next data release, DES will provide lensing data for 100 million galaxies, and the future Large Synoptic Survey Telescope (LSST) will look at billions of galaxies in a much larger region of the sky.
"Our study demonstrates with actual data that we can use the correlation between the distributions of matter and gamma rays to learn more about what causes the gamma-ray background," Fornengo said. "With more DES data, LSST coming online and other projects like the Euclid space telescope on the horizon, we'll be able to go much deeper in our understanding of the potential sources."
Read more at Science Daily
X-rays and gravitational waves will combine to illuminate massive black hole collisions
A new study by a group of researchers at the University of Birmingham has found that collisions of supermassive black holes may be simultaneously observable in both gravitational waves and X-rays at the beginning of the next decade.
The European Space Agency (ESA) has recently announced that its two major space observatories of the 2030s will have their launches timed for simultaneous use. These missions -- Athena, the next-generation X-ray space telescope, and LISA, the first space-based gravitational wave observatory -- will be coordinated to begin observing within a year of each other and are likely to have at least four years of overlapping science operations.
According to the new study, published this week in Nature Astronomy, ESA's decision will give astronomers an unprecedented opportunity to produce multi-messenger maps of some of the most violent cosmic events in the Universe, which have not been observed so far and which lie at the heart of long-standing mysteries surrounding the evolution of the Universe.
They include the collision of supermassive black holes in the core of galaxies in the distant universe and the "swallowing up" of stellar compact objects such as neutron stars and black holes by massive black holes harboured in the centres of most galaxies.
The gravitational waves measured by LISA will pinpoint the ripples of space time that the mergers cause while the X-rays observed with Athena reveal the hot and highly energetic physical processes in that environment. Combining these two messengers to observe the same phenomenon in these systems would bring a huge leap in our understanding of how massive black holes and galaxies co-evolve, how massive black holes grow their mass and accrete, and the role of gas around these black holes.
These are some of the big unanswered questions in astrophysics that have puzzled scientists for decades.
Dr Sean McGee, Lecturer in Astrophysics at the University of Birmingham and a member of both the Athena and LISA consortiums, led the study. He said, "The prospect of simultaneous observations of these events is uncharted territory, and could lead to huge advances. This promises to be a revolution in our understanding of supermassive black holes and how they grow within galaxies."
Professor Alberto Vecchio, Director of the Institute for Gravitational Wave Astronomy, University of Birmingham, and a co-author on the study, said: "I have worked on LISA for twenty years and the prospect of combining forces with the most powerful X-ray eyes ever designed to look right at the centre of galaxies promises to make this long haul even more rewarding. It is difficult to predict exactly what we're going to discover: we should just buckle up, because it is going to be quite a ride."
During the life of the missions, there may be as many as 10 mergers of black holes with masses of 100,000 to 10,000,000 times the mass of the sun that have signals strong enough to be observed by both observatories. However, given our current lack of understanding of the physics occurring during these mergers, and of how frequently they occur, the observatories could observe many more or many fewer of these events. Indeed, these are questions which will be answered by the observations.
Read more at Science Daily
Long-term memory performance depends upon gating system
Storing and retrieving memories is among the most important tasks our intricate brains must perform, yet how that happens at a molecular level remains incompletely understood. A new study from the lab of Neuroscience Professor Ronald Davis, PhD, at Scripps Research, Florida, sheds light on one element of that memory storage process, namely the storage and retrieval of a type of hardwired long-term memory.
The Davis team found that moving memories to long-term storage involves the interplay of multiple genes: a known group whose activity must be upregulated, and, unexpectedly, another gatekeeping gene set, Ras, and its downstream connecting molecules, which are down-regulated. If either Ras or its downstream connector Raf is silenced, long-term memory storage is eliminated, the team writes in the Proceedings of the National Academy of Sciences, published the week of Jan. 13.
The type of memory they studied, ironically, has a rather difficult-to-remember name: "protein-synthesis dependent long-term memory," or PSD-LTM for short. To study how it and other types of memory form, scientists rely upon the fruit fly, Drosophila melanogaster, as a model organism. The genetic underpinnings of memory storage are mostly conserved across species, Davis explains.
To assess how the flies' memory consolidation process works at a molecular level, they used a process called RNA interference to lower expression of several candidate genes in several areas of the fly brain. Doing so with both the Ras gene and its downstream molecule Raf in the fly brain's mushroom body, its memory-storage area, had a two-pronged effect. It dramatically enhanced intermediate-term memories while completely eliminating PSD long-term memory of an aversive experience, Davis says.
The team's experiments involved exposing flies to certain odors in one section of a glass tube while simultaneously administering a foot-shock. Flies' subsequent avoidant behavior on exposure to that odor indicated their recollection of the unpleasant shock. Regardless of how many times the flies were "trained," lowering expression of Ras and Raf reduced their PSD long-term memory performance, explains first author Nathaniel Noyes, PhD, a research associate in the Davis lab.
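Avoidance in assays like this is typically summarized as a performance index, scored so that zero means no memory and one means perfect avoidance. The function below shows that standard metric with hypothetical counts; it is illustrative, not necessarily the exact statistic used in this paper:

```python
def performance_index(n_avoiding: int, n_approaching: int) -> float:
    """T-maze memory score: 1.0 means every fly avoids the shock-paired
    odor, 0.0 means a 50/50 split (no memory)."""
    total = n_avoiding + n_approaching
    return (n_avoiding - n_approaching) / total

# Hypothetical counts: 80 of 100 trained flies avoid the odor.
print(performance_index(80, 20))   # 0.6
```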
While the Ras enzyme, Ras85D, was already known for its roles in organ development and cancer, the studies showed that in the adult brain, it apparently plays memory gatekeeper, helping direct whether experiences should be remembered as intermediate memory that dissipates after a time, or as long-term "protein-synthesis dependent" memory that persists.
Gating the memory away from the intermediate storage process shifted it over to PSD long-term memory storage, indicating that it's an either-or situation. Intermediate storage appears to be the fly brain's preferential, default pathway, Noyes says. He expects that the neurotransmitter dopamine will prove to play a key signaling role.
"We believe that dopamine signals to the brain that this memory is important enough to be stored long-term. We speculate that Ras and Raf receive this dopamine signal and thereby block intermediate memory and promote PSD long-term memory," Noyes says.
How this "intermediate" memory system works in humans requires further study as well, he adds.
Read more at Science Daily
Meteorite contains the oldest material on Earth: 7-billion-year-old stardust
Illustration of meteor entering Earth's atmosphere
"This is one of the most exciting studies I've worked on," says Philipp Heck, a curator at the Field Museum, associate professor at the University of Chicago, and lead author of a paper describing the findings in the Proceedings of the National Academy of Sciences. "These are the oldest solid materials ever found, and they tell us about how stars formed in our galaxy."
The materials Heck and his colleagues examined are called presolar grains -- minerals formed before the Sun was born. "They're solid samples of stars, real stardust," says Heck. These bits of stardust became trapped in meteorites, where they remained unchanged for billions of years, making them time capsules from the era before the solar system.
But presolar grains are hard to come by. They're rare, found only in about five percent of meteorites that have fallen to Earth, and they're tiny -- a hundred of the biggest ones would fit on the period at the end of this sentence. But the Field Museum has the largest portion of the Murchison meteorite, a treasure trove of presolar grains that fell in Australia in 1969 and that the people of Murchison, Victoria, made available to science. Presolar grains for this study were isolated from the Murchison meteorite about 30 years ago at the University of Chicago.
"It starts with crushing fragments of the meteorite down into a powder ," explains Jennika Greer, a graduate student at the Field Museum and the University of Chicago and co-author of the study. "Once all the pieces are segregated, it's a kind of paste, and it has a pungent characteristic-it smells like rotten peanut butter."
This "rotten-peanut-butter-meteorite paste" was then dissolved with acid, until only the presolar grains remained. "It's like burning down the haystack to find the needle," says Heck.
Once the presolar grains were isolated, the researchers figured out from what types of stars they came and how old they were. "We used exposure age data, which basically measures their exposure to cosmic rays, which are high-energy particles that fly through our galaxy and penetrate solid matter," explains Heck. "Some of these cosmic rays interact with the matter and form new elements. And the longer they get exposed, the more those elements form.
"I compare this with putting out a bucket in a rainstorm. Assuming the rainfall is constant, the amount of water that accumulates in the bucket tells you how long it was exposed," he adds. By measuring how many of these new cosmic-ray produced elements are present in a presolar grain, we can tell how long it was exposed to cosmic rays, which tells us how old it is.
The researchers learned that some of the presolar grains in their sample were the oldest ever discovered -- based on how many cosmic rays they'd soaked up, most of the grains had to be 4.6 to 4.9 billion years old, and some grains were even older than 5.5 billion years. For context, our Sun is 4.6 billion years old, and Earth is 4.5 billion.
But the age of the presolar grains wasn't the end of the discovery. Since presolar grains are formed when a star dies, they can tell us about the history of stars. And 7 billion years ago, there was apparently a bumper crop of new stars forming -- a sort of astral baby boom.
"We have more young grains that we expected," says Heck. "Our hypothesis is that the majority of those grains, which are 4.9 to 4.6 billion years old, formed in an episode of enhanced star formation. There was a time before the start of the Solar System when more stars formed than normal."
This finding is ammo in a debate between scientists about whether new stars form at a steady rate or whether the rate rises and falls over time. "Some people think that the star formation rate of the galaxy is constant," says Heck. "But thanks to these grains, we now have direct evidence for a period of enhanced star formation in our galaxy seven billion years ago with samples from meteorites. This is one of the key findings of our study."
Heck notes that this isn't the only unexpected thing his team found. As almost a side note to the main research questions, in examining the way that the minerals in the grains interacted with cosmic rays, the researchers also learned that presolar grains often float through space stuck together in large clusters, "like granola," says Heck. "No one thought this was possible at that scale."
Heck and his colleagues look forward to all of these discoveries furthering our knowledge of our galaxy. "With this study, we have directly determined the lifetimes of stardust. We hope this will be picked up and studied so that people can use this as input for models of the whole galactic life cycle," he says.
Heck notes that there are lifetimes' worth of questions left to answer about presolar grains and the early Solar System. "I wish we had more people working on it to learn more about our home galaxy, the Milky Way," he says.
"Once learning about this, how do you want to study anything else?" says Greer. "It's awesome, it's the most interesting thing in the world."
"I always wanted to do astronomy with geological samples I can hold in my hand," says Heck. "It's so exciting to look at the history of our galaxy. Stardust is the oldest material to reach Earth, and from it, we can learn about our parent stars, the origin of the carbon in our bodies, the origin of the oxygen we breathe. With stardust, we can trace that material back to the time before the Sun."
"It's the next best thing to being able to take a sample directly from a star," says Greer.
Read more at Science Daily
Jan 13, 2020
Stars need a partner to spin universe's brightest explosions
When it comes to the biggest and brightest explosions seen in the Universe, University of Warwick astronomers have found that it takes two stars to make a gamma-ray burst.
New research solves the mystery of how stars spin fast enough to create conditions to launch a jet of highly energetic material into space, and has found that tidal effects like those between the Moon and the Earth are the answer.
The discovery, reported in Monthly Notices of the Royal Astronomical Society, was made using simulated models of thousands of binary star systems -- that is, systems in which two stars orbit one another.
More than half of all stars are located in binary systems, and this new research has shown that a star must be in a binary for these massive explosions to be created.
A long gamma-ray burst (GRB), the type examined in this study, occurs when a massive star about ten times the size of our sun goes supernova, collapses into a neutron star or black hole and fires a relativistic jet of material into space. Instead of the star collapsing radially inwards, it flattens down into a disc to conserve angular momentum. As the material falls inwards, that angular momentum launches it in the form of a jet along the polar axis.
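The angular-momentum bookkeeping behind this is simple: if the collapsing core keeps its mass, its spin rate grows as the inverse square of its radius. A back-of-the-envelope illustration, treating the core as a uniform sphere with deliberately idealized, hypothetical numbers:

```python
# omega scales as 1/R**2 for a collapsing, mass-conserving uniform sphere
# (L = I * omega, with I proportional to M * R**2). Hypothetical numbers.
r_core_km = 1.0e4            # pre-collapse core radius
r_remnant_km = 20.0          # neutron-star radius
period_before_s = 86400.0    # one rotation per day before collapse

spin_up = (r_core_km / r_remnant_km) ** 2
period_after_s = period_before_s / spin_up
print(f"spin-up ~ {spin_up:.0f}x -> period ~ {period_after_s:.2f} s")
```

Even a leisurely one-day rotation becomes sub-second after collapse, which is why the key question is not the collapse itself but whether the star retains any spin to amplify.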
But in order to form that jet of material, the star has to be spinning fast enough to launch material along the axis. This presents a problem because stars usually lose any spin they acquire very quickly. By modelling the behaviour of these massive stars as they collapse, the researchers have been able to constrain the factors that cause a jet to be formed.
They found that the effects of tides from a close neighbour -- the same effect that has the Moon and the Earth locked together in their spin -- could be responsible for spinning these stars at the rate needed to create a gamma-ray burst.
Gamma-ray bursts are the most luminous events in the Universe and are observable from Earth when their jet of material is pointed directly at us. This means that we only see around 10-20% of the GRBs in our skies.
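That 10-20% figure follows from jet geometry: a pair of opposed cones with half-opening angle theta covers a fraction 1 - cos(theta) of the sky, which is the chance a randomly oriented burst points at us. The conversion below is our illustrative arithmetic, not a calculation from the paper:

```python
import math

# Two opposed jet cones of half-opening angle theta cover a fraction
# (1 - cos theta) of the full sky.
def opening_angle_deg(observable_fraction: float) -> float:
    return math.degrees(math.acos(1.0 - observable_fraction))

print(f"10% observable -> theta ~ {opening_angle_deg(0.10):.0f} deg")  # ~26
print(f"20% observable -> theta ~ {opening_angle_deg(0.20):.0f} deg")  # ~37
```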
Lead author Ashley Chrimes, a PhD student in the University of Warwick Department of Physics, said: "We're predicting what kind of stars or systems produce gamma-ray bursts, which are the biggest explosions in the Universe. Until now it's been unclear what kind of stars or binary systems you need to produce that result.
"The question has been how a star starts spinning, or maintains its spin over time. We found that the effect of a star's tides on its partner is stopping them from slowing down and, in some cases, it is spinning them up. They are stealing rotational energy from their companion, a consequence of which is that they then drift further away.
"What we have determined is that the majority of stars are spinning fast precisely because they're in a binary system."
The study uses a collection of binary stellar evolution models created by researchers from the University of Warwick and Dr J J Eldridge from the University of Auckland. Using a technique called binary population synthesis, the scientists are able to simulate this mechanism in a population of thousands of star systems and so identify the rare examples where an explosion of this type can occur.
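In spirit, binary population synthesis is a Monte Carlo survey: draw many systems from assumed distributions of mass and separation, evolve each with simplified physics, and count the rare ones that satisfy the conditions for a burst. The sketch below is heavily simplified, with made-up draw distributions and selection cuts standing in for the real stellar-evolution models:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Draw hypothetical binaries: primary mass (solar masses, rough power-law
# tail) and orbital separation (log-uniform in AU). Illustration only.
mass = 5.0 + 5.0 * rng.pareto(1.35, n)
separation_au = 10.0 ** rng.uniform(-1.0, 3.0, n)

# Made-up progenitor criteria standing in for the real physics:
# massive enough to collapse, close enough for tidal spin-up.
candidates = (mass > 10.0) & (separation_au < 0.5)

print(f"GRB progenitor candidates: {candidates.mean():.2%} of binaries")
```

The published models track full binary stellar evolution rather than two threshold cuts, but the counting logic -- rare outcomes identified across a large simulated population -- is the same.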
Dr Elizabeth Stanway, from the University of Warwick Department of Physics, said: "Scientists haven't modelled binary evolution in detail in the past because it's a very complex calculation to do. This work has considered a physical mechanism within those models that we haven't examined before, and it suggests that binaries can produce enough GRBs via this method to explain the number that we are observing.
"There has also been a big dilemma over the metallicity of stars that produce gamma-ray bursts. As astronomers, we measure the composition of stars and the dominant pathway for gamma-ray bursts requires very few iron atoms or other heavy elements in the stellar atmosphere. There's been a puzzle over why we see a variety of compositions in the stars producing gamma-ray bursts, and this model offers an explanation."
Read more at Science Daily
TESS dates an ancient collision with our galaxy
A single bright star in the constellation of Indus, visible from the southern hemisphere, has revealed new insights into an ancient collision that our galaxy, the Milky Way, underwent with a smaller galaxy called Gaia-Enceladus early in its history.
An international team of scientists led by the University of Birmingham adopted the novel approach of applying the forensic characterisation of a single ancient, bright star called ν Indi as a probe of the history of the Milky Way. Stars carry "fossilized records" of their histories and hence the environments in which they formed. The team used data from satellites and ground-based telescopes to unlock this information from ν Indi. Their results are published in the journal Nature Astronomy.
The star was aged using its natural oscillations (asteroseismology), detected in data collected by NASA's recently launched Transiting Exoplanet Survey Satellite (TESS). Launched in 2018, TESS is surveying stars across most of the sky to search for planets orbiting the stars and to study the stars themselves. When combined with data from the European Space Agency (ESA) Gaia Mission, the detective story revealed that this ancient star was born early in the life of the Milky Way, but the Gaia-Enceladus collision altered its motion through our Galaxy.
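Asteroseismic "ageing" of a star typically starts from the standard scaling relations, which convert the frequency of maximum oscillation power (ν_max) and the large frequency separation (Δν) into a mass and radius that stellar models then translate into an age. The sketch below uses those well-known solar-calibrated relations; the input values are placeholders for a generic evolved subgiant, not the published ν Indi measurements.

```python
# Standard asteroseismic scaling relations: a star's mass and radius
# follow from nu_max (frequency of maximum oscillation power), delta_nu
# (large frequency separation) and its effective temperature T_eff.
NU_MAX_SUN = 3090.0    # muHz, solar reference
DELTA_NU_SUN = 135.1   # muHz, solar reference
TEFF_SUN = 5777.0      # K, solar reference

def seismic_mass_radius(nu_max, delta_nu, teff):
    """Return (M/Msun, R/Rsun) from the solar-scaled relations."""
    m = ((nu_max / NU_MAX_SUN) ** 3
         * (delta_nu / DELTA_NU_SUN) ** -4
         * (teff / TEFF_SUN) ** 1.5)
    r = ((nu_max / NU_MAX_SUN)
         * (delta_nu / DELTA_NU_SUN) ** -2
         * (teff / TEFF_SUN) ** 0.5)
    return m, r

# Placeholder inputs for an evolved subgiant -- NOT the published
# nu Indi values.
m, r = seismic_mass_radius(nu_max=320.0, delta_nu=25.0, teff=5300.0)
print(f"M ~ {m:.2f} Msun, R ~ {r:.2f} Rsun")
```

A mass and radius pinned down this way can then be matched against stellar evolution tracks to yield the age estimate the study relies on.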
Bill Chaplin, Professor of Astrophysics at the University of Birmingham and lead author of the study, said: "Since the motion of ν Indi was affected by the Gaia-Enceladus collision, the collision must have happened once the star had formed. That is how we have been able to use the asteroseismically-determined age to place new limits on when the Gaia-Enceladus event occurred."
Co-author Dr Ted Mackereth, also from Birmingham, said: "Because we see so many stars from Gaia-Enceladus, we think it must have had a large impact on the evolution of our Galaxy. Understanding that is now a very hot topic in astronomy, and this study is an important step in understanding when this collision occurred."
Bill Chaplin added: "This study demonstrates the potential of asteroseismology with TESS, and what is possible when one has a variety of cutting-edge data available on a single, bright star."
The research clearly shows the strong potential of the TESS programme to draw together rich new insights about the stars that are our closest neighbours in the Milky Way. The research was funded by the Science and Technology Facilities Council and the European Research Council through the Asterochronometry project.
From Science Daily
How the solar system got its 'Great Divide,' and why it matters for life on Earth
Illustration of inner solar system. |
In a study published today in Nature Astronomy, researchers from the United States and Japan unveil the possible origins of our cosmic neighborhood's "Great Divide." This well-known schism may have separated the solar system just after the sun first formed.
The phenomenon is a bit like how the Rocky Mountains divide North America into east and west. On one side are the "terrestrial" planets, such as Earth and Mars. They are made up of fundamentally different types of materials than the more distant "jovians," such as Jupiter and Saturn.
"The question is: How do you create this compositional dichotomy?" said lead author Ramon Brasser, a researcher at the Earth-Life Science Institute (ELSI) at the Tokyo Institute of Technology in Japan. "How do you ensure that material from the inner and outer solar system didn't mix from very early on in its history?"
Brasser and coauthor Stephen Mojzsis, a professor in CU Boulder's Department of Geological Sciences, think they have the answer, and it may just shed new light on how life originated on Earth.
A sun disk holds vital clues
The duo suggests that the early solar system was partitioned into at least two regions by a ring-like structure that formed a disk around the young sun. This disk might have held major implications for the evolution of planets and asteroids, and even the history of life on Earth.
"The most likely explanation for that compositional difference is that it emerged from an intrinsic structure of this disk of gas and dust," Mojzsis said.
Mojzsis noted that the Great Divide, a term that he and Brasser coined, does not look like much today. It is a relatively empty stretch of space that sits near Jupiter, just beyond what astronomers call the asteroid belt.
But you can still detect its presence throughout the solar system. Move sunward from that line, and most planets and asteroids tend to carry relatively low abundances of organic molecules. Go the other direction toward Jupiter and beyond, however, and a different picture emerges: Almost everything in this distant part of the solar system is made up of materials that are rich in carbon.
This dichotomy "was really a surprise when it was first found," Mojzsis said.
Many scientists assumed that Jupiter was the agent responsible for that surprise. The thinking went that the planet is so massive that it may have acted as a gravitational barrier, preventing pebbles and dust from the outer solar system from spiraling toward the sun.
But Mojzsis and Brasser were not convinced. The scientists used a series of computer simulations to explore Jupiter's role in the evolving solar system. They found that while Jupiter is big, it was probably never big enough early in its formation to entirely block the flow of rocky material from moving sunward.
"We banged our head against the wall," Brasser said. "If Jupiter wasn't the agent responsible for creating and maintaining that compositional dichotomy, what else could be?"
A solution in plain sight
For years, scientists operating an observatory in Chile called the Atacama Large Millimeter/submillimeter Array (ALMA) had noticed something unusual around distant stars: Young stellar systems were often surrounded by disks of gas and dust that, in infrared light, looked a bit like a tiger's eye.
If a similar ring existed in our own solar system billions of years ago, Brasser and Mojzsis reasoned, it could theoretically be responsible for the Great Divide.
That's because such a ring would create alternating bands of high- and low-pressure gas and dust. Those bands, in turn, might pull the solar system's earliest building blocks into several distinct sinks -- one that would have given rise to Jupiter and Saturn, and another to Earth and Mars.
In the mountains, "the Great Divide causes water to drain one way or another," Mojzsis said. "It's similar to how this pressure bump would have divided material" in the solar system.
But, he added, there's a caveat: That barrier in space likely was not perfect. Some outer solar system material may still have climbed across the divide. And those fugitives could have been important for the evolution of our own world.
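The trapping logic behind that pressure bump can be illustrated numerically: dust grains drift in the direction of increasing gas pressure, so grains on either side of a local pressure maximum converge on it and cannot cross. Below is a toy sketch with an invented pressure profile; none of the numbers come from the paper.

```python
import numpy as np

# Toy gas pressure profile: a power-law disk plus a Gaussian "ring" bump
# near r = 5 au. All numbers are illustrative, not from any disk model.
r = np.linspace(1.0, 10.0, 500)                          # radius in au
p = r ** -2.5 * (1.0 + 2.0 * np.exp(-(r - 5.0) ** 2 / 0.5))

# Dust drifts up the local pressure gradient, so the sign of dP/dr gives
# the drift direction: positive means outward, negative means inward.
dpdr = np.gradient(p, r)

# A pressure maximum is where dP/dr flips from positive to negative;
# grains on both sides converge there and cannot cross the divide.
flips = np.where((dpdr[:-1] > 0) & (dpdr[1:] <= 0))[0]
print("dust trap near r =", np.round(r[flips], 2), "au")
```

The single sign flip near the bump is the "Great Divide" in miniature: inward-drifting outer material piles up there instead of continuing sunward.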
Read more at Science Daily
A replacement for exercise?
Mouse in exercise wheel. |
Michigan Medicine researchers studying a class of naturally occurring proteins called Sestrins have found that they can mimic many of exercise's effects in flies and mice. The findings could eventually help scientists combat muscle wasting due to aging and other causes.
"Researchers have previously observed that Sestrin accumulates in muscle following exercise," said Myungjin Kim, Ph.D., a research assistant professor in the Department of Molecular & Integrative Physiology. Kim, working with professor Jun Hee Lee, Ph.D. and a team of researchers wanted to know more about the protein's apparent link to exercise. Their first step was to encourage a bunch of flies to work out.
Taking advantage of Drosophila flies' normal instinct to climb up and out of a test tube, their collaborators Robert Wessells, Ph.D. and Alyson Sujkowski of Wayne State University in Detroit developed a type of fly treadmill. Using it, the team trained the flies for three weeks and compared the running and flying ability of normal flies with that of flies bred to lack the ability to make Sestrin.
"Flies can usually run around four to six hours at this point and the normal flies' abilities improved over that period," says Lee. "The flies without Sestrin did not improve with exercise."
What's more, when they overexpressed Sestrin in the muscles of normal flies, essentially maxing out their Sestrin levels, they found those flies had abilities above and beyond the trained flies, even without exercise. In fact, flies with overexpressed Sestrin didn't develop more endurance when exercised.
The beneficial effects of Sestrin include more than just improved endurance. Mice without Sestrin lacked the improved aerobic capacity, improved respiration and fat burning typically associated with exercise.
"We propose that Sestrin can coordinate these biological activities by turning on or off different metabolic pathways," says Lee. "This kind of combined effect is important for producing exercise's effects."
Lee also helped another collaborator, Pura Muñoz-Cánoves, Ph.D., of Pompeu Fabra University in Spain, to demonstrate that muscle-specific Sestrin can also help prevent atrophy in a muscle that's immobilized, such as the type that occurs when a limb is in a cast for a long period of time. "This independent study again highlights that Sestrin alone is sufficient to produce many benefits of physical movement and exercise," says Lee.
Could Sestrin supplements be on the horizon? Not quite, says Lee. "Sestrins are not small molecules, but we are working to find small molecule modulators of Sestrin."
Read more at Science Daily
High temperatures due to global warming will be dramatic even for tardigrades
Tardigrade |
A research group from the Department of Biology at the University of Copenhagen has shown that tardigrades are very vulnerable to long-term high-temperature exposures. The tiny animals, in their desiccated state, are best known for their extraordinary tolerance to extreme environments.
In a study published recently in Scientific Reports, Ricardo Neves, Nadja Møbjerg and colleagues at the Department of Biology, University of Copenhagen, present results on the tolerance of a tardigrade species to high temperatures.
Tardigrades, commonly known as water bears or moss piglets, are microscopic invertebrates distributed worldwide in marine, freshwater and terrestrial microhabitats.
Ricardo Neves, Nadja Møbjerg and colleagues investigated the tolerance to high temperatures of Ramazzottius varieornatus, a tardigrade frequently found in transient freshwater habitats.
"The specimens used in this study were obtained from roof gutters of a house located in Nivå, Denmark. We evaluated the effect of exposures to high temperature in active and desiccated tardigrades, and we also investigated the effect of a brief acclimation period on active animals," explains postdoc Ricardo Neves.
Rather surprisingly, the researchers estimated that for non-acclimated active tardigrades the median lethal temperature is 37.1°C, though a short acclimation period leads to a small but significant increase of the median lethal temperature to 37.6°C. Interestingly, this temperature is not far from the current record maximum temperature in Denmark, 36.4°C. As for the desiccated specimens, the authors observed that the estimated 50% mortality temperature is 82.7°C following 1-hour exposures, though a significant decrease to 63.1°C following 24-hour exposures was registered.
The research group used logistic models to estimate the median lethal temperature (the temperature at which 50% mortality is reached) for both active and desiccated tardigrades.
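In such a dose-response fit, mortality is modelled as a sigmoid function of temperature, and the curve's midpoint is the median lethal temperature. Below is a minimal sketch of that kind of fit; the mortality data are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, lt50, slope):
    """Mortality fraction as a logistic function of temperature t (deg C)."""
    return 1.0 / (1.0 + np.exp(-slope * (t - lt50)))

# Invented example data: temperature (deg C) vs. observed mortality
# fraction. Placeholders, not the Copenhagen group's measurements.
temps = np.array([30.0, 33.0, 35.0, 36.0, 37.0, 38.0, 40.0, 42.0])
mortality = np.array([0.02, 0.10, 0.25, 0.40, 0.55, 0.75, 0.95, 0.99])

# Fit the sigmoid; the first fitted parameter is the median lethal
# temperature (LT50), the second controls the steepness of the curve.
params, _ = curve_fit(logistic, temps, mortality, p0=[37.0, 1.0])
print(f"estimated median lethal temperature: {params[0]:.1f} deg C")
```

The same procedure, repeated for each exposure condition, yields the separate median lethal temperatures reported for active, acclimated and desiccated animals.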
Approximately 1300 tardigrade species have been described so far. The body of these minute animals is barrel-shaped (or dorsoventrally compressed) and divided into a head and a trunk with four pairs of legs. Their body length varies between 50 micrometers and 1.2 millimeters. Apart from their impressive ability to tolerate extreme environments, tardigrades are also very interesting because of their close evolutionary relationship with arthropods (e.g., insects, crustaceans, spiders).
As aquatic animals, tardigrades need to be surrounded by a film of water to be in their active state (i.e., feeding and reproducing). However, these critters are able to endure periods of desiccation (anhydrobiosis) by entering cryptobiosis, i.e., a reversible ametabolic state especially common among limno-terrestrial species. Succinctly, tardigrades enter the so-called "tun" state by contracting their anterior-posterior body axis, retracting their legs and rearranging the internal organs. This provides them with the capacity to tolerate severe environmental conditions including oxygen depletion (anoxybiosis), high toxicant concentrations (chemobiosis), high solute concentration (osmobiosis) and extremely low temperatures (cryobiosis).
The extraordinary tolerance of tardigrades to extreme environments also includes high-temperature endurance. Some tardigrade species have been reported to tolerate temperatures as high as 151°C, but only for an exposure time of 30 minutes. Other studies on the thermotolerance of desiccated (anhydrobiotic) tardigrades revealed that exposures above 80°C for 1 hour resulted in high mortality, with almost all specimens dying at temperatures above 103°C. It remained unknown, however, how anhydrobiotic tardigrades handle exposures to high temperatures for longer periods, i.e., exceeding 1 hour.
"From this study, we can conclude that active tardigrades are vulnerable to high temperatures, though it seems that these critters would be able to acclimatize to increasing temperatures in their natural habitat. Desiccated tardigrades are much more resilient and can endure temperatures much higher than those endured by active tardigrades. However, exposure-time is clearly a limiting factor that constrains their tolerance to high temperatures," says Ricardo Neves.
Read more at Science Daily