Our universe is filled with gobs of galaxies, bound together by gravity into larger families called clusters. Lying at the heart of most clusters is a monster galaxy thought to grow in size by merging with neighboring galaxies, a process astronomers call galactic cannibalism.
New research from NASA's Spitzer Space Telescope and Wide-field Infrared Survey Explorer (WISE) is showing that, contrary to previous theories, these gargantuan galaxies appear to slow their growth over time, feeding less and less off neighboring galaxies.
"We've found that these massive galaxies may have started a diet in the last 5 billion years, and therefore have not gained much weight lately," said Yen-Ting Lin of the Academia Sinica in Taipei, Taiwan, lead author of a study published in the Astrophysical Journal.
Peter Eisenhardt, a co-author from NASA's Jet Propulsion Laboratory in Pasadena, Calif., said, "WISE and Spitzer are letting us see that there is a lot we do understand -- but also a lot we don't understand -- about the mass of the most massive galaxies." Eisenhardt identified the sample of galaxy clusters studied by Spitzer, and is the project scientist for WISE.
The new findings will help researchers understand how galaxy clusters -- among the most massive structures in our universe -- form and evolve.
Galaxy clusters are made up of thousands of galaxies, gathered around their biggest member, what astronomers call the brightest cluster galaxy, or BCG. BCGs can be up to dozens of times the mass of galaxies like our own Milky Way. They plump up in size by cannibalizing other galaxies, as well as assimilating stars that are funneled into the middle of a growing cluster.
To monitor how this process works, the astronomers surveyed nearly 300 galaxy clusters spanning 9 billion years of cosmic time. The farthest cluster dates back to a time when the universe was 4.3 billion years old, and the closest to a time when the universe was much older, at 13 billion years (our universe is presently 13.8 billion years old).
"You can't watch a galaxy grow, so we took a population census," said Lin. "Our new approach allows us to connect the average properties of clusters we observe in the relatively recent past with ones we observe further back in the history of the universe."
Spitzer and WISE are both infrared telescopes, but they have unique characteristics that complement each other in studies like these. For instance, Spitzer can see more detail than WISE, which enables it to capture the farthest clusters best. On the other hand, WISE, an infrared all-sky survey, is better at capturing images of nearby clusters, thanks to its larger field of view. Spitzer is still up and observing; WISE went into hibernation in 2011 after successfully scanning the sky twice.
The findings showed that BCG growth proceeded at rates predicted by theories until 5 billion years ago, a time when the universe was about 8 billion years old. After that, it appears the galaxies, for the most part, stopped munching on other galaxies around them.
The scientists are uncertain about the cause of BCGs' diminished appetites, but the results suggest current models need tinkering.
"BCGs are a bit like blue whales -- both are gigantic and very rare in number. Our census of the population of BCGs is in a way similar to measuring how the whales gain their weight as they age. In our case, the whales aren't gaining as much weight as we thought. Our theories aren't matching what we observed, leading us to new questions," said Lin.
Another possible explanation is that the surveys are missing large numbers of stars in the more mature clusters. Clusters can be violent environments, where stars are stripped from colliding galaxies and flung into space. If the recent observations are not detecting those stars, it's possible that the enormous galaxies are, in fact, continuing to bulk up.
Future studies from Lin and others should reveal more about the feeding habits of one of nature's largest galactic species.
Read more at Science Daily
Aug 3, 2013
Baby Owls Sleep Like Baby Humans
Baby birds have sleep patterns similar to baby mammals, and their sleep changes in the same way when growing up. This is what a team from the Max Planck Institute for Ornithology and the University of Lausanne found out working with barn owls in the wild. The team also discovered that this change in sleep was strongly correlated with the expression of a gene involved in producing dark, melanic feather spots, a trait known to covary with behavioral and physiological traits in adult owls. These findings raise the intriguing possibility that sleep-related developmental processes in the brain contribute to the link between melanism and other traits observed in adult barn owls and other animals.
Sleep in mammals and birds consists of two phases, REM sleep ("Rapid Eye Movement Sleep") and non-REM sleep. We experience our most vivid dreams during REM sleep, a paradoxical state characterized by awake-like brain activity. Despite extensive research, REM sleep's purpose remains a mystery. One of the most salient features of REM sleep is its preponderance early in life. A variety of mammals spend far more time in REM sleep during early life than when they are adults. For example, as newborns, half of our time asleep is spent in REM sleep, whereas last night REM sleep probably encompassed only 20-25 percent of your time snoozing.
Although birds are the only non-mammalian group known to clearly engage in REM sleep, it has been unclear whether sleep develops in the same manner in baby birds. Consequently, Niels Rattenborg of the MPIO, Alexandre Roulin of Unil, and their PhD student Madeleine Scriba reexamined this question in a population of wild barn owls. They used an electroencephalogram (EEG) and movement data logger in conjunction with minimally invasive EEG sensors designed for use in humans to record sleep in 66 owlets of varying age. During the recordings, the owlets remained in their nest box and were fed normally by their parents. After having their sleep patterns recorded for up to five days, the logger was removed. All of the owlets subsequently fledged and returned at normal rates to breed in the following year, indicating that there were no long-term adverse effects of eavesdropping on their sleeping brains.
Despite lacking significant eye movements (a trait common to owls), the owlets spent large amounts of time in REM sleep. "During this sleep phase, the owlets' EEG showed awake-like activity, their eyes remained closed, and their heads nodded slowly," reports Madeleine Scriba from the University of Lausanne (see video in the link below). Importantly, the researchers discovered that just as in baby humans, the time spent in REM sleep declined as the owlets aged.
In addition, the team examined the relationship between sleep and the expression of a gene in the feather follicles involved in producing dark, melanic feather spots. "As in several other avian and mammalian species, we have found that melanic spotting in owls covaries with a variety of behavioral and physiological traits, many of which also have links to sleep, such as immune system function and energy regulation," notes Alexandre Roulin from the University of Lausanne. Indeed, the team found that owlets expressing higher levels of the gene involved in melanism had less REM sleep than expected for their age, suggesting that their brains were developing faster than in owlets expressing lower levels of this gene. In line with this interpretation, the enzyme encoded by this gene also plays a role in producing hormones (thyroid and insulin) involved in brain development.
Read more at Science Daily
Aug 2, 2013
Ancient Feathered Shield Found in Peru Temple
Hidden in a sealed part of an ancient Peruvian temple, archaeologists have discovered a feathered shield dating back around 1,300 years.
Made by the Moche people, the rare artifact was found face down on a sloped surface that had been turned into a bench or altar at the site of Pañamarca. Located near two ancient murals, one of which depicts a supernatural monster, the shield measures about 10 inches (25 centimeters) in diameter and has a base made of carefully woven basketry with a handle.
Its surface is covered with red-and-brown textiles along with about a dozen yellow feathers that were sewn on and appear to be from the body of a macaw. The shield would have served a ritualistic rather than a practical use, and the placement of the shield on the bench or altar appears to have been the last act carried out before this space was sealed and a new, larger, temple built on top of it.
The discovery of this small shield, combined with the discovery of other small Moche shields and depictions of them in art, may also shed light on Moche combat. Their shields may have been used in ceremonial performances or ritualized battles similar to gladiatorial combat, Lisa Trever, a professor at the University of California, Berkeley, told LiveScience.
Trever and her colleagues, Jorge Gamboa, Ricardo Toribio and Flannery Surette, describe the shield in the most recent edition of Ñawpa Pacha: Journal of Andean Archaeology.
Though only about a dozen feathers now remain on the shield, in ancient times it may have had a more feathered appearance. "I suspect that originally it had at least 100 feathers sewn on the surface" in two or more concentric circles, Trever said.
The Moche people, who lived on the desert coasts and irrigated valleys of the Pacific side of the Andes Mountains, likely had to import the feathers, as macaws resided on the eastern side of the Andes, closer to the Amazon.
What symbolic meaning the macaw had for the Moche is a mystery. "We know that the Moche used many animal metaphors in their art and visual culture," Trever said. "They may have had a specific symbolic meaning to the macaw, but because the Moche didn't leave us any written records we don't know precisely what they thought."
The shield was found close to two ancient murals, one of which depicts a "Strombus Monster," a supernatural beast with both snail and feline characteristics, and the other an iguanalike creature. The researchers note in their paper that the monster is often shown in Moche art battling a fanged humanlike character called "Wrinkle Face" by some scholars. The iguana in turn is often shown as an attendant accompanying Wrinkle Face on his journeys.
Although a depiction of Wrinkle Face has yet to be found in the sealed area where the shield is located, he may yet turn up in future excavations. "What the exact relationship is between the deposition of the shield and the adjacent pictorial narrative is an active question," Trever said.
It appears as if the Moche liked to keep their shields small, bringing up the question of whether they were meant for something like gladiatorial combat or some other type of fighting.
Whereas the newly discovered shield was meant for ritual rather than combat, the researchers note that another small Moche shield, this one found at the site of Huaca de la Luna, was likely made for fighting: it is built of woven cane and leather, yet measures only 17 inches (43 cm) in diameter. In addition, depictions of Moche shields in ceramic art show people wearing small circular or square shields on their forearms.
It's "more like a small shield that's used to protect the forearm and maybe held over the face in hand-to-hand combat with clubs," she said of the Moche shields. "They apparently didn't need, or didn't use, large shields to protect themselves from volleys of arrows or spears that were thrown."
We "have to think about the style of hand-to-hand combat" they were used for, she added. "Is it something that is more ritual in nature, more of a ritual combat, gladiatorial combat?" Trever said.
Jeffrey Quilter, director of the Peabody Museum of Archaeology and Ethnology at Harvard University, has proposed another idea as to why Moche shields were so small. He points out that the Moche used a two-handed club that gave them great reach and could land a lethal blow.
Read more at Discovery News
Greenland Icebergs May Have Triggered Big Freeze
In a warming world, what could cause temperatures to suddenly plummet across the Northern Hemisphere? Scientists have tried to answer this question for decades, ever since they discovered geological and biological evidence for the "Big Freeze."
Now, a new study points to an armada of icebergs or meltwater from Greenland as a possible cause for the sudden climate change called the Younger Dryas, or the Big Freeze. The findings were published online July 10 in the journal Earth and Planetary Science Letters.
Starting roughly 12,900 years ago, the Big Freeze halted the Northern Hemisphere's transition from an Ice Age to today's relatively warm, interglacial period. In just a decade, glacial cold returned to the northern latitudes. The tropics shifted more slowly, with changes in monsoon intensity and the amount of rainfall they received. Only Antarctica went untouched.
Big Freeze
In the most widely accepted model, researchers have suggested massive glacial floods from North America shut down warm ocean currents in the North Atlantic, leading to the climate cooling. Just before the Younger Dryas, the continent's Laurentide Ice Sheet was melting, and freshwater floods could have poured into the Atlantic or Arctic oceans through the St. Lawrence or Mackenzie rivers, respectively. However, there is ongoing debate about the size and timing of the floods.
Greenland's ice sheet was also presumably melting 13,000 years ago, but it has rarely been named as a prime suspect in the Younger Dryas cooling. Widespread geological clues for a big Greenland ice breakup hadn't been found.
But in seafloor sediments in the Labrador Sea, near Greenland's southern tip, scientists from the Geological Survey of Denmark and Greenland think they've found their smoking gun. There, a ship pulled up cores of mud with rock fragments carried by icebergs from Greenland and dropped into the ocean as the ice melted. Some of the rubble is distinctly older, by about 1 billion years, than rock rafted into the Labrador Sea by North American icebergs.
Predicting future from the past
Combined with other geochemical evidence from the mud cores (cylinders of sediment drilled out of the seafloor), the findings suggest a sudden pulse of Greenland meltwater hit the Labrador Sea about 13,000 years ago, just before the Younger Dryas cooling started.
"It wasn't as giant as the Laurentide Ice Sheet, but these more minor ice sheets can also contribute to ocean-climate interactions," said Paul Knutz, lead study author and a marine geologist at the Geological Survey of Denmark and Greenland. Through either a crack-up that released an iceberg flotilla, or a freshwater flood, the Greenland Ice Sheet lowered salinity in the Labrador Sea so much that it affected heat transport in the North Atlantic, according to oceanographic models.
Though the study still can't rule out the Laurentide Ice Sheet as the cause of the Younger Dryas cooling, the evidence points to Greenland as "a very likely culprit," Knutz told LiveScience.
Evidence stacking up
Although the link between the melting of the Greenland Ice Sheet and climate change during the Younger Dryas is still not conclusively established, the evidence seems to be stacking up in favor of a connection, said Eelco Rohling, a paleoclimatologist at the Australian National University in Canberra who wasn't involved in the study. "The timing relationship seems OK, but coincidence does not imply causality," he told LiveScience.
Understanding how Greenland melting changed ocean circulation and climate in the past can help predict the ice sheet's future role in climate change, the researchers said. "This does have implications for the future," Knutz said.
Read more at Discovery News
Genetic 'Adam' and 'Eve' Uncovered
Almost every man alive can trace his origins to one man who lived about 135,000 years ago, new research suggests. And that ancient man likely shared the planet with the mother of all women.
The findings, detailed today (Aug. 1) in the journal Science, come from the most complete analysis of the male sex chromosome, or the Y chromosome, to date. The results overturn earlier research, which suggested that men's most recent common ancestor lived just 50,000 to 60,000 years ago.
Despite their overlap in time, ancient "Adam" and ancient "Eve" probably didn't even live near each other, let alone mate.
"Those two people didn't know each other," said Melissa Wilson Sayres, a geneticist at the University of California, Berkeley, who was not involved in the study.
Tracing history
Researchers believe that modern humans left Africa between 60,000 and 200,000 years ago, and that the mother of all women likely emerged from East Africa. But beyond that, the details get fuzzy.
The Y chromosome is passed down identically from father to son, so mutations, or point changes, in the male sex chromosome can trace the male line back to the father of all humans. By contrast, DNA from the mitochondria, the energy powerhouse of the cell, is carried inside the egg, so only women pass it on to their children. The DNA hidden inside mitochondria, therefore, can reveal the maternal lineage to an ancient Eve.
But over time, the male chromosome gets bloated with duplicated, jumbled-up stretches of DNA, said study co-author Carlos Bustamante, a geneticist at Stanford University in California. As a result, piecing together fragments of DNA from gene sequencing was like trying to assemble a puzzle without the image on the box top, making thorough analysis difficult.
Y chromosome
Bustamante and his colleagues assembled a much bigger piece of the puzzle by sequencing the entire genome of the Y chromosome for 69 men from seven global populations, from African San Bushmen to the Yakut of Siberia.
By assuming a mutation rate anchored to archaeological events (such as the migration of people across the Bering Strait), the team concluded that all males in their global sample shared a single male ancestor in Africa roughly 125,000 to 156,000 years ago.
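The dating logic behind such a calibrated molecular clock can be sketched simply: an event of independently known age (here, an archaeological one like the Bering Strait migration) fixes the rate at which mutations accumulate, and the age of any common ancestor then scales with its mutation count. A minimal illustration with made-up numbers, not the study's actual data:

```python
# Illustrative molecular-clock dating (hypothetical numbers, not the study's data).
# A dated calibration event fixes the mutation rate on a lineage:
#   rate = mutations accumulated since the event / years since the event
# and the time to a common ancestor is then:
#   age  = mutations separating the lineages / rate

def calibrate_rate(mutations_since_event, years_since_event):
    """Mutations per year on one lineage, anchored to an event of known age."""
    return mutations_since_event / years_since_event

def tmrca(mutation_count, rate):
    """Time to most recent common ancestor implied by a mutation count."""
    return mutation_count / rate

rate = calibrate_rate(mutations_since_event=150, years_since_event=15_000)
print(rate)                                  # 0.01 mutations per year
print(tmrca(mutation_count=1_400, rate=rate))  # 140000.0 years
```

The real analysis is far more involved (it models uncertainty in both the calibration and the mutation counts, which is why the paper reports a 125,000-156,000 year range rather than a single date), but the proportionality is the same.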
In addition, mitochondrial DNA from the men, as well as similar samples from 24 women, revealed that all women on the planet trace back to a mitochondrial Eve, who lived in Africa between 99,000 and 148,000 years ago — almost the same time period during which the Y-chromosome Adam lived.
More ancient Adam
But the results, though fascinating, are just part of the story, said Michael Hammer, an evolutionary geneticist at the University of Arizona who was not involved in the study.
A separate study in the same issue of the journal Science found that men shared a common ancestor between 180,000 and 200,000 years ago.
And in a study detailed in March in the American Journal of Human Genetics, Hammer's group showed that several men in Africa have unique, divergent Y chromosomes that trace back to an even more ancient man who lived between 237,000 and 581,000 years ago.
"It doesn't even fit on the family tree that the Bustamante lab has constructed — it's older," Hammer told LiveScience.
Gene studies always rely on a sample of DNA and, therefore, provide an incomplete picture of human history. For instance, Hammer's group sampled a different group of men than Bustamante's lab did, leading to different estimates of how old common ancestors really are.
Adam and Eve?
These primeval people aren't parallel to the biblical Adam and Eve. They weren't the first modern humans on the planet, but instead just two of the thousands of people alive at the time, the two whose unbroken male or female lineages continue on today.
The rest of the human genome contains tiny snippets of DNA from many other ancestors — they just don't show up in mitochondrial or Y-chromosome DNA, Hammer said. (For instance, if an ancient woman had only sons, then her mitochondrial DNA would disappear, even though the son would pass on a quarter of her DNA via the rest of his genome.)
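That parenthetical is the whole mechanism: a matrilineal (or patrilineal) lineage dies whenever a carrier has no same-sex children, so over enough generations every lineage but one goes extinct, and everyone traces back to a single "Eve" or "Adam". A toy simulation, under the simplifying assumption of a constant-size population with each woman's mother drawn at random from the previous generation, shows the surviving matrilineal lineages collapsing:

```python
import random

def surviving_lineages(n_women=500, generations=2000, seed=42):
    """Count distinct matrilineal lineages left in a constant-size population.

    Simplified Wright-Fisher-style model: each generation, every woman's
    mother is drawn at random from the previous generation, so a lineage
    vanishes whenever a woman happens to leave no daughters.
    """
    random.seed(seed)
    lineage = list(range(n_women))  # each founding woman starts her own lineage
    for _ in range(generations):
        lineage = [lineage[random.randrange(n_women)] for _ in range(n_women)]
    return len(set(lineage))

# After enough generations, typically only one founding lineage survives:
print(surviving_lineages())
```

Note the founders' other DNA is not lost when their matrilines die out; it keeps circulating through the rest of the genome, which is exactly why mitochondrial Eve and Y-chromosome Adam are ancestors of a special bookkeeping kind, not a founding couple.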
Read more at Discovery News
The findings, detailed today (Aug. 1) in the journal Science, come from the most complete analysis of the male sex chromosome, or the Y chromosome, to date. The results overturn earlier research, which suggested that men's most recent common ancestor lived just 50,000 to 60,000 years ago.
Despite their overlap in time, ancient "Adam" and ancient "Eve" probably didn't even live near each other, let alone mate.
"Those two people didn't know each other," said Melissa Wilson Sayres, a geneticist at the University of California, Berkeley, who was not involved in the study.
Tracing history
Researchers believe that modern humans left Africa between 60,000 and 200,000 years ago, and that the mother of all women likely emerged from East Africa. But beyond that, the details get fuzzy.
The Y chromosome is passed down identically from father to son, so mutations, or point changes, in the male sex chromosome can trace the male line back to the father of all humans. By contrast, DNA from the mitochondria, the energy powerhouse of the cell, is carried inside the egg, so only women pass it on to their children. The DNA hidden inside mitochondria, therefore, can reveal the maternal lineage to an ancient Eve.
But over time, the male chromosome gets bloated with duplicated, jumbled-up stretches of DNA, said study co-author Carlos Bustamante, a geneticist at Stanford University in California. As a result, piecing together fragments of DNA from gene sequencing was like trying to assemble a puzzle without the image on the box top, making thorough analysis difficult.
Y chromosome
Bustamante and his colleagues assembled a much bigger piece of the puzzle by sequencing the entire genome of the Y chromosome for 69 men from seven global populations, from African San Bushmen to the Yakut of Siberia.
By assuming a mutation rate anchored to archaeological events (such as the migration of people across the Bering Strait), the team concluded that all males in their global sample shared a single male ancestor in Africa roughly 125,000 to 156,000 years ago.
In addition, mitochondrial DNA from the men, as well as similar samples from 24 women, revealed that all women on the planet trace back to a mitochondrial Eve, who lived in Africa between 99,000 and 148,000 years ago — almost the same time period during which the Y-chromosome Adam lived.
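The dating logic behind these estimates can be sketched as a back-of-envelope molecular-clock calculation. The function below is a minimal illustration, and the input numbers are invented assumptions in the right ballpark, not the study's actual data or calibrated rate:

```python
# Back-of-envelope molecular-clock estimate. All numbers here are
# illustrative assumptions, NOT the study's data: a calibrated
# mutation rate plus the observed differences between two lineages
# give a time to their most recent common ancestor (TMRCA).

def tmrca_years(diffs_per_site: float, mu_per_site_per_year: float) -> float:
    """TMRCA in years: t = d / (2 * mu). The factor of 2 accounts for
    mutations accumulating along both lineages since the split."""
    return diffs_per_site / (2.0 * mu_per_site_per_year)

# Hypothetical inputs: 7.5 differences per 10,000 sites and an
# assumed rate of 3e-9 mutations per site per year.
estimate = tmrca_years(7.5e-4, 3e-9)
print(f"Estimated TMRCA: {estimate:,.0f} years")  # ~125,000 years with these inputs
```

With different sampled men, different stretches of the chromosome, or a different calibration for the rate, the same formula lands on different dates, which is one reason the studies below disagree.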
More ancient Adam
But the results, though fascinating, are just part of the story, said Michael Hammer, an evolutionary geneticist at the University of Arizona who was not involved in the study.
A separate study in the same issue of the journal Science found that men shared a common ancestor between 180,000 and 200,000 years ago.
And in a study detailed in March in the American Journal of Human Genetics, Hammer's group showed that several men in Africa have unique, divergent Y chromosomes that trace back to an even more ancient man who lived between 237,000 and 581,000 years ago.
"It doesn't even fit on the family tree that the Bustamante lab has constructed — it's older," Hammer told LiveScience.
Gene studies always rely on a sample of DNA and, therefore, provide an incomplete picture of human history. For instance, Hammer's group sampled a different group of men than Bustamante's lab did, leading to different estimates of how old common ancestors really are.
Adam and Eve?
These primeval people aren't parallel to the biblical Adam and Eve. They weren't the first modern humans on the planet, but instead just the two out of thousands of people alive at the time with unbroken male or female lineages that continue on today.
The rest of the human genome contains tiny snippets of DNA from many other ancestors — they just don't show up in mitochondrial or Y-chromosome DNA, Hammer said. (For instance, if an ancient woman had only sons, then her mitochondrial DNA would disappear, even though the son would pass on a quarter of her DNA via the rest of his genome.)
Read more at Discovery News
Hints of New Physics Detected in the LHC?
It seems that at every turn, the Standard Model of physics tightens its stranglehold on the Universe as we know it. But are physicists beginning to see data that bucks this trend? According to one experiment at the Large Hadron Collider (LHC), it appears that there’s a slight deviation from the “norm,” hinting that the Standard Model ain’t all that.
Before we go into the details of this tentative discovery, let’s quickly review why some physicists are excited while others… are, well, not so much.
A Flawed — Yet Reliable — Recipe
The Standard Model is the recipe book of the Universe, one that has matured over decades. It explains how subatomic particles should act and predicts interactions within particle colliders such as the LHC with incredible accuracy. But it’s not a perfect description of the Universe; it has some major shortfalls.
For example, the Standard Model does not explain dark matter and dark energy. Also, it has a gaping hole where gravity should be — for an all-encompassing theory of the Universe, the Standard Model is like a cake recipe that mysteriously forgets to add flour.
These shortcomings aside, however, the Standard Model has had some huge victories in recent months. For one — and this is a biggie — the Higgs boson, the particle thought to give elementary particles their mass, has (to a high degree of certainty) been discovered at an energy consistent with where the Standard Model predicts it should be. Also, more recently, the extremely rare decay of the Bs meson — a particle that decays into two muons at the ridiculously low rate of about three decays out of every billion — was measured by the LHC’s crazy-high resolution detectors at exactly the rate predicted by the Standard Model. In both cases, had there been some weirdness in the results, physicists would be getting pretty excited about the possibility of “new physics.”
New (or exotic) physics basically means experimental results that scientists cannot explain with current (Standard Model) ideas, forcing them to come up with new ones. One key theory beyond the Standard Model is supersymmetry (a.k.a. SUSY), which predicts the existence of more massive superpartner particles for all normal particles. But to be able to detect these exotic particles, you need powerful accelerators like the LHC to generate the necessary energies to dig deeper into increasingly energetic regimes. It’s a bit like a high-energy archaeological dig: the more energy you generate, the more exotic and primordial the particle interactions become.
So, in an effort to detect any hints of new physics, the LHC has been collecting data on countless trillions of particle collisions to see if the resulting decays do anything out of the (Standard Model) ordinary. Physicists from Spain and France have been poring over this data, and they have just reported the possible discovery of a deviation from the Standard Model.
New Physics Fingerprint?
While analyzing data from the LHCb detector, Sébastien Descotes-Genon of the University of Paris teamed up with Joaquim Matias and Javier Virto of the Autonomous University of Barcelona to report on some weirdness in the results of B-meson decays. B-mesons, hadrons composed of a quark and an antiquark, are generated inside the LHCb experiment and rapidly decay into a kaon (K*) and two muons (the heavier cousins of electrons), i.e. B → K*μ+μ−.
In their results, the team noted a deviation in the angular distribution of the B-meson’s decay products. What’s more, this deviation isn’t random: it follows a coherent pattern, one not predicted by the Standard Model.
Though exciting, we’re not exactly at the champagne cork-popping stage quite yet. The team found a 4.5σ significance in their statistical results, just shy of the 5σ required for a bona fide discovery. Still, the statistical strength of these results is certainly suggestive of something odd going on.
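The sigma language used here maps directly onto tail probabilities of a standard normal distribution, which shows why the jump from 4.5σ to 5σ matters so much. A minimal sketch using only Python's standard library:

```python
import math

# Convert a "sigma" significance into a one-sided tail probability
# of a standard normal distribution, using the complementary error
# function: p = erfc(sigma / sqrt(2)) / 2.

def sigma_to_pvalue(sigma: float) -> float:
    """One-sided probability of a fluctuation at least `sigma`
    standard deviations above the mean, for a standard normal."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for s in (4.5, 5.0):
    print(f"{s} sigma -> one-sided p = {sigma_to_pvalue(s):.2e}")
```

At 5σ the chance of a statistical fluke is below about three in ten million, roughly an order of magnitude stricter than at 4.5σ, which is why particle physicists hold champagne until that bar is cleared.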
So what could explain this coherent pattern not predicted by the Standard Model? In a preprint of their published results uploaded to the arXiv service, the team suggests that it could be evidence for the existence of a Z’ boson. The hypothetical Z’ is a more massive cousin of the Standard Model Z boson, the particle that mediates the weak force. So could this be the fingerprint of supersymmetry in LHC results?
Read more at Discovery News
Aug 1, 2013
Aerial Pictures Reveal Climate Change
As a result of climate change, certain undesirable aquatic plants are starting to invade German water bodies. Even popular recreation areas like Lake Starnberg have been affected, leading to a growing need to monitor the spread of these plants. Up to now, regular monitoring has proven to be a costly process. But in a new approach, researchers at Technische Universität München (TUM) have developed a quicker and less expensive method.
Taking a dip in a freshwater lake can quickly lose its appeal on contact with slippery aquatic plants. These might include Elodea nuttallii and Najas marina, better known as western waterweed and spiny naiad, both of which have been spreading rapidly in German water bodies in recent years.
Ecologists are able to use them as indicator plants. Their proliferation allows researchers to draw conclusions on water quality -- Elodea nuttallii and Najas marina are particularly common in lakes with rising water temperatures. The rapid spread of such plants over a wide area can upset the balance of sensitive lake ecosystems.
Satellite images support research divers
To investigate changes in lake ecosystems, water management authorities regularly monitor plant populations. This requires the observations of divers, who map the "vegetation blankets" at different depths.
This process does produce highly detailed information, but it requires a lot of effort. Doctoral students from TUM's Limnological Research Station in Iffeldorf have carried out research on this topic for their dissertations. The result of their work is a new process that will save both time and money.
"This new idea involves replacing some of the diving effort with high-resolution aerial and satellite images," explains project supervisor Dr. Thomas Schneider. "In order to draw conclusions on plant growth from the imagery produced, we measure reflectance. Each plant species reflects the incident light in a specific way, depending on its pigmentation and structure."
Every lake has its own reflectance characteristics
The researchers developed a digital library with the spectral characteristics of plants to help them evaluate the aerial and satellite images. This was a lengthy process, however, as doctoral student Patrick Wolf explains: "It took us two years to photograph the plants from a boat and measure their reflectance. In order to capture the plants from a suitable angle and avoid shadow, the cameras and sensors were submerged using an extension arm."
The problem is that factors like dissolved matter, sediment type, light refraction and different depths of water make it hard to assess plant populations. That is why the researchers developed mathematical algorithms to "factor out" the image errors in combination with the measurement data from the boat. Since every body of water has its own distinct characteristics, a different algorithm was developed for each lake.
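The library-lookup idea described above can be sketched as a nearest-neighbor match. Everything in this toy (the species entries, the reflectance values, and the plain Euclidean distance) is invented for illustration; the real TUM library and its per-lake correction algorithms are far more involved:

```python
# Hypothetical sketch of the spectral-library idea: classify a
# measured reflectance spectrum by finding the nearest entry in a
# library of known spectra. All values below are made up.

def closest_species(measured, library):
    """Return the library entry whose spectrum is nearest (Euclidean
    distance) to the measured reflectance values."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(library, key=lambda name: dist(measured, library[name]))

# Reflectance at three hypothetical wavelength bands:
library = {
    "Elodea nuttallii": [0.04, 0.08, 0.30],
    "Najas marina":     [0.06, 0.05, 0.22],
    "lake sediment":    [0.10, 0.12, 0.15],
}
print(closest_species([0.05, 0.07, 0.28], library))  # Elodea nuttallii
```

In practice the measured spectrum would first be corrected for water depth, dissolved matter and sediment effects, which is where the per-lake algorithms come in.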
Read more at Science Daily
3,000-Year-Old Text May Reveal Biblical History
A few characters on the side of a 3,000-year-old earthenware jug dating back to the time of King David have stumped archaeologists until now -- and a fresh translation may have profound ramifications for our understanding of the Bible.
Experts had suspected the fragmentary inscription was written in the language of the Canaanites, a biblical people who lived in present-day Israel. Not so, says one expert who claims to have cracked the code: The mysterious language is actually the oldest form of written Hebrew, placing the ancient Israelites in Jerusalem earlier than previously believed.
"Hebrew speakers were controlling Jerusalem in the 10th century, which biblical chronology points to as the time of David and Solomon," ancient Near Eastern history and biblical studies expert Douglas Petrovich told FoxNews.com.
"Whoever they were, they were writing in Hebrew like they owned the place," he said.
First discovered near the Temple Mount in Jerusalem last year, the 10th century B.C. fragment has been labeled the Ophel Inscription. It likely bears the name of the jug's owners and its contents.
If Petrovich's analysis proves true, it would be evidence of the accuracy of Old Testament tales. If Hebrew as a written language existed in the 10th century B.C., as he says, the ancient Israelites were recording their history in real time rather than writing it down several hundred years later. That would make the Old Testament a historical account of real-life events.
According to Petrovich, archaeologists are unwilling to call it Hebrew to avoid conflict.
"It's just the climate among scholars that they want to attribute as little as possible to the ancient Israelites," he said.
Needless to say, his claims are stirring up controversy among those who do not like to mix the hard facts of archaeology -- dirt, stone and bone -- with stories from the Bible.
Tel Aviv University archaeologist Israel Finkelstein told FoxNews.com that the Ophel Inscription is critical to the early history of Israel. But romantic notions of the Bible shouldn't cloud scientific methods -- a message he pushed in 2008 when a similar inscription was found at a site many now call one of King David's palaces.
At the time, he warned the Associated Press against the "revival in the belief that what's written in the Bible is accurate like a newspaper."
Today, he told FoxNews.com that the Ophel Inscription speaks to "the expansion of Jerusalem from the Temple Mount, and shows us the growth of Jerusalem and the complexity of the city during that time." But the Bible? Maybe, maybe not.
Professor Aren Maeir of Bar Ilan University agrees that some archaeologists are simply relying too heavily on the Bible itself as a source of evidence.
"(Can we) raise arguments about the kingdom of David and Solomon? That seems to me a grandiose upgrade," he told Haaretz recently.
In the past decade, there has been a renaissance in Israel of archaeologists looking for historical evidence of biblical stories. FoxNews.com has reported on several excavations this year claiming to prove a variety of stories from the Bible.
Most recently, a team led by archaeologist Yossi Garfinkel wrapped up a 10-year excavation of the possible palace of King David, overlooking the valley where the Bible says the Hebrew king victoriously smote the giant Goliath.
Garfinkel has another explanation as to the meaning behind the Ophel Inscription.
"I think it's like a (cellphone) text," Garfinkel told FoxNews.com. "If someone takes a text from us 3,000 years from now, he will not be able to understand it."
Read more at Discovery News
North Pole 'Lake' Vanishes
Like a politician whose peccadilloes lead to "family time," the North Pole lake has had its fill of Internet notoriety. The stunning blue meltwater lake that formed on the Arctic ice disappeared on Monday (July 29), draining through a crack in the underlying ice floe.
Now, instead of 2 feet (0.6 meters) of freshwater slopping against a bright-yellow buoy, a remote webcam shows only ice and clouds.
Though the North Pole lake's 15 minutes of fame focused worldwide attention on global warming's effects on Arctic sea ice, the melting is actually part of an annual summer thaw, according to researchers who run the North Pole Environmental Observatory. "The formation of these ponds and their disappearance is part of a natural cycle," said Axel Schweiger, head of the Applied Physics Laboratory's Polar Science Center at the University of Washington, which helps run the observatory.
The lake, about the size of an Olympic swimming pool, started forming in mid-July, as LiveScience first reported on July 23. The size and timing of the lake are typical for this time of year and location, the researchers said.
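For a sense of scale, the figures above imply only a modest volume of water. A back-of-envelope sketch, where the 50 m x 25 m pool footprint is an assumed value (the article's "Olympic swimming pool" comparison is read here as surface area):

```python
# Rough volume of the meltwater pond from the article's figures.
length_m, width_m = 50.0, 25.0   # standard Olympic pool footprint (assumed)
depth_m = 0.6                    # ~2 feet of meltwater, per the article
volume_m3 = length_m * width_m * depth_m
print(f"~{volume_m3:.0f} cubic meters of freshwater")  # ~750 m^3
```

That is a vanishingly small amount of water by Arctic standards, which helps explain how it could drain through a single crack in the floe.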
However, scientists at the observatory and elsewhere are studying the Arctic's meltwater ponds to understand how global warming is changing their total extent.
"It's important to recognize that these ponds may be linked to global warming, but the questions are more: How many and how deep they are, and when they appear and when they drain," Schweiger told LiveScience.
For instance, warmer temperatures in the Arctic already cause surface melting to start earlier on the ice, so the ponds are forming sooner than they used to, Schweiger said. But other factors play a role, such as snow cover and ice thickness. "It's a very open research question," he said.
Read more at Discovery News
Hubble Pieces Together Galactic Evolution Puzzle
New observations from NASA's Hubble Space Telescope have helped astronomers crack a longstanding puzzle about galaxy evolution.
For years, scientists have wondered why galaxies that have ceased forming new stars — so-called "quenched galaxies" — were smaller long ago than they are today. Perhaps, they thought, ancient quenched galaxies continued to grow by merging with smaller cousins that had also stopped producing stars.
But that hypothesis is off the mark, a new study reports.
"We found that a large number of the bigger galaxies instead switch off at later times, joining their smaller quenched siblings and giving the mistaken impression of individual galaxy growth over time," co-author Simon Lilly, of the Swiss Federal Institute of Technology in Zurich, said in a statement.
The researchers used observations from Hubble's Cosmic Evolution Survey (COSMOS), the Canada-France-Hawaii Telescope and the Subaru Telescope to map an area of the sky about nine times the size of the full moon. They used the observations to make a video of the quenched galaxies as seen by Hubble.
The team studied and tracked the quenched galaxies in this patch through the last eight billion years of the universe's history, eventually determining that most of them did not grow over time but rather remained small and compact.
So it appears that star production simply switched off earlier in older galaxies compared to younger ones. This makes sense, researchers said; star-forming galaxies were smaller in the early universe, after all, so they would hit growth and evolution milestones at a relatively smaller size.
"The apparent puffing up of quenched galaxies has been one of the biggest puzzles about galaxy evolution for many years," said lead author Marcella Carollo, also of the Swiss Federal Institute of Technology in Zurich. "Our study offers a surprisingly simple and obvious explanation to this puzzle. Whenever we see simplicity in nature amidst apparent complexity, it's very satisfying."
Read more at Discovery News
Jul 31, 2013
Dawn of Carnivores Explains Animal Boom in Distant Past
A science team that includes researchers from Scripps Institution of Oceanography at UC San Diego has identified increasing oxygen levels, and the rise and evolution of carnivores (meat eaters) that followed, as the force behind a broad explosion of animal species and body structures millions of years ago.
Led by Erik Sperling of Harvard University, the scientists analyzed how low-oxygen zones in modern oceans limit the abundance and types of carnivores, an approach that helped lead them to the cause of the "Cambrian radiation," a historic proliferation of animals 500-540 million years ago that resulted in the animal diversity seen today. The study is published in the July 29 early online edition of the Proceedings of the National Academy of Sciences.
Although the cause of the influx of oxygen remains a matter of scientific controversy, Sperling called the Cambrian radiation that followed "the most significant evolutionary event in the history of animals."
"During the Cambrian period essentially every major animal body plan -- from arthropods to mollusks to chordates, the phylum to which humans belong -- appeared in the fossil record," said Sperling, who is scheduled to join Scripps as a postdoctoral researcher through National Science Foundation support. The authors linked this proliferation of life to the evolution of carnivorous feeding modes, which require higher oxygen concentrations. Once oxygen increased, animals started consuming other animals, stimulating the Cambrian radiation through an escalatory predator-prey "arms race."
Lisa Levin, a professor of biological oceanography at Scripps, along with graduate student researcher Christina Frieder, contributed to the study by providing expertise on the fauna of the ocean's low-oxygen zones, areas that have been increasing in recent decades due to a variety of factors. While the Cambrian radiation exploded with new species and diversification, Levin believes this study suggests the reverse may ensue as oxygen declines and oxygen minimum zones expand.
"This paper uses modern oxygen gradients and their effects on marine worms to understand past evolutionary events," said Levin, director of Scripps's Center for Marine Biodiversity and Conservation and a 1982 Scripps graduate. "However, the study of oxygen's role in the past is also going to help us understand the effects of and manage for changes in ocean oxygen in the future."
Read more at Science Daily
Rat Laughs Off Cancer With HA
The latest possible cure for cancer may come from the world’s longest living rats, which laugh off the disease with a super sugar molecule called HMM-HA, new research finds.
The discovery really is sweet, as humans might be able to boost their HA power, improving their chances for longevity and warding off cancer.
It’s certainly worked well for naked mole rats, since they live around 6 times longer than the average rodent.
HMM-HA (high-molecular-mass hyaluronan, a long-chain form of the sugar HA) prevents cancer and aging because it stops cells from overcrowding and forming tumors, according to scientists from the University of Rochester and the University of Haifa. “Overcrowding,” in this case, essentially means that the super sugar prevents cells from coming into unnecessary contact with each other and growing into something bigger.
“Contact inhibition, a powerful anticancer mechanism, discovered by the Rochester team, arresting cell growth when cells come into contact with each other, is lost in cancer cells,” said Eviatar Nevo from the University of Haifa group, in a press release. “The experiments showed that when HMM-HA was removed from naked mole rat cells, they became susceptible to tumors and lost their contact inhibition.”
Human cells can secrete HMM-HA too, the researchers determined, but the trick in the future will be to control that properly. If the sugar wards off cancer and aging in humans as it does in rats, it could be our proverbial fountain of youth.
Already, researchers are adding the super sugar to anti-wrinkle skin care products. It’s also used in certain arthritis treatments.
Read more at Discovery News
'Dolly' Scientist Says Mammoth Should Be Cloned
The astonishingly well-preserved blood from a 10,000-year-old frozen mammoth could lead to mammoth stem cells, said Ian Wilmut, the scientist responsible for Dolly, the world’s first cloned animal -- and might ultimately lead to a cloned mammoth.
There are several hurdles to such a venture, of course, and it may ultimately prove unsuccessful.
But Wilmut’s weight lends credibility to the growing possibility of bringing back the mammoth -- “de-extinction” of a long-lost species.
"I think it should be done as long as we can provide great care for the animal,” Wilmut told The Guardian. “If there are reasonable prospects of them being healthy, we should do it. We can learn a lot about them," he said.
In an essay on The Conversation, Wilmut spelled out the two main methods for turning an ancient pile of mammoth bones and blood into a living, breathing creature. The two he focused on were the use of elephant eggs to grow an embryo -- similar to the process that led to Dolly -- and the creation of embryonic mammoth stem cells.
“Stem cells of this type can also be induced to form gametes. If the cells were from a female, this might provide an alternative source of eggs for use in research, and perhaps in breeding, including the cloning of mammoths,” Wilmut wrote.
Wilmut, emeritus professor at the MRC Center for Regenerative Medicine at the University of Edinburgh, made headlines in 1996 when he and his colleagues cloned Dolly the sheep. Their technique involved injecting DNA into a special egg cell and transferring the product into a third sheep, which carried the egg to term. While Dolly lived a brief life, dying in 2003, her very existence was hailed as a medical marvel.
That such a noted scientist could even discuss the process of bringing back the mammoth stems from an astonishing find on a remote Russian island in the Arctic Ocean: blood so well preserved that it flowed freely from a 10,000- to 15,000-year-old creature.
“The fragments of muscle tissues, which we’ve found out of the body, have a natural red color of fresh meat. The reason for such preservation is that the lower part of the body was underlying in pure ice, and the upper part was found in the middle of tundra,” said Semyon Grigoriev, the head of the expedition and chairman of the Mammoth Museum, after announcing the discovery.
Woolly mammoths are thought to have died out around 10,000 years ago, although scientists think small groups of them lived longer in Alaska and on Russia's Wrangel Island off the Siberian coast.
A growing chorus of scientists has been targeting the mammoth for so-called “de-extinction” in recent years, at the same time that others argue against tampering with Mother Nature’s plans. Bringing back a dead species raises a host of issues, wrote two ethicists recently.
Read more at Discovery News
Volcanic Magma On the Highway from Hell
The Costa Rican volcano Irazú drove the opposite direction of the classic AC/DC song by rocking out on the “highway from hell.” Thunderstruck scientists suggest that understanding the seismic dirty deeds that shook Costa Rica all night long prior to Irazú’s rapid eruption could lead to better predictions for other similar eruptions.
A study published today in the journal Nature found evidence that magma spewed by Irazú in the 1960s shot up from the bowels of the Earth in only a few months, as opposed to the centuries it takes for magma to reach the surface in many other volcanoes.
“There has to be a conduit from the mantle to the magma chamber,” said study co-author Terry Plank, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory. “We like to call it the highway from hell.”
AC/DC references weren’t the only thing metal about Irazú. Lava rock from the 1960s eruption contained the metal nickel. Magma deep in the Earth’s mantle holds trace amounts of nickel, but the metal usually diffuses out as the magma moves to the surface. The presence of nickel suggests the magma rose faster than the metal could escape.
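The nickel evidence works like a diffusion clock: the farther the metal has diffused out of crystals carried in the magma, the longer the trip took. A rough back-of-the-envelope sketch (with made-up illustrative numbers, not values from the study) shows why a short diffusion zone implies a fast ascent — the timescale goes as the square of the diffusion-zone width divided by the diffusion coefficient:

```python
# Rough diffusion-clock sketch with illustrative (made-up) numbers.
# If nickel has partially diffused out of crystals over a zone of
# width L, the time available scales as t ~ L^2 / D.

def ascent_time_years(profile_width_m, diffusivity_m2_s):
    """Order-of-magnitude timescale implied by a diffusion profile."""
    seconds = profile_width_m ** 2 / diffusivity_m2_s
    return seconds / (365.25 * 24 * 3600)

# Hypothetical values: a 20-micrometer diffusion zone and
# D = 1e-16 m^2/s (a plausible magnitude for cation diffusion in
# hot crystals, not a measurement from the study).
t = ascent_time_years(20e-6, 1e-16)
print(f"~{t:.1f} years")  # prints "~0.1 years": months, not centuries
```

With these toy numbers the answer comes out in months rather than centuries, which is the logic behind reading trapped nickel as a speedometer.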
Lava rock containing nickel also occurs in Mexico, Siberia and the Cascades of the U.S. Pacific Northwest.
“It’s clearly not a local phenomenon,” said Columbia University geochemist Susanne Straub in a press release.
Even prior to fast-moving eruptions like Irazú’s, the Earth gives signs that calamity draws near. The cone at the top of the volcano bulges, gases escape through rock fissures and the temperature of the volcano rises. Earthquakes and other seismic activity also warn of an impending eruption.
Seismologists monitor these quakes and use them to give warnings that officials use to justify evacuating humans from volcanic danger zones. The new study in Nature helps scientists to set a speed limit for how fast magma can potentially rise.
Read more at Discovery News
Jul 30, 2013
Why Monogamy Evolved in Mammals
Male primates may have become monogamous to protect their offspring from being killed by rival males, a new study finds. However, others disagree, saying monogamy evolved in mammals so that males could guard their mates.
A team of British and Australian researchers compared data across 230 primate species over 75 million years, and found that the threat of infanticide -- specifically, the threat of baby primates being killed by unrelated males -- likely triggered monogamy.
Since infants are dependent on their mothers throughout childhood, and since female primates typically delay further conception while they are nurturing their young, male competitors may see advantages in doing away with babies that their rivals have sired, said study lead author Christopher Opie, a postdoctoral research fellow in the department of anthropology at University College London in the United Kingdom.
"For a male who knows he's not the father of an infant, it can pay for him to kill that infant, because then he can make sure the female comes back into ovulation. And he can mate with her," Opie told LiveScience. "It's a way for males to try to increase their genes that are passed into the next generation."
The researchers examined the prevalence of infanticide across different primate species over time and found links between this threat and the onset of monogamy.
"When we looked across all 230 species, we saw that infanticide evolved at different points, but in all cases, it had already evolved by the time monogamy evolved," Opie said. The results were published online today (July 29) in the journal Proceedings of the National Academy of Sciences (PNAS).
Another study out today, however, suggests monogamy may have evolved to protect females against competition from other females.
Neither study purports to explain monogamy in people. "We are cautious about making any definite statement about monogamy in humans," study researcher Tim Clutton-Brock of the University of Cambridge said in a press briefing, adding that when it comes to monogamy, "humans are obviously fantastically variable."
Primate Family Tree
Only 3 percent to 5 percent of all mammals bond for life, but researchers have long debated the evolution of monogamy, with scientists trying to pinpoint when in history animals displayed monogamous tendencies -- and why.
To trace monogamy's evolutionary pathway, Opie and his colleagues constructed a giant family tree based on genetic data of the relationships among the species of primates. The researchers then used statistical models to identify where behavioral changes -- such as the emergence of paternal care of offspring or the ranging patterns of females -- likely occurred throughout the primates' evolutionary history.
"We effectively simulate evolution millions of times across the family tree and get probabilities for how each of the behaviors would change over time," Opie explained.
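A minimal toy sketch of what "simulating evolution across the family tree" can mean (this is an illustrative model, not the authors' actual method): evolve a binary trait, such as monogamous versus not, down a hypothetical tree using a symmetric two-state Markov model, in which the chance the trait flips grows with branch length, and repeat many times to get frequencies.

```python
import math
import random

def flip_prob(rate, length):
    """P(a symmetric two-state trait differs at the end of a branch)."""
    return 0.5 * (1.0 - math.exp(-2.0 * rate * length))

def simulate(node, state, rate, rng, out):
    """Evolve the trait down the tree, recording the state at each leaf."""
    if rng.random() < flip_prob(rate, node["length"]):
        state = 1 - state
    if "children" in node:
        for child in node["children"]:
            simulate(child, state, rate, rng, out)
    else:
        out[node["name"]] = state

# Hypothetical three-species tree (branch lengths in arbitrary units).
tree = {"length": 0.0, "children": [
    {"length": 1.0, "children": [
        {"length": 0.5, "name": "A"},
        {"length": 0.5, "name": "B"}]},
    {"length": 1.5, "name": "C"}]}

rng = random.Random(0)
counts = {"A": 0, "B": 0, "C": 0}
for _ in range(10000):  # repeat the simulation many times
    out = {}
    simulate(tree, 0, 0.3, rng, out)
    for species, s in out.items():
        counts[species] += s
print({sp: c / 10000 for sp, c in counts.items()})  # trait frequencies
```

Real analyses of this kind fit transition rates to observed species data with Bayesian methods and ask which ordering of trait changes (infanticide first, monogamy second) the tree supports, rather than assuming the rates as this sketch does.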
This technique resembles the one used by famed American statistician Nate Silver when he predicts the results of presidential elections, and the method used by Google when it produces search engine results, Opie said.
The models determined that male infanticide coincided with the switch from females mating with multiple males to monogamy in primates. The results also suggest that other behaviors, such as paternal care, resulted from monogamy.
"In all the species where males provide care, monogamy already evolved in those species," Opie said. "So, we can see an evolutionary pathway where infanticide evolved first, then as one of the responses to that, monogamy evolved, and then in those species -- but not all -- paternal care evolved."
A Far-Reaching Analysis?
While the study offers insight into the evolution of monogamy, the results are highly dependent on how the researchers classified the various species of primates, said Eduardo Fernandez-Duque, an associate professor of anthropology at the University of Pennsylvania in Philadelphia, who was not involved in the new study.
Fernandez-Duque, who has studied monogamy and paternal care in primates for 20 years, noted some inconsistencies in the descriptions of a few of the species, such as the classification that some primates in the genus Callicebus are sexually monogamous but not socially monogamous (they don't stay together to raise the offspring, for instance).
In addition, "the researchers treat infanticide as binary, which makes me a little uncomfortable," Fernandez-Duque told LiveScience. "For example, they categorize infanticide as high or low, but there's no room for species that don't show infanticide."
Still, Fernandez-Duque says the research represents exciting progress in the field of primatology, and he hopes to look deeper into the data.
Tracing the Evolution of Monogamy
Another study, this one detailed today in the journal Science, suggests monogamy evolved to allow males to protect females.
Using a new genetic classification technique, the researchers of the new study inferred how species were related and when they split off from one another in the evolutionary tree. The scientists classified each species as solitary (living alone), socially monogamous (living in breeding pairs) or as group-living. A total of 2,500 mammalian species were involved.
Then the scientists simulated how solitary females might evolve social monogamy versus how group-living females might evolve the trait. Researchers used sophisticated statistical methods to determine which scenarios were more likely.
Social monogamy evolved 61 times among the animals studied, the analysis showed. All but one of these transitions involved solitary females, rather than group-living females. In addition, the common ancestor of all mammals was solitary.
The findings suggest that for species in which females lived alone in large territories to avoid competition for food and other resources, males were unable to defend multiple females, and therefore became monogamous.
"In mammals, social monogamy is the result of resource distribution," study researcher Dieter Lukas, of the University of Cambridge, said in a press briefing today. Females were limited by the distribution of food, and males were limited by the distribution of females, Lukas said.
Social monogamy was also more common among primates and carnivores than other species, the study found. The more specialized diets of these animals may have increased competition for food, leading females to isolate themselves.
Read more at Discovery News
Final Moments of Incan Child Mummies' Lives Revealed
Three Incan children who were sacrificed 500 years ago were regularly given drugs and alcohol in their final months to make them more compliant in the ritual that ultimately killed them, new research suggests.
Archaeologists analyzed hair samples from the frozen mummies of the three children, who were discovered in 1999, entombed within a shrine near the 22,100-foot (6,739-meter) summit of the Argentinian volcano Llullaillaco.
The samples revealed that all three children consistently consumed coca leaves (from which cocaine is derived) and alcoholic beverages, but the oldest child, the famed "Maiden," ingested markedly more of the substances. Coca was a highly controlled substance during the height of the Inca Empire, when the children were sacrificed.
The evidence, combined with other archaeological and radiological data, suggests that the Maiden was treated very differently from the other two children, Llullaillaco Boy and Lightning Girl (so named by researchers because the mummy appears to have been struck by lightning). After being selected for the deadly rite, the Maiden likely underwent a type of status change, becoming an important figure to the empire; the other two children may have served as her attendants.
Hair analyses
"(The Maiden) became somebody other than who she was before," said study lead author Andrew Wilson, an archaeologist at the University of Bradford in the U.K. "Her sacrifice was seen as an honor."
To learn about the final moments of a mummy's life, scientists will sometimes turn to hair samples, which provide a record of what substances were circulating in the blood when new hair cells formed. And because hair grows at a relatively constant rate, it can provide a kind of timeline of what a person has consumed (the length of the timeline depends on the length of hair available).
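The arithmetic behind that timeline is simple. As a sketch, assuming the commonly cited average scalp-hair growth rate of about 1 centimeter per month (the study's own conversion may differ):

```python
# Hair-timeline arithmetic, assuming an average scalp-hair growth
# rate of ~1 cm/month (a common textbook figure, not the study's).

GROWTH_CM_PER_MONTH = 1.0  # assumed average rate

def months_before_death(distance_from_root_cm):
    """A segment N cm from the root formed roughly N months before
    death, for hair sampled at death (ignoring the short lag while
    the hair is still below the scalp)."""
    return distance_from_root_cm / GROWTH_CM_PER_MONTH

# A shaft of roughly 21 cm would therefore record about 21 months --
# comparable to the span reported for the Maiden's timeline.
print(months_before_death(21.0))  # → 21.0
```

This is why longer hair yields a longer chemical record: each centimeter from the root corresponds to roughly one more month before death.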
In a 2007 study, Wilson and his colleagues analyzed the child mummies' hair to understand how their diets changed over time. They found that the children came from a peasant background, as their diet consisted mainly of common vegetables, potatoes in particular. But in the year leading up to their deaths, they ate "elite" food, including maize and dried llama meat, and appeared to have been fattened up in preparation for the sacrifice.
Additionally, the 13-year-old Maiden consumed more of the elite food than the Llullaillaco Boy and Lightning Girl, who were both 4 to 5 years old, Wilson noted. (The three children were previously believed to be about two years older than these estimates, but a new analysis of CT scans suggests otherwise.)
In the new study, the scientists analyzed the mummies' hair for cocaine (a major alkaloid of coca leaves) and its metabolite benzoylecgonine, as well as cocaethylene, which forms when both cocaine and ethanol are present in the blood. The scientists created a timeline of coca and alcohol consumption for the children — due to respective hair lengths, the chronology for the younger children only went back to about nine months before their deaths, whereas the Maiden's timeline spanned about 21 months before death.
The team found that the younger children ingested coca and alcohol at a steady rate, but the Maiden consumed significantly more coca in her final year, with peak consumption occurring at approximately six months before her death. Her alcohol consumption peaked within her last few weeks of life.
The increase in drug and alcohol ingestion likely made the Maiden more at ease with her impending death, Wilson said, adding that she was discovered with a sizeable coca quid (lump for chewing) in between her teeth, suggesting she was sedated when she died.
The chosen one
The children's burial conditions provide further insight into their final moments. The Maiden sat cross-legged and slightly forward, in a fairly relaxed body position at the time of her death. She also had a feathered headdress on her head, elaborately braided hair and a number of artifacts placed on a textile that was draped over her knees.
Furthermore, scans showed the Maiden had food in her system and that she had not recently defecated.
"To my mind, that suggests she was not in a state of distress at the point at which she died," Wilson said. It's not clear how the Maiden died, but she may have succumbed to the freezing temperatures of the environment and was placed in her final position while she was still alive or very shortly after death, he said.
By contrast, the Llullaillaco Boy had blood on his cloak, a nit infestation in his hair and a cloth binding his body, suggesting he may have died of suffocation. The Lightning Girl didn't appear to be treated as roughly as the boy, though she didn't receive the same care as the Maiden — she lacked, for example, the Maiden's decorated headdress and braids.
"The Maiden was perhaps a chosen woman selected to live apart from her former life, among the elite and under the care of the priestesses," Wilson said.
Read more at Discovery News
10 Bogus -- And Widely Believed -- Statistics
Numbers are all around us, especially in the news: A new drug has a 72 percent success rate in treating a disease (but only in patients under 35), while the president enjoys a 48 percent approval rating, even though according to a 2010 poll, almost one in five of the estimated 314 million Americans believe he is Muslim.
We hear statistics all the time, and science studies are often based on them. While some cynics dismiss all statistics as easily manipulated (the famous “there are lies, damned lies and statistics” quote), the truth is that statistics are important and indeed essential to understanding the world around us.
Numbers can be wrong for many reasons, including mistakes, miscalculations, different studies using different definitions, bias in promoting political or social agendas, and, of course, outright fraud. Often, the statistics themselves are correct; the problem lies in how those numbers are interpreted. After all, a glass can be both half full and half empty, depending on how you look at it.
Here are 10 examples of spectacularly flawed statistics that are (or have been) influential and widely believed.
"The teen pregnancy rate has been high for years, and is on the rise."
Worrying about “kids today” is a time-honored tradition. Every generation wrings their hands and laments the wild, immoral and reckless ways of today’s wayward youth. The sky-high rate of teen pregnancy is often cited as a prominent example, along with a list of suspected corrupting influences such as sex-saturated TV shows and music lyrics.
It’s also wrong: in fact, teen pregnancy is low and has been dropping. Data from the Centers for Disease Control and Prevention’s National Center for Health Statistics show that the birth rate for U.S. teenagers (ages 15 to 19) fell 6 percent in 2009, to the lowest level recorded in nearly seven decades of tracking teenage childbearing.
The report, “Births: Preliminary Data for 2009” found that the rate for the youngest teenagers, 10-14 years, fell from 0.6 to 0.5 per 1,000, also the lowest level ever reported. The birth rate for teenagers 15-17 years declined 7 percent to 20.1 per 1,000. This rate dropped 9 percent from 2007 (22.1) to 2009, and was 48 percent lower than the rate reported in 1991. In fact, teen pregnancy has dropped 39 percent since its peak in 1990, partly because the most recent studies report that 87 percent of boys and 79 percent of girls use birth control.
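The percent figures above can be sanity-checked with a few lines of arithmetic. Note that the 1991 value below is our back-calculation from the "48 percent lower" claim, not a number taken from the report:

```python
# Quick check of the percent changes quoted for the 15-17 birth rate
# (births per 1,000 teens): 22.1 in 2007 down to 20.1 in 2009, with
# the 2009 rate described as 48 percent lower than 1991's.

def pct_change(old, new):
    """Percent change from old to new (negative = a drop)."""
    return (new - old) / old * 100.0

drop_2007_2009 = pct_change(22.1, 20.1)  # ~ -9 percent, as reported
implied_1991 = 20.1 / (1 - 0.48)         # rate implied by "48% lower"
# implied_1991 comes out near 38.7 per 1,000 -- an inference, not a
# figure from the report itself.
```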
"There are XX number of gays in America."
How many homosexual men and women are there in the United States? It’s hard to say. Accurately counting the number of gays is fraught with difficulty for many reasons, including that different researchers use different definitions (Who do we count as gay? Anyone who has sex with someone of the same gender? Only those who self-identify as gay? Bisexuals?), and because sexual activity is private and surveys must rely on often-biased self reports.
Estimates of the percentage of gays by Alfred Kinsey from the 1930s and 1940s concluded that about one in ten American men were more or less exclusively homosexual. This 10 percent estimate, though controversial in some quarters, was widely quoted and circulated for decades.
In his book, “Damned Lies and Statistics,” Joel Best, professor and chair of sociology and criminal justice at the University of Delaware, noted that, “Later surveys, based on more representative samples, have concluded that the one-in-ten estimate exaggerated the amount of homosexuality; typically, they find the 3 to 6 percent of males (and a lower percentage of females) have had significant homosexual experience at some point.... and that the incidence of homosexuality among adults is lower—between 1 and 3 percent.”
A 2011 study by UCLA demographer Gary Gates of the Williams Institute on Sexual Orientation Law and Public Policy concluded that America has approximately 4 million homosexual adults, representing about 1.7 percent of the population.
"Up until recently in human history, our forefathers usually died by age 40."
Most of us have, at one time or another, heard someone talk about how our forefathers died so much younger than we do today. Sometimes, for example, it’s used to help explain why many women got married in their teens centuries ago; after all, they had to get started with families early since they’d be dead by 40! According to the National Center for Health Statistics, life expectancy for American men in 1907 was only 45 years, though by 1957 it rose to 66. However, this does not mean that our great-grandfathers rarely lived into their fifties.
In fact, maximum human lifespan -- which is not the same as life expectancy -- has remained more or less the same for thousands of years. The inclusion of high infant mortality rates in calculating life expectancy creates the mistaken impression that earlier generations died at a young age. The problem is that an average age at death tells us almost nothing about the age at which an individual person living at the time could expect to die. The idea that our forefathers routinely died young (say, at age 40) has no basis in historical fact.
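The infant-mortality effect is easy to demonstrate with a toy cohort; the mortality numbers here are invented purely for illustration:

```python
# Toy demonstration: a hypothetical cohort in which 30 percent die in
# infancy and the remaining 70 percent die at age 65 still has a mean
# age at death ("life expectancy") of only about 45.8 -- even though
# every adult in it lives well past 60. All numbers are invented.

def life_expectancy(cohort):
    """Mean age at death over (age_at_death, fraction_of_cohort) pairs."""
    return sum(age * frac for age, frac in cohort)

toy_cohort = [(1, 0.30), (65, 0.70)]  # hypothetical mortality pattern
print(life_expectancy(toy_cohort))    # ~ 45.8
```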
Speaking of dying by 40...
"A woman over 40 is more likely to be killed by a terrorist than get married."
This statistic, in wide circulation since the 1980s, is really, really bogus. As the ever-reliable folks at the rumor-debunking website Snopes.com noted, this little nugget first appeared in a June 1986 article in “Newsweek” magazine titled “Too Late for Prince Charming?” which melodramatically (and inaccurately) claimed that women over forty with university degrees are more “likely to be killed by a terrorist: they have a minuscule 2.6 percent probability of tying the knot.”
The 2.6 percent statistic came from a badly flawed 1985 study which, according to Barbara Mikkelson of Snopes.com, “was contradicted by a U.S. Census Bureau report from about that same time which found... that women at age 40 had a 23 percent chance (at marriage), not 2.6%” The original reference to being killed by a terrorist was, of course, a bit of creative hyperbole not meant to be taken literally.
It’s difficult to calculate a precise percentage chance of women over 40 marrying because there are so many factors involved. Women (and career-minded women in particular) are waiting until later in their lives to get married and have children -- not to mention that many of them are either happy being single or are in long-term committed relationships and don’t feel the need to legally “legitimize” their relationships with marriage. In any event, women over 40 (college educated or not) are far more likely to marry than to be killed by a terrorist.
Read more at Discovery News
Detecting Exoplanets: NOW WITH X-RAY VISION!
Superman’s x-ray vision has nothing on the space-based super ‘scopes Chandra and XMM-Newton, which have, for the very first time, detected in high-energy X-rays a distant exoplanet passing in front of its star.
The planet that’s been spotted doesn’t resemble the fictional Krypton, though, nor is it anything like Earth: exoplanet HD 189733b is a hot Jupiter — a bloated, broiling gas giant racing through the searing glow of its parent star, locked in a two-day orbit that keeps it 30 times closer to its star than Earth is to the sun.
The overheated exoplanet orbits HD 189733, a sun-like star located 63 light-years away in the northern constellation Vulpecula.
Of course, exoplanets have been observed many times before using various methods, such as detecting the faint reduction in a star’s apparent brightness caused by a passing planet and identifying the slight wobble in a star’s position resulting from the gravitational tug of orbiting worlds. But this is the first time that an exoplanet’s transit has been observed in x-ray wavelengths.
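The "faint reduction in brightness" of the transit method follows from simple geometry: the fractional dip equals the ratio of the planet's and star's projected disk areas. A quick illustration with generic values, not measurements of HD 189733b:

```python
# The transit method's brightness dip: the planet blocks a fraction of
# the stellar disk equal to (R_planet / R_star) squared. The numbers
# below are generic illustrative values, not data for HD 189733b.

def transit_depth(r_planet, r_star):
    """Fractional dip in starlight during a central transit."""
    return (r_planet / r_star) ** 2

# A Jupiter-sized planet (~0.1 solar radii) crossing a Sun-like star:
print(transit_depth(0.1, 1.0))  # ~ 0.01, i.e. a ~1 percent dip
```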
“Thousands of planet candidates have been seen to transit in only optical light,” said Katja Poppenhaeger of the Harvard-Smithsonian Center for Astrophysics (CfA), leader of a new study to be published in the Aug. 10 edition of The Astrophysical Journal. “Finally being able to study one in X-rays is important because it reveals new information about the properties of an exoplanet.”
Located so close to its star, HD 189733b is literally being blown away by the star's powerful stellar wind. As it turns out, its large size is working against it; HD 189733b's extended atmosphere makes a big target for all that stellar outpouring. It's estimated that the planet is losing 100 million to 600 million kilograms of mass per second, evaporated by high-energy particles.
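For a sense of scale, the quoted mass-loss rate can be compared against a Jupiter mass; the constants below are standard reference values, not figures from the study:

```python
# Rough scale of the quoted mass loss (100-600 million kg/s): even at
# the high end, the planet sheds only about 1 percent of a Jupiter
# mass per billion years. Constants are standard reference values.

SECONDS_PER_GYR = 3.156e16   # seconds in ~1 billion years
JUPITER_MASS_KG = 1.898e27   # Jupiter's mass in kilograms

def mass_lost_fraction(rate_kg_per_s, gyr=1.0):
    """Fraction of a Jupiter mass lost over `gyr` billion years."""
    return rate_kg_per_s * SECONDS_PER_GYR * gyr / JUPITER_MASS_KG

print(mass_lost_fraction(6e8))  # high-end rate: ~0.01 of a Jupiter mass/Gyr
```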
Then again, if it weren’t for its size, it may never have been detected by Chandra. As HD 189733b passes in front of its star from our viewpoint, much more X-ray light is blocked than visible light — three times more, in fact.
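Since transit depth scales with the square of the blocking radius, a threefold deeper X-ray transit implies an effective X-ray-blocking radius roughly the square root of three times the optical one. The factor of three is from the article; the rest is geometry:

```python
# If the X-ray transit blocks three times more light than the optical
# transit, and depth scales as (radius / stellar radius) squared, the
# effective X-ray-blocking radius is sqrt(3) ~ 1.7 times the optical
# radius. Only the factor of three comes from the article.

import math

def radius_ratio(depth_ratio):
    """Effective radius ratio implied by a ratio of transit depths."""
    return math.sqrt(depth_ratio)

print(radius_ratio(3.0))  # ~ 1.73
```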
“The X-ray data suggest there are extended layers of the planet’s atmosphere that are transparent to optical light but opaque to X-rays,” said co-author Jurgen Schmitt of Hamburger Sternwarte in Hamburg, Germany. “However, we need more data to confirm this idea.”
The rendering above shows HD 189733b as it passes its star, surrounded by a vast, hazy atmosphere. Its blue color has been confirmed with optical observations with Spitzer and Hubble — a result of silicate particles in its atmosphere.
The faint red star in the bottom right corner is a smaller companion. It’s also visible in the x-ray image at lower right (the bright point directly underneath HD 189733 is a more distant source).
Read more at Discovery News
Jul 29, 2013
Cockatoos Know What's Going On Behind Barriers
How do you know that the cookies are still there even though they have been placed out of your sight in a drawer? How do you know when and where a car that has driven into a tunnel will reappear? The ability to represent and track the trajectories of objects that are temporarily out of sight is important in many contexts, but it is also cognitively demanding. Alice Auersperg and her team from the University of Vienna and the University of Oxford show that the "object permanence" abilities of a cockatoo rival those of apes and four-year-old children.
The researchers published their findings in the Journal of Comparative Psychology.
To investigate spatial memory and tracking in animals and human infants, a number of setups have been commonly used. These can be roughly subdivided according to what is being moved: a desired object (a food reward), the hiding places for that object, or the test animal itself. In the original invisible displacement tasks, designed by the Swiss psychologist Jean Piaget in the 1950s, the reward is moved underneath a small cup that passes behind one or more larger screens, and the cup's contents are shown between visits: if the cup is empty, the reward must be behind the last screen visited. Humans solve this task from about two years of age, whereas among primates only the great apes show convincing results.
Likely even more challenging in terms of attention are "Transposition" tasks: the reward is hidden underneath one of several identical cups, which are interchanged one or more times. Human children struggle with this task type more than with the previous one and do not solve it reliably before the age of three to four years, whereas adult apes solve it but have more trouble with double swaps than with single ones.
In "Rotation" tasks several equal cups, one bearing a reward are aligned in parallel on a rotatable platform, which is rotated at different angles. "Translocation" tasks are similar except that the cups are not rotated but the test animal is carried around the arrangement and released at different angles to the cup alignment. Children find Translocation tasks easier than Rotation tasks and solve them at two to three years of age.
An international team of scientists tested eight Goffin cockatoos (Cacatua goffini), a conspicuously inquisitive and playful species, on visible as well as invisible Piagetian object displacements and on derivations of spatial transposition, rotation and translocation tasks. Birgit Szabo, one of the experimenters from the University of Vienna, says: "The majority of our eight birds readily and spontaneously solved Transposition, Rotation and Translocation tasks, whereas only two out of eight immediately and reliably chose the correct location in the original Piagetian invisible displacement task, in which a smaller cup visits two of three bigger screens." Alice Auersperg, the manager of the Goffin Lab and also one of the experimenters, explains: "Interestingly, and just the opposite of human toddlers, our cockatoos had more problems solving the Piagetian invisible displacements than the transposition task, with which children struggle until the age of four. Transpositions are highly demanding in terms of attention, since two occluding objects are moved simultaneously. Nevertheless, in contrast to apes, which find single swaps easier than double swaps, the cockatoos performed equally well in both conditions."
Similarly, the Goffins had few complications with Rotation and Translocation tasks, and some of them solved them at four different angles. Again, in contrast to children, who find Translocations easier than Rotations, the cockatoos showed no significant differences between the two tasks. Auguste von Bayern from the University of Oxford adds: "We assume that the ability to fly, and to prey or be preyed upon from the air, is likely to require pronounced spatial rotation abilities and may be a candidate trait influencing the animals' performance in rotation and translocation tasks."
Read more at Science Daily
Damselfish Eye Confuses Would-Be Predators
Young damselfish grow large fake eyes and bulk up their body to confuse predators and avoid being eaten, Australian researchers have found.
Marine biologist Oona Lönnstedt from James Cook University and colleagues published their research in the Nature journal Scientific Reports.
"Juvenile damselfish have lightly colored bodies and a conspicuous eyespot on the rear dorsal fin that fades away as individuals approach maturation," said Lönnstedt.
Their study shows that false eyespots grow and real eyes shrink in response to the presence of predators. It is also the first to demonstrate that the presence of predators can affect both growth and color patterns in their prey.
Like some species of insects and fish, damselfish are highly vulnerable to predation when they are juveniles, and eyespots are thought to have evolved to deter predators.
A Taste of the Enemy
To explore what impact the presence of predators has on the development of juvenile Ambon damselfish (Pomacentrus amboinensis), young damselfish were caught at the end of their larval phase, before they had been exposed to predators.
One group was placed in a tank back in the lab and conditioned to dusky dottybacks (Pseudochromis fuscus). The predators were placed in the tank inside a transparent plastic bag for 30 minutes at a time, while skin extracts and other odor cues were simultaneously injected into the tank.
Another group was similarly exposed to the herbivorous goby (Amblygobius phalanea), while a third group was not exposed to any other fish.
The researchers found significant differences in both the behavior and morphology between the groups of study fish.
"Prey exposed to predators for six weeks grew deeper bodies, developed larger eyespots and exhibited stunted eye growth compared to prey exposed to herbivores or those that were isolated from other fish," said Lönnstedt.
Increased body depth is a common prey response to gape-limited predators, she adds. Not only do deeper bodies deter attacks, they also improve speed, acceleration and maneuverability.
"But what is intriguing is the finding that the juvenile prey grow larger eyespots and display smaller eyes when continuously exposed to predators."
Lönnstedt believes the large eyespot tail area of the damselfish, along with smaller eyes, gives the impression that the fish is pointing in the other direction, "potentially confusing predators about the orientation of the prey."
She adds that it could also lure predators to attack the tail rather than the head.
"An attack on the head would damage vital parts allowing almost no chance of survival."
Experience Counts
The researchers also found that damselfish exposed to predators were more cautious in their behavior: they foraged significantly less, were less active, and spent more time sheltering than the control fish, especially in the early stages of the experiment.
"Reduced activity levels increase prey survival by making the prey less conspicuous to the predator," said Lönnstedt. "(It) also saves energy, allowing individuals to allocate more into growth and/or development."
Read more at Discovery News
Wolves Help Grizzly Bears Get Berries
Wolves and grizzly bears would seem to be archenemies, but a new study shows how wolves are actually helping grizzly bears to obtain tasty, nutritious berries in Yellowstone National Park.
The discovery shows just how tightly woven ecosystems are, and how the domino effect can both hurt and benefit members.
The situation began to unfold back in the early 1900s, when officials had the short-sighted idea of removing wolves from Yellowstone, a policy called "predator control." By the 1970s, scientists found no evidence of a wolf population in Yellowstone, a verdant place that had previously been home to wolves for ages.
In October 1991, Congress provided funds to the U.S. Fish & Wildlife Service to start wolf restoration efforts at Yellowstone. (A central Idaho restoration was also funded then.)
The new study, published in the Journal of Animal Ecology, examined how the re-introduction of wolves is affecting other wildlife in the park.
Lead author William Ripple, an Oregon State University professor in the Department of Forest Ecosystems and Society, and his team found that, during the period with few or no wolves, elk herds expanded and over-browsed berry bushes. This was bad news for grizzly bears.
“Wild fruit is typically an important part of grizzly bear diet, especially in late summer when they are trying to gain weight as rapidly as possible before winter hibernation,” Ripple said in a press release. “Berries are one part of a diverse food source that aids bear survival and reproduction, and at certain times of the year can be more than half their diet in many places in North America.”
Now, however, with wolves hunting elk again, there are more berries. Yellowstone is berry central for bears, with numerous types that they love: serviceberry, chokecherry, buffaloberry, twinberry, huckleberry and others. Since the reintroduction of wolves, the percentage of berry waste in bear poo has nearly doubled.
“Studies like this also point to the need for an ecologically effective number of wolves,” co-author Robert Beschta, an OSU professor emeritus, said. “As we learn more about the cascading effects they have on ecosystems, the issue may be more than having just enough individual wolves so they can survive as a species. In some situations, we may wish to consider the numbers necessary to help control over-browsing, allow tree and shrub recovery, and restore ecosystem health.”
Read more at Discovery News
Coffin at King Richard III Site Holds...Another Coffin
King Richard III's rediscovered resting place is turning up more mysteries this summer. Excavators finally lifted the heavy lid of a medieval stone coffin found at the site in Leicester, England, only to reveal another lead coffin inside.
The "coffin-within-a-coffin" is thought to have been sealed in the 13th or 14th century — more than 100 years before Richard, an infamous English king slain in battle, received his hasty burial in 1485.
The team of archaeologists from the University of Leicester thinks this grave in the Grey Friars monastery might contain one of the friary's founders or a medieval knight.
"The inner coffin is likely to contain a high-status burial — though we don't currently know who it contains," reads a statement from the university.
The outer stone coffin measures about 7 feet (2.1 meters) long and 2 feet (0.6 meters) wide at the head and 1 foot (0.3 meters) at the feet. Eight people were needed to remove its lid.
The lead funerary box inside has been carried off to the university, where researchers will conduct tests to determine the safest way to open it without damaging the remains. But so far, they've been able to get a look at the feet through a hole in the bottom of the inner coffin.
The archaeologists suspect the grave may belong to one of Grey Friars' founders: Peter Swynsfeld, who died in 1272, or William of Nottingham, who died in 1330. Records also suggest "a knight called Mutton, sometime mayor of Leicester," was buried at the site. This name may refer to the 14th-century knight Sir William de Moton of Peckleton, who died between 1356 and 1362, the researchers say.
"None of us in the team have ever seen a lead coffin within a stone coffin before," archaeologist Mathew Morris, the Grey Friars site director, said in a statement. "We will now need to work out how to open it safely, as we don't want to damage the contents when we are opening the lid."
Richard III, the last king of the House of York, reigned from 1483 until 1485, when he was killed in battle during the Wars of the Roses. He received a quick burial at the Grey Friars monastery in Leicester as his vanquisher, Henry Tudor, ascended to the throne.
Richard's rise to power was controversial. His two young nephews, who had a claim to the throne, vanished from the Tower of London shortly before Richard became king, leading to rumors that he had them killed. After his death, Richard was demonized by the Tudor dynasty and his reputation as a power-hungry, murderous hunchback was cemented in William Shakespeare's play "Richard III." Meanwhile, Grey Friars was destroyed in the 16th century during the Protestant Reformation, and its ruins became somewhat lost to history.
Setting out to find the lost king, archaeologists started digging beneath a parking lot in Leicester last summer where they believed they would find Grey Friars. They soon uncovered the remains of the monastery and a battle-ravaged skeleton that was later confirmed through a DNA analysis to be that of Richard III.
Read more at Discovery News
Jul 28, 2013
Mechanism Behind Squids' and Octopuses' Ability to Change Color Revealed
Color in living organisms can be formed in two ways: by pigmentation or by anatomical structure. Structural colors arise from the physical interaction of light with biological nanostructures. A wide range of organisms possess this ability, but the biological mechanisms underlying the process have been poorly understood.
Two years ago, an interdisciplinary team from UC Santa Barbara discovered the mechanism by which a neurotransmitter dramatically changes color in the common market squid, Doryteuthis opalescens. That neurotransmitter, acetylcholine, sets in motion a cascade of events that culminate in the addition of phosphate groups to a family of unique proteins called reflectins. This process allows the proteins to condense, driving the animal's color-changing process.
Now the researchers have delved deeper to uncover the mechanism responsible for the dramatic changes in color used by such creatures as squids and octopuses. The findings -- published in the Proceedings of the National Academy of Science, in a paper by molecular biology graduate student and lead author Daniel DeMartini and co-authors Daniel V. Krogstad and Daniel E. Morse -- are featured in the current issue of The Scientist.
Structural colors rely exclusively on the density and shape of the material rather than its chemical properties. The latest research from the UCSB team shows that specialized cells in the squid skin called iridocytes contain deep pleats or invaginations of the cell membrane extending deep into the body of the cell. This creates layers, or lamellae, that operate as a tunable Bragg reflector. Bragg reflectors are named after the British father-and-son team who more than a century ago discovered how periodic structures reflect light in a very regular and predictable manner.
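The tuning principle described here can be sketched with the first-order Bragg condition, which relates the peak reflected wavelength to the refractive index and spacing of the periodic layers. The numbers below are purely illustrative assumptions, not measurements from the study; they simply show how dehydrating the lamellae (thinner spacing) shifts the reflected color toward the blue end of the spectrum:

```python
def bragg_peak_wavelength(n, d, order=1):
    """Bragg condition at normal incidence: m * wavelength = 2 * n * d,
    where n is the effective refractive index and d the layer spacing (nm)."""
    return 2 * n * d / order

# Hypothetical hydrated lamellae: modest index, wide spacing -> red reflection
print(bragg_peak_wavelength(1.33, 250))  # ~665 nm

# Hypothetical condensed lamellae: higher index but much thinner -> blue-shifted
print(bragg_peak_wavelength(1.44, 160))  # ~461 nm
```

Even though condensation raises the refractive index, the dominant effect of the shrinking layer spacing is a shorter reflected wavelength, matching the reversible color shift the researchers describe.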
"We know cephalopods use their tunable iridescence for camouflage so that they can control their transparency or in some cases match the background," said co-author Daniel E. Morse, Wilcox Professor of Biotechnology in the Department of Molecular, Cellular and Developmental Biology and director of the Marine Biotechnology Center/Marine Science Institute at UCSB.
"They also use it to create confusing patterns that disrupt visual recognition by a predator and to coordinate interactions, especially mating, where they change from one appearance to another," he added. "Some of the cuttlefish, for example, can go from bright red, which means stay away, to zebra-striped, which is an invitation for mating."
The researchers created antibodies to bind specifically to the reflectin proteins, which revealed that the reflectins are located exclusively inside the lamellae formed by the folds in the cell membrane. They showed that the cascade of events culminating in the condensation of the reflectins causes the osmotic pressure inside the lamellae to change drastically due to the expulsion of water, which shrinks and dehydrates the lamellae and reduces their thickness and spacing. The movement of water was demonstrated directly using deuterium-labeled heavy water.
When the acetylcholine neurotransmitter is washed away and the cell can recover, the lamellae imbibe water, rehydrating and allowing them to swell to their original thickness. This reversible dehydration and rehydration, shrinking and swelling, changes the thickness and spacing, which, in turn, changes the wavelength of the light that's reflected, thus "tuning" the color change over the entire visible spectrum.
"This effect of the condensation on the reflectins simultaneously increases the refractive index inside the lamellae," explained Morse. "Initially, before the proteins are consolidated, the refractive index -- you can think of it as the density -- inside the lamellae and outside, which is really the outside water environment, is the same. There's no optical difference so there's no reflection. But when the proteins consolidate, this increases the refractive index so the contrast between the inside and outside suddenly increases, causing the stack of lamellae to become reflective, while at the same time they dehydrate and shrink, which causes color changes. The animal can control the extent to which this happens -- it can pick the color -- and it's also reversible. The precision of this tuning by regulating the nanoscale dimensions of the lamellae is amazing."
Another paper by the same team of researchers, published in the Journal of the Royal Society Interface with optical physicist Amitabh Ghoshal as lead author, presents a mathematical analysis of the color change and confirms that the changes in refractive index correspond precisely to the measurements made with live cells.
A third paper, in press at the Journal of Experimental Biology, reports the team's discovery that female market squid show a set of stripes that can be brightly activated and may function during mating to allow the female to mimic the appearance of the male, thereby reducing the number of mating encounters and aggressive contacts from males. The most significant finding in this study is the discovery of a pair of stripes that switch from being completely transparent to bright white.
"This is the first time that switchable white cells based on the reflectin proteins have been discovered," Morse noted. "The facts that these cells are switchable by the neurotransmitter acetylcholine, that they contain some of the same reflectin proteins, and that the reflectins are induced to condense to increase the refractive index and trigger the change in reflectance all suggest that they operate by a molecular mechanism fundamentally related to that controlling the tunable color."
Could these findings one day have practical applications? "In telecommunications we're moving to more rapid communication carried by light," said Morse. "We already use optical cables and photonic switches in some of our telecommunications devices. The question is -- and it's a question at this point -- can we learn from these novel biophotonic mechanisms that have evolved over millions of years of natural selection new approaches to making tunable and switchable photonic materials to more efficiently encode, transmit, and decode information via light?"
Read more at Science Daily
Evolution On the Inside Track: How Viruses in Gut Bacteria Change Over Time
Humans are more than the sum of the cells that form their organs and tissues. The digestive tract is also home to a vast colony of bacteria of all varieties, as well as the myriad viruses that prey upon them. Because the types of bacteria carried inside the body vary from person to person, so does this viral population, known as the virome.
By closely following and analyzing the virome of one individual over two-and-a-half years, researchers from the Perelman School of Medicine at the University of Pennsylvania, led by professor of Microbiology Frederic D. Bushman, Ph.D., have uncovered some important new insights on how a viral population can change and evolve -- and why the virome of one person can vary so greatly from that of another. The evolution and variety of the virome can affect susceptibility and resistance to disease among individuals, along with variable effectiveness of drugs.
Their work was published in the Proceedings of the National Academy of Sciences.
Most of the virome consists of bacteriophages, viruses that infect bacteria rather than directly attacking their human hosts. However, the changes that bacteriophages wreak upon bacteria can also ultimately affect humans.
"Bacterial viruses are predators on bacteria, so they mold their populations," says Bushman. "Bacterial viruses also transport genes for toxins, virulence factors that modify the phenotype of their bacterial host." In this way, an innocent, benign bacterium living inside the body can be transformed by an invading virus into a dangerous threat.
At 16 time points over 884 days, Bushman and his team collected stool samples from a healthy male subject and extracted viral particles using several methods. They then isolated and analyzed DNA contigs (contiguous sequences) using ultra-deep genome sequencing.
"We assembled raw sequence data to yield complete and partial genomes and analyzed how they changed over two and a half years," Bushman explains. The result was the longest, most extensive picture of the workings of the human virome yet obtained.
The researchers found that while approximately 80 percent of the viral types identified remained mostly unchanged over the course of the study, certain viral species changed so substantially over time that, as Bushman notes, "You could say we observed speciation events."
This was particularly true in the Microviridae group, which are bacteriophages with single-stranded circular DNA genomes. Several genetic mechanisms drove the changes, including substitution of base chemicals; diversity-generating retroelements, in which reverse transcriptase enzymes introduce mutations into the genome; and CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), in which pieces of the DNA sequences of bacteriophages are incorporated as spacers in the genomes of bacteria.
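The CRISPR mechanism mentioned above works as a kind of genomic logbook: when a bacterium survives a phage infection, it can store a short snippet of the phage's DNA as a "spacer" in its own genome. Researchers can therefore look for bacterial spacers that reappear verbatim in phage contigs. The sketch below illustrates that matching idea with made-up sequences (all names and sequences are hypothetical, not data from the study):

```python
def find_spacer_hits(spacers, phage_genome):
    """Return the CRISPR spacers that occur verbatim in a phage genome --
    evidence that the bacterium's CRISPR array has recorded that phage."""
    return [s for s in spacers if s in phage_genome]

spacers = ["ACGTTGCA", "TTGACCGT", "GGGATCCA"]  # hypothetical spacer sequences
phage = "AATTGACCGTCCGGGATCCATT"                # hypothetical phage contig
print(find_spacer_hits(spacers, phage))         # ['TTGACCGT', 'GGGATCCA']
```

Real analyses allow for mismatches and search both DNA strands, but the core operation is this substring comparison between bacterial spacer libraries and assembled phage genomes.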
Such rapid evolution of the virome was perhaps the most surprising finding for the research team. Bushman notes that "different people have quite different bacteria in their guts, so the viral predators on those bacteria are also different. However, another reason people are so different from each other in terms of their virome, emphasized in this paper, is that some of the viruses, once inside a person, are changing really fast. So some of the viral community diversifies and becomes unique within each individual."
Read more at Science Daily
By closely following and analyzing the virome of one individual over two and a half years, researchers from the Perelman School of Medicine at the University of Pennsylvania, led by professor of Microbiology Frederic D. Bushman, Ph.D., have uncovered important new insights into how a viral population can change and evolve -- and why the virome of one person can vary so greatly from that of another. The evolution and variety of the virome can affect susceptibility and resistance to disease among individuals, as well as the variable effectiveness of drugs.
Their work was published in the Proceedings of the National Academy of Sciences.
Most of the virome consists of bacteriophages, viruses that infect bacteria rather than directly attacking their human hosts. However, the changes that bacteriophages wreak upon bacteria can also ultimately affect humans.
"Bacterial viruses are predators on bacteria, so they mold their populations," says Bushman. "Bacterial viruses also transport genes for toxins, virulence factors that modify the phenotype of their bacterial host." In this way, an innocent, benign bacterium living inside the body can be transformed by an invading virus into a dangerous threat.
At 16 time points over 884 days, Bushman and his team collected stool samples from a healthy male subject and extracted viral particles using several methods. They then isolated and analyzed DNA contigs (contiguous sequences) using ultra-deep genome sequencing.
"We assembled raw sequence data to yield complete and partial genomes and analyzed how they changed over two and a half years," Bushman explains. The result was the longest, most extensive picture of the workings of the human virome yet obtained.
The researchers found that while approximately 80 percent of the viral types identified remained mostly unchanged over the course of the study, certain viral species changed so substantially over time that, as Bushman notes, "You could say we observed speciation events."
This was particularly true in the Microviridae group, which are bacteriophages with single-stranded circular DNA genomes. Several genetic mechanisms drove the changes, including base (nucleotide) substitutions; diversity-generating retroelements, in which reverse transcriptase enzymes introduce mutations into the genome; and CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), in which pieces of bacteriophage DNA sequences are incorporated as spacers in the genomes of bacteria.
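The first of those mechanisms -- point substitutions accumulating in a viral genome between samples -- can be quantified with a simple per-site comparison. This is an illustrative sketch with invented sequences; the study's actual analysis of longitudinal sequence change was far more sophisticated:

```python
def substitution_fraction(seq_t0, seq_t1):
    """Fraction of aligned sites that differ between two time points.
    Assumes the two sequences are already aligned and of equal length."""
    if len(seq_t0) != len(seq_t1):
        raise ValueError("sequences must be pre-aligned to equal length")
    diffs = sum(1 for a, b in zip(seq_t0, seq_t1) if a != b)
    return diffs / len(seq_t0)

# The same (hypothetical) contig region sampled at two time points,
# with one substituted site out of ten:
print(substitution_fraction("ATGCGTACGT", "ATGCGTACGA"))  # 0.1
```

Tracking such fractions across the 16 sampling points is what lets divergence be measured over time; sustained accumulation of differences is what Bushman's "speciation events" remark refers to.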
Such rapid evolution of the virome was perhaps the most surprising finding for the research team. Bushman notes that "different people have quite different bacteria in their guts, so the viral predators on those bacteria are also different. However, another reason people are so different from each other in terms of their virome, emphasized in this paper, is that some of the viruses, once inside a person, are changing really fast. So some of the viral community diversifies and becomes unique within each individual."
Read more at Science Daily