A recent study of the impact of climate change on butterflies suggests that some species might adapt much better than others, with implications for the pollination and herbivory associated with these and other insect species.
The research, published in Ecological Entomology, examined changes in the life cycles of butterflies at different elevations of a mountain range in central Spain. The butterflies served as a model for some of the changes expected to come with warming temperatures, particularly in mountain landscapes.
The researchers found that butterfly species that already tend to emerge later in the year or fly higher in the mountains have evolved to cope with a shorter window of opportunity to reproduce, and as a result may fare worse in a warming climate than species that emerge over a longer period.
"Insects and plants are at the base of the food pyramid and are extremely important, but they often get less attention when we are studying the ecological impacts of climate change," said Javier G. Illan, with the Department of Forest Ecosystems and Society at Oregon State University.
"We're already expecting localized extinctions of about one third of butterfly species, so we need to understand how climate change will affect those that survive," he said. "This research makes it clear that some will do a lot better than others."
Butterflies may be particularly sensitive to a changing climate, Illan said, and make a good model to study the broader range of ecological effects linked to insects. Their flight dates are a relevant indicator of future responses to climate change.
Read more at Science Daily
Jun 2, 2012
Sierra Nevada 200-Year Megadroughts Confirmed
The erratic year-to-year swings in precipitation totals in the Reno-Tahoe area conjure up the word "drought" every couple of years, and this year is no exception. The Nevada State Climate Office at the University of Nevada, Reno, in conjunction with the Nevada Drought Response Committee, just announced a Stage 1 drought (moderate) for six counties and a Stage 2 drought (severe) for 11 counties.
Reno, Lake Tahoe and the Sierra Nevada are no strangers to drought, the most famous being the Medieval megadrought lasting from 800 to 1250 A.D., when annual precipitation was less than 60 percent of normal. The Reno-Tahoe region now stands at about 65 percent of normal annual precipitation for the year, which doesn't seem like much, but imagine if this were the "norm" each and every year for the next 200 years.
Research by scientists at the University of Nevada, Reno and their partners at Scripps Institution of Oceanography in San Diego indicates that there are other instances of such long-lasting, severe droughts in the western United States throughout history. Their recent paper, a culmination of a comprehensive high-tech assessment of Fallen Leaf Lake -- a small moraine-bound lake at the south end of the Lake Tahoe Basin -- reports that stands of pre-Medieval trees in the lake suggest the region experienced severe drought at least every 650 to 1,150 years during the mid- and late-Holocene period.
"Using an arsenal of cutting edge sonar tools, remotely operated vehicles (ROVs), and a manned submersible, we've obtained potentially the most accurate record thus far on the instances of 200-year-long droughts in the Sierra," Graham Kent, director of the Nevada Seismological Laboratory said. "The record from Fallen Leaf Lake confirms what was expected and is likely the most accurate record, in terms of precipitation, than obtained previously from a variety of methods throughout the Sierra."
Kent is part of the University of Nevada, Reno and Scripps research team that traced the megadroughts and dry spells of the region using tree-ring analysis, shoreline records and sediment deposition in Fallen Leaf Lake. Using side-scan and multibeam sonar technology developed to map underwater earthquake fault lines such as the West Tahoe fault beneath Fallen Leaf Lake, the team also imaged standing trees up to 130 feet beneath the lake surface as well as submerged ancient shoreline structure and development. The trees matured while the lake level was 130 to 200 feet below its modern elevation and were not deposited by a landslide, as had been suspected.
The team, led by John Kleppe, University of Nevada, Reno engineering professor emeritus, published a paper on this research and is presenting its findings in seminars and workshops.
"The lake is like a 'canary in a coal mine' for the Sierra, telling the story of precipitation very clearly," Kent said. "Fallen Leaf Lake elevations change rapidly due to its unique ratio between catchment basin and lake surface of about 8 to 1. With analysis of the standing trees submerged in the lake, sediment cores and our sonar scanning of ancient shorelines, we can more accurately and easily trace the precipitation history of the region."
Water balance calculations and analysis of tree-ring samples undertaken by Kleppe, Kent and Scripps scientists Danny Brothers and Neal Driscoll, along with Professor Franco Biondi of the University's College of Science, suggest annual precipitation was less than 60 percent of normal from the late 10th century to the early 13th century. Their research was documented in a scientific paper, "Duration and severity of Medieval drought in the Lake Tahoe Basin," published in Quaternary Science Reviews in November 2011.
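To see why that 8-to-1 catchment-to-lake ratio makes Fallen Leaf Lake such a sensitive rain gauge, consider a toy water-balance model. The sketch below (in Python) is illustrative only: the runoff coefficient, precipitation and evaporation numbers are assumed round values, not figures from the Kleppe team's calculations.

```python
# Toy annual water balance for a closed lake basin. Illustrative only:
# the runoff coefficient, precipitation and evaporation values are
# assumed round numbers, not figures from the published study.

CATCHMENT_TO_LAKE_RATIO = 8.0  # the ~8:1 ratio cited for Fallen Leaf Lake
RUNOFF_COEFF = 0.2             # fraction of catchment precip reaching the lake (assumed)
NORMAL_PRECIP_M = 0.8          # normal annual precipitation in meters (assumed)
LAKE_EVAP_M = 1.4              # annual open-water evaporation in meters (assumed)

def annual_level_change_m(precip_fraction):
    """Net lake-level change (m/yr) at a given fraction of normal precipitation,
    ignoring outflow (valid once the lake has dropped below its outlet)."""
    p = precip_fraction * NORMAL_PRECIP_M
    inflow = p + CATCHMENT_TO_LAKE_RATIO * RUNOFF_COEFF * p  # on-lake precip + runoff
    return inflow - LAKE_EVAP_M

for frac in (1.0, 0.6):
    print(f"{frac:.0%} of normal precipitation -> {annual_level_change_m(frac):+.2f} m/yr")

# With these numbers a normal year runs a surplus (spilled via the outlet),
# but 60 percent of normal flips the balance to a deficit of ~0.15 m/yr.
# Sustained for two centuries, that is a drop of more than 100 feet -- low
# enough for trees to take root on what is now the lake floor.
```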
Tree-ring records and submerged paleoshoreline geomorphology suggest a Medieval low-lake level of Fallen Leaf Lake lasted more than 220 years. More than 80 trees were found lying on the lake floor at various elevations above the paleoshoreline.
"Although the ancient cycle of megadroughts seems to occur every 650 to 1150 years and the last one was 750 years ago, it is uncertain when the next megadrought will occur. With climate change upon us, it will be interesting to see how carbon dioxide loading in the atmosphere will affect this cycle," Kent said.
Read more at Science Daily
Jun 1, 2012
Astronomers Discover Faintest Distant Galaxy
Astronomers at Arizona State University have found an exceptionally distant galaxy, ranked among the top 10 most distant objects currently known in space. Light from the recently detected galaxy left the object about 800 million years after the beginning of the universe, when the universe was in its infancy.
A team of astronomers, led by James Rhoads, Sangeeta Malhotra, and Pascale Hibon of the School of Earth and Space Exploration at ASU, identified the remote galaxy after scanning a moon-sized patch of sky with the IMACS instrument on the Magellan Telescopes at the Carnegie Institution's Las Campanas Observatory in Chile.
The observational data reveal a faint infant galaxy, located 13 billion light-years away. "This galaxy is being observed at a young age. We are seeing it as it was in the very distant past, when the universe was a mere 800 million years old," says Rhoads, an associate professor in the school. "This image is like a baby picture of this galaxy, taken when the universe was only 5 percent of its current age. Studying these very early galaxies is important because it helps us understand how galaxies form and grow."
The galaxy, designated LAEJ095950.99+021219.1, was first spotted in summer 2011. The find is a rare example of a galaxy from that early epoch and will help astronomers make progress in understanding the process of galaxy formation. The find was enabled by the combination of the Magellan telescopes' tremendous light-gathering capability and exquisite image quality, thanks to the mirrors built at Arizona's Steward Observatory, and by the unique ability of the IMACS instrument to obtain either images or spectra across a very wide field of view. The research, published in the June 1 issue of The Astrophysical Journal Letters, was supported by the National Science Foundation (NSF).
This galaxy, like the others that Malhotra, Rhoads, and their team seek, is extremely faint and was detected by the light emitted by ionized hydrogen. The object was first identified as a candidate early-universe galaxy in a paper led by team member and former ASU postdoctoral researcher Hibon. The search employed a technique the team pioneered that uses special narrow-band filters to pass only a small range of wavelengths of light.
A special filter fitted to the telescope camera was designed to catch light of narrow wavelength ranges, allowing the astronomers to conduct a very sensitive search in the infrared wavelength range. "We have been using this technique since 1998 and pushing it to ever-greater distances and sensitivities in our search for the first galaxies at the edge of the universe," says Malhotra, an associate professor in the school. "Young galaxies must be observed at infrared wavelengths and this is not easy to do using ground-based telescopes, since the Earth's atmosphere itself glows and large detectors are hard to make."
To detect these very distant objects, which were forming near the beginning of the universe, astronomers look for sources with very high redshifts. Astronomers refer to an object's distance by a number called its "redshift," which describes how much its light has been stretched to longer, redder wavelengths by the expansion of the universe. Objects with larger redshifts are farther away and are seen further back in time. LAEJ095950.99+021219.1 has a redshift of 7. Only a handful of galaxies have confirmed redshifts greater than 7, and none of the others is as faint as LAEJ095950.99+021219.1.
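The arithmetic behind the narrow-band strategy is simple: redshift multiplies every emitted wavelength by (1 + z). A minimal sketch, assuming the search targets hydrogen's Lyman-alpha line (rest wavelength about 121.6 nanometers, consistent with the "LAE" in the galaxy's designation), shows why a redshift-7 source must be hunted in the infrared:

```python
# Redshift stretches light by a factor of (1 + z):
#   lambda_observed = (1 + z) * lambda_emitted
# A minimal sketch, assuming the search targets hydrogen's Lyman-alpha line.

LYMAN_ALPHA_REST_NM = 121.567  # rest-frame Lyman-alpha wavelength in nanometers

def observed_wavelength_nm(z, rest_nm=LYMAN_ALPHA_REST_NM):
    """Observed wavelength for a line emitted at rest_nm by a source at redshift z."""
    return (1.0 + z) * rest_nm

for z in (4.5, 6.5, 7.0):  # the redshift steps mentioned by Rhoads
    print(f"z = {z}: Lyman-alpha observed near {observed_wavelength_nm(z):.0f} nm")

# z = 7 lands the line near 970 nm, past the visible band (~400-700 nm) and
# into the near-infrared -- exactly where the atmosphere's own glow makes
# ground-based searches difficult, as Malhotra notes above.
```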
"We have used this search to find hundreds of objects at somewhat smaller distances. We have found several hundred galaxies at redshift 4.5, several at redshift 6.5, and now at redshift 7 we have found one," explains Rhoads. "We've pushed the experiment's design to a redshift of 7 -- it's the most distant we can do with well-established, mature technology, and it's about the most distant where people have been finding objects successfully up to now."
Malhotra adds, "With this search, we've not only found one of the furthest galaxies known, but also the faintest confirmed at that distance. Up to now, the redshift 7 galaxies we know about are literally the top one percent of galaxies. What we're doing here is to start examining some of the fainter ones -- things that may better represent the other 99 percent."
Resolving the details of objects that are far away is challenging, which is why images of distant young galaxies such as this one appear small, faint, and blurry.
"As time goes by, these small blobs which are forming stars, they'll dance around each other, merge with each other and form bigger and bigger galaxies. Somewhere halfway through the age of the universe they start looking like the galaxies we see today -- and not before. Why, how, when, where that happens is a fairly active area of research," explains Malhotra.
Read more at Science Daily
Even Early Human Hands Left Prominent Ecological Footprints
Early human activity has left a greater footprint on today's ecosystem than previously thought, say researchers working at the University of Pittsburgh and in the multidisciplinary Long Term Ecological Research (LTER) Network, created by the National Science Foundation to conduct long-timescale research on ecological issues that span huge geographical areas. Highlighted in the June issue of BioScience, the Pitt/LTER collaboration shows how historic human actions caused changes in nature that continue to reverberate throughout present-day ecosystems.
In the article, researchers take a retrospective look at the impact of human activity on LTER Network sites spanning states from Georgia to New Hampshire and propose methods for measuring the effects of such activity. The study of legacy effects is important because it provides insights into how today's actions can affect tomorrow's ecological systems, says Daniel Bain, co-principal investigator at the Baltimore Ecosystem Study LTER Network site and an assistant professor in the Department of Geology and Planetary Science in Pitt's Kenneth P. Dietrich School of Arts and Sciences. Bain notes that decision makers at all levels, including those creating policy, need historical information about ecosystems to make more effective environmental policies. In a democracy, says Bain, a diverse group of stakeholders -- such as outdoor enthusiasts like Trout Unlimited, fiscal watchdog groups such as Common Cause, and individual landowners -- needs this kind of data to effectively engage in the management of common resources.
"Increasingly, we propose to manage our ecosystems with sophisticated and complicated strategies," Bain says. "For example, we are attempting to manage agricultural runoff by changing how streams and floodplains are arranged. However, while designing these strategies, we tend to address the most recent impacts rather than the entire history of impacts. This can lead to wasted effort and misuse of relatively limited resources."
Legacy effects from human activities are all around us, says Bain, but few people ever give them a thought. For example, urban systems accumulate a lot of human-made materials, some of which have large ecological footprints and will ultimately leave a legacy. Bain cites the example of lead, which has been banned from gasoline and paint in the United States for several decades but can remain in soils for much longer periods of time. "We should be careful about growing food close to roads or near old houses," he cautions.
In agriculture, areas that were plowed hundreds of years ago react differently to contemporary acid deposition from air pollutants when compared with adjacent unplowed areas. Similarly, our extensive use of cement may add substantial amounts of calcium to urban soils, although the ecological impact of this practice is not yet fully understood, Bain adds.
Indeed, many landscapes that provide baseline ecological data for evaluating environmental change were structured in part by previous human interactions, such as settlements and agricultural practices. To make sense of the observed ecological patterns on such landscapes, Bain says, we must know something of the history of the processes acting to shape those patterns. A recent example of the need for historical data associated with the impact of humans is the debate over global warming and its associated climate change -- the legacy of increased emissions of carbon dioxide and other greenhouse gases over millennia, but hugely accelerated since the industrial revolution and, especially, over the past several decades.
Bain points out that without a systematic collection of data recorded by the LTER Network, the broader geographical patterns of legacy effects would be much more difficult to detect. For example, scientists have discovered that recently glaciated areas have much less dirt accumulation than unglaciated areas. When Europeans first arrived in the eastern United States and dramatically changed local agricultural practices, eroded soil ultimately found its way into waterways. However, the glaciated areas produced less dirt, leaving less of an erosional signal in contrast to unglaciated areas, which lost more dirt and left such erosional legacies as buried valley bottoms and filled harbors. "In terms of policy, the management of glaciated and unglaciated areas requires different approaches," Bain says.
Nevertheless, Bain says, "although LTER sites have decades of data to draw from, we do not necessarily capture these changes, even with our best multidecade studies. It's hard to know what we might have been able to understand now had the LTER Network been established six or nine decades ago instead of three."
Read more at Science Daily
Bat, Bee, Frog Deaths May Be Linked
In recent years, diseases have ravaged bat, honeybee and amphibian populations, and now animal experts suspect that shared factors may link the deaths, which are putting many species at risk of extinction.
The latest setback affects bats, given this week's announcement that the deadly fungal disease known as white-nose syndrome has been confirmed in already endangered gray bats. The illness, caused by the fungus Geomyces destructans, has mortality rates of up to 100 percent at some sites.
Simultaneously, Colony Collapse Disorder continues to kill honeybees, while yet another fungus, Batrachochytrium dendrobatidis, has wiped out more than 200 frog species across the world.
"It appears that many species are under an immense amount of stress, allowing opportunistic diseases to take hold," Rob Mies, executive director of the Organization for Bat Conservation, told Discovery News. "Life is far more complex, so a single cause is likely not the only explanation for the bat, bee and frog deaths. There could be five, six or more factors involved."
One factor is the way humans may be helping the fungus spread. According to the U.S. Fish & Wildlife Service, white-nose syndrome can be inadvertently transferred from people to bats.
"Some of the first caves in North America to be affected by white nose syndrome were in very high tourism areas," Mies said. "Somebody could have visited a cave in Europe wearing boots, and then brought back a tiny bit of mud on the boots containing dormant fungus."
He explained that the fungus, which is sensitive to body warmth, does not infect humans and most other animals. Bats experience a lower body temperature while hibernating, when the fungus can set in.
"It may eat into a bat's skin, even putting holes in it," Mies said. "The fungus can grow to a point where it winds up replacing the skin."
The amphibian fungus also attacks through the skin, causing an infected frog's skin to become up to 40 times thicker than usual, according to San Francisco State University biologist Vance Vredenburg, who recently conducted a study on the related disease, known as chytrid. Since frogs use their skin to absorb water and vital salts, such as sodium and potassium, infection often leads to death.
Other human factors tied to the bat, frog and bee deaths include the use of chemical pesticides that may be absorbed through the skin, climate change, habitat loss and the spread of other health threats, such as viruses and mites.
Helene Marshall of Marshall's Farm Natural Honey told Discovery News that "the virus causing CCD came to us when U.S. beekeepers were importing Australian packaged bees to meet the high pollination demand of the almond growers here in California."
Both bees and bats are critical to agriculture. Bats, like bees, can help to pollinate. They are also a primary predator of agricultural and other insect pests, such as mosquitoes. Frogs additionally consume insect pests.
The U.S. Fish & Wildlife Service now has a national plan for managing white-nose syndrome in bats. It allows for diagnostics, disease management, disease surveillance and more. But Mies points out that for animals like bats and frogs, antifungals can be "pretty nasty medicines," doing damage of their own and perhaps further damaging ecosystems.
Read more at Discovery News
Lip Smacks of Monkeys Prelude to Speech?
Monkeys smack their lips during friendly face-to-face encounters, and now a new study says that this seemingly simple behavior may be tied to human speech.
Previously experts thought the evolutionary origins of human speech came from primate vocalizations, such as chimpanzee hoots or monkey coos. But now scientists suspect that rapid, controlled movements of the tongue, lips and jaw -- all of which are needed for lip smacking -- were more important to the emergence of speech.
For the study, published in the latest Current Biology, W. Tecumseh Fitch and colleagues used x-ray movies to investigate lip-smacking gestures in macaque monkeys. Mother monkeys do this a lot with their infants, so it seems to be kind of an endearing thing, perhaps like humans going goo-goo-goo in a baby's face while playing. (Monkeys will also vibrate their lips to make a raspberry sound.)
Monkey lip-smacking, however, makes a quiet sound, similar to "p p p p". It's not accompanied by phonation, meaning sound produced by vocal cord vibration in the larynx.
Fitch, who is head of the Department of Cognitive Biology at the University of Vienna, and his team determined that lip-smacking is a complex behavior that requires rapid, coordinated movements of the lips, jaw, tongue and the hyoid bone (which provides the supporting skeleton for the larynx and tongue).
The smacks occur at a rate of about 5 cycles per second, and that's the clincher. It's the same rate as average-speed human speech, and much faster than chewing movements (about 2.5 cycles per second).
Read more at Discovery News
May 31, 2012
NASA Preparing to Launch Its Newest X-Ray Eyes
NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR, is being prepared for the final journey to its launch pad on Kwajalein Atoll in the central Pacific Ocean. The mission will study everything from massive black holes to our own sun. It is scheduled to launch no earlier than June 13.
"We will see the hottest, densest and most energetic objects with a fundamentally new, high-energy X-ray telescope that can obtain much deeper and crisper images than before," said Fiona Harrison, the NuSTAR principal investigator at the California Institute of Technology in Pasadena, Calif., who first conceived of the mission 20 years ago.
The observatory is perched atop an Orbital Sciences Corporation Pegasus XL rocket. If the mission passes its Flight Readiness Review on June 1, the rocket will be strapped to the bottom of an aircraft, the L-1011 Stargazer, also operated by Orbital, on June 2. The Stargazer is scheduled to fly from Vandenberg Air Force Base in central California to Kwajalein on June 5 to 6.
After taking off on launch day, the Stargazer will drop the rocket around 8:30 a.m. PDT (11:30 a.m. EDT). The rocket will then ignite and carry NuSTAR to a low orbit around Earth.
"NuSTAR uses several innovations for its unprecedented imaging capability and was made possible by many partners," said Yunjin Kim, the project manager for the mission at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "We're all really excited to see the fruition of our work begin its mission in space."
NuSTAR will be the first space telescope to create focused images of cosmic X-rays with the highest energies. These are the same types of X-rays that doctors use to see your bones and airports use to scan your bags. The telescope will have more than 10 times the resolution and more than 100 times the sensitivity of its predecessors while operating in a similar energy range.
The mission will work with other telescopes in space now, including NASA's Chandra X-ray Observatory, which observes lower-energy X-rays. Together, they will provide a more complete picture of the most energetic and exotic objects in space, such as black holes, dead stars and jets traveling near the speed of light.
"NuSTAR truly demonstrates the value that NASA's research and development programs provide in advancing the nation's science agenda," said Paul Hertz, NASA's Astrophysics Division director. "Taking just over four years from receiving the project go-ahead to launch, this low-cost Explorer mission will use new mirror and detector technology that was developed in NASA's basic research program and tested in NASA's scientific ballooning program. The result of these modest investments is a small space telescope that will provide world-class science in an important but relatively unexplored band of the electromagnetic spectrum."
NuSTAR will study black holes that are big and small, far and near, answering questions about the formation and physics behind these wonders of the cosmos. The observatory will also investigate how exploding stars forge the elements that make up planets and people, and it will even study our own sun's atmosphere.
The observatory is able to focus the high-energy X-ray light into sharp images because of a complex, innovative telescope design. High-energy light is difficult to focus because it only reflects off mirrors when hitting at nearly parallel angles. NuSTAR solves this problem with nested shells of mirrors. It has the most nested shells ever used in a space telescope: 133 in each of two optic units. The mirrors were molded from ultra-thin glass similar to that found in laptop screens and glazed with even thinner layers of reflective coating.
The telescope also includes state-of-the-art detectors and a 33-foot (10-meter) mast, which connects the detectors to the nested mirrors, providing the long distance required to focus the X-rays. This mast is folded up into a canister small enough to fit atop the Pegasus launch vehicle. It will unfurl about seven days after launch. About 23 days later, science operations will begin.
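A rough geometry sketch shows just how shallow those "nearly parallel" angles are. For a double-reflection (Wolter-type) design, the grazing angle is roughly the shell radius divided by four times the focal length; the shell radii used below are illustrative guesses, with only the 10-meter focal length drawn from the description above.

```python
import math

# Grazing-incidence sketch: in a Wolter-type (double-reflection) X-ray
# telescope, a mirror shell of radius r deflects on-axis rays by roughly
# 4 * theta, so the grazing angle is approximately
#   theta ~ r / (4 * focal_length)
# The shell radii below are illustrative guesses; only the 10-meter focal
# length comes from the mast description above.

FOCAL_LENGTH_M = 10.0

def grazing_angle_deg(shell_radius_m, focal_length_m=FOCAL_LENGTH_M):
    """Approximate grazing angle in degrees for a double-reflection shell."""
    return math.degrees(shell_radius_m / (4.0 * focal_length_m))

for r_cm in (5.0, 12.0, 19.0):  # inner, middle, outer shells (assumed radii)
    print(f"shell radius {r_cm:4.1f} cm -> grazing angle ~{grazing_angle_deg(r_cm / 100.0):.2f} deg")

# Every shell reflects at a fraction of a degree, which is why 133 thin
# nested shells per optic are needed to build up a useful collecting area.
```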
Read more at Science Daily
"We will see the hottest, densest and most energetic objects with a fundamentally new, high-energy X-ray telescope that can obtain much deeper and crisper images than before," said Fiona Harrison, the NuSTAR principal investigator at the California Institute of Technology in Pasadena, Calif., who first conceived of the mission 20 years ago.
The observatory is perched atop an Orbital Sciences Corporation Pegasus XL rocket. If the mission passes its Flight Readiness Review on June 1, the rocket will be strapped to the bottom of an aircraft, the L-1011 Stargazer, also operated by Orbital, on June 2. The Stargazer is scheduled to fly from Vandenberg Air Force Base in central California to Kwajalein on June 5 to 6.
After taking off on launch day, the Stargazer will drop the rocket around 8:30 a.m. PDT (11:30 a.m. EDT). The rocket will then ignite and carry NuSTAR to a low orbit around Earth.
"NuSTAR uses several innovations for its unprecedented imaging capability and was made possible by many partners," said Yunjin Kim, the project manager for the mission at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "We're all really excited to see the fruition of our work begin its mission in space."
NuSTAR will be the first space telescope to create focused images of cosmic X-rays with the highest energies. These are the same types of X-rays that doctors use to see your bones and airports use to scan your bags. The telescope will have more than 10 times the resolution and more than 100 times the sensitivity of its predecessors while operating in a similar energy range.
The mission will work with other telescopes in space now, including NASA's Chandra X-ray Observatory, which observes lower-energy X-rays. Together, they will provide a more complete picture of the most energetic and exotic objects in space, such as black holes, dead stars and jets traveling near the speed of light.
"NuSTAR truly demonstrates the value that NASA's research and development programs provide in advancing the nation's science agenda," said Paul Hertz, NASA's Astrophysics Division director. "Taking just over four years from receiving the project go-ahead to launch, this low-cost Explorer mission will use new mirror and detector technology that was developed in NASA's basic research program and tested in NASA's scientific ballooning program. The result of these modest investments is a small space telescope that will provide world-class science in an important but relatively unexplored band of the electromagnetic spectrum."
NuSTAR will study black holes that are big and small, far and near, answering questions about the formation and physics behind these wonders of the cosmos. The observatory will also investigate how exploding stars forge the elements that make up planets and people, and it will even study our own sun's atmosphere.
The observatory is able to focus the high-energy X-ray light into sharp images because of a complex, innovative telescope design. High-energy light is difficult to focus because it only reflects off mirrors when hitting at nearly parallel angles. NuSTAR solves this problem with nested shells of mirrors. It has the most nested shells ever used in a space telescope: 133 in each of two optic units. The mirrors were molded from ultra-thin glass similar to that found in laptop screens and glazed with even thinner layers of reflective coating.
The telescope also consists of state-of-the-art detectors and a lengthy 33-foot (10-meter) mast, which connects the detectors to the nested mirrors, providing the long distance required to focus the X-rays. This mast is folded up into a canister small enough to fit atop the Pegasus launch vehicle. It will unfurl about seven days after launch. About 23 days later, science operations will begin.
Read more at Science Daily
Mystery of Monarch Butterfly Migration Takes New Turn
During the fall, hundreds of millions of monarch butterflies living in eastern North America fly up to 1,500 miles to the volcanic forests of Mexico to spend the winter, while monarchs west of the Rocky Mountains fly to the California coast. The phenomenon is both spectacular and mysterious: How do the insects learn these particular routes and why do they stick to them?
A prevailing theory contends that eastern and western monarchs are genetically distinct, and that genetic mechanisms trigger their divergent migratory paths.
An analysis led by Emory University biologists, however, finds that the two groups of monarchs are genetically mixed. Their research, published in the journal Molecular Ecology, suggests that environmental factors may be the key to the butterflies' choice of winter homes, and to where they wind up in the spring.
"Our data gives the strongest signal yet that the eastern and western monarchs belong to a single genetic population," says Emory biologist Jaap de Roode, who led the research. "This distinction is important to help us better understand the behavior of the organism, and to conserve the monarch flyways."
In addition to researchers in the de Roode lab, the study involved a scientist from the Institute of Integrative Biology in Zurich, Switzerland.
Biologists have long been fascinated by the innate and learned behaviors underlying animal migrations. When monarchs are breeding, for instance, they can live up to four weeks, but when they are migrating, they can live as long as six months.
"As the day length gets shorter, their sexual organs do not fully mature and they don't put energy into reproduction. That enables them to fly long distances to warmer zones, and survive the winter," de Roode says. "It's one of the basic lessons in biology: Reproduction is very costly, and if you don't use it, you can live much longer."
Mass movements of animals have huge ecological impacts. They are also visually arresting, from the spectacle of giant herds of wildebeest trekking across the Serengeti to hundreds of thousands of sandhill cranes flocking along the banks of Nebraska's Platte River.
In the case of long-lived mammals and birds, the younger animals may learn some of the behaviors associated with migration. That's not the case with the monarchs, notes Amanda Pierce, a graduate student in Emory's Population Biology, Ecology and Evolution program, and a co-author of the study.
"We know there is no learning component for the butterflies, because each migration is separated by two to three generations," Pierce says. "To me, that makes the problem even more interesting. How can these small, delicate animals travel thousands of kilometers and arrive at the same destination as their great-great grandparents?"
The question of whether eastern and western monarchs are genetically the same has been hotly debated, and may be an essential piece to the puzzle of their divergent migration patterns.
The researchers used 11 genetic markers to compare the genetic structures of eastern and western monarchs, as well as non-migratory monarch populations in Hawaii and New Zealand. The results showed extensive gene flow between the eastern and western monarchs, and a genetic divergence between these North American butterflies and those from Hawaii and New Zealand.
"In a sense, the genetic markers provide a DNA 'fingerprint' for the butterflies," de Roode says. "Just by looking at this fingerprint, you can easily separate the butterflies of North America from those in Hawaii and New Zealand, but you can't tell the difference between the eastern and western monarchs."
The Emory researchers have now joined a project headed by Harvard, which also involves the University of Georgia and the University of Massachusetts, to sequence the full genomes of monarch butterflies from places around the world. That data should either rule out genetic differences between the eastern and western monarchs or reveal whether smaller genetic differences, beyond the 11 markers used in the study, are at play between the two groups.
The idea that eastern and western monarchs are distinct populations has been bolstered by tagging-and-tracking efforts based in the United States. That data, gathered through citizen science, indicates that the butterflies stay on separate sides of the Rocky Mountains -- a formidable high-altitude barrier.
De Roode, however, theorizes that when spring signals the eastern monarchs to leave the overwintering grounds in Mexico, they may simply keep radiating out, reproducing and expanding as long as they find milkweed plants, the food for their caterpillars.
"Few people have tagged the monarchs within Mexico to see where they go," he says, "because Mexico doesn't have as much citizen science as the U.S."
If the theory is correct, some of the monarchs leaving Mexico each spring may wind up in western North America, while others may filter into the eastern United States. This influx to the western U.S. could be crucial to survival of monarchs on that side of the continental divide.
"There are far fewer monarchs west of the Rockies," de Roode says. He notes that all of the overwintering monarchs on a typical overwintering site along the California coast consist of about the same number clustered onto a single big tree in Mexico's Monarch Butterfly Biosphere Reserve, where hundreds of millions of monarchs blanket the landscape in the winter.
The monarch butterfly migration has been called an endangered phenomenon, due to the loss of habitat along the routes. The Mexican overwintering sites, located in the Trans-Mexican Volcanic Belt region northwest of Mexico City, particularly suffer from deforestation. Drug trafficking in the region has decimated eco-tourism and hampered efforts to protect the trees.
"We hope our research can aid in the conservation of the monarch flyways," de Roode says.
Raising monarchs for release at weddings, memorials and other events is a growing industry, but U.S. Department of Agriculture regulations restrict shipping the butterflies across state lines.
Read more at Science Daily
Modern Birds Are Really Baby Dinosaurs
Modern birds retain the physical characteristics of baby dinosaurs, according to a new Nature study that found birds are even more closely related to dinos than previously thought.
Depending on the non-avian dinosaur and bird compared, that might be hard to believe. A toothy, angry reconstruction of Tyrannosaurus rex, for example, on first glance looks little like a common garden blue jay.
When researchers go beyond the surface to the tissue and skull levels, however, the similarities become more obvious.
Harvard University's Arkhat Abzhanov, associate professor of organismic and evolutionary biology, and Bhart-Anjan Bhullar, a Ph.D. student in Abzhanov's laboratory and the first author of the study, did just that and found evidence that the evolution of birds is the result of a drastic change in how dinosaurs developed. Rather than take years to reach sexual maturity, as many dinosaurs did, birds sped up the clock (some species take as little as 12 weeks to mature), allowing them to lock into their baby dinosaur look.
"What is interesting about this research is the way it illustrates evolution as a developmental phenomenon," Abzhanov was quoted as saying in a press release. "By changing the developmental biology in early species, nature has produced the modern bird –- an entirely new creature –- and one that, with approximately 10,000 species, is today the most successful group of land vertebrates on the planet."
"The evolution of the many characteristics of birds –- things like feathers, flight, and wishbones -– has traditionally been a difficult problem for biologists," Mark Norell, chair of the division of paleontology at the American Museum of Natural History and one of the paper's co-authors, added.
"By analyzing fossil evidence from skeletons, eggs, and soft tissue of bird-like dinosaurs and primitive birds, we've learned that birds are living theropod dinosaurs, a group of carnivorous animals that include Velociraptor," Norell continued. "This new work advances our knowledge by providing a powerful example of how developmental changes played a major role in the origin and evolution of birds."
Read more at Discovery News
Milky Way Doomed to Crash with Andromeda
Four billion years from now, the Milky Way galaxy as we know it will cease to exist.
Our Milky Way is bound for a head-on collision with the similar-sized Andromeda galaxy, researchers announced today (May 31). Over time, the huge galactic smashup will create an entirely new hybrid galaxy, one likely bearing an elliptical shape rather than the Milky Way's trademark spiral-armed disk.
"We do know of other galaxies in the local universe around us that are in the process of colliding and merging," Roeland van der Marel, of the Space Telescope Science Institute in Baltimore, told reporters today. "However, what makes the future merger of the Andromeda galaxy and the Milky Way so special is that it will happen to us."
Astronomers have long known that the Milky Way and Andromeda, which is also known as M31, are barrelling toward one another at a speed of about 250,000 mph (400,000 kph). They have also long suspected that the two galaxies may slam into each other billions of years down the road.
However, such discussions of the future galactic crash have always remained somewhat speculative, because no one had managed to measure Andromeda's sideways motion — a key component of that galaxy's path through space.
But that's no longer the case.
Van der Marel and his colleagues used NASA's Hubble Space Telescope to repeatedly observe select regions of Andromeda over a seven-year period. They were able to measure the galaxy's sideways (or tangential) motion, and they found that Andromeda and the Milky Way are indeed bound for a direct hit.
"The Andromeda galaxy is heading straight in our direction," van der Marel said. "The galaxies will collide, and they will merge together to form one new galaxy." He and his colleagues also created a video simulation of the Milky Way crash into Andromeda.
That merger, van der Marel added, begins in 4 billion years and will be complete by about 6 billion years from now.
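That timeline is easy to sanity-check. A back-of-envelope calculation, assuming the commonly cited distance to Andromeda of about 2.5 million light-years (a figure not given in the article), shows why the clock runs on billions of years:

# Constant-speed travel time for Andromeda to close the gap. The distance
# is a commonly cited figure assumed here, not taken from the article.
LIGHT_YEAR_KM = 9.461e12
distance_km = 2.5e6 * LIGHT_YEAR_KM          # ~2.4e19 km to Andromeda
speed_km_per_h = 400_000                     # ~250,000 mph, per the article

years = distance_km / speed_km_per_h / (24 * 365.25)
print(f"{years / 1e9:.1f} billion years")    # -> ~6.7 billion years

The naive constant-speed answer overshoots the simulations' 4-billion-year figure because gravity accelerates the two galaxies as they close in on each other.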
A future cosmic crash
Such a dramatic event has never occurred in the long history of our Milky Way, which likely began taking shape about 13.5 billion years ago.
"The Milky Way has had, probably, quite a lot of small, minor mergers," said Rosemary Wyse of Johns Hopkins University in Baltimore, who was not affiliated with the new study. "But this major merger will be unprecedented."
The merger poses no real danger of destroying Earth or our solar system, researchers said. The stretches of empty space separating the stars in the two galaxies will remain vast, making any collisions or serious perturbations unlikely.
However, our solar system will likely get booted out to a different position in the new galaxy, which some astronomers have dubbed the "Milkomeda galaxy." Simulations show that we'll probably occupy a spot much farther from the galactic core than we do today, researchers said.
A new night sky
And the collision will change our night sky dramatically. If any humans are still around 3.75 billion years from now, they'll see Andromeda fill their field of view as it sidles up next to our own Milky Way. For the next few billion years after that, stargazers will be spellbound by the merger, which will trigger intense bouts of star formation.
Finally, by about 7 billion years from now, the bright core of the elliptical Milkomeda galaxy will dominate the night sky, researchers said. (The odds of viewing this sight, at least from Earth, are pretty slim, since the sun is predicted to bloat into a huge red giant 5 or 6 billion years from now.)
Read more at Discovery News
May 30, 2012
Tiny Genetic Variations Led to Big Changes in the Evolving Human Brain
Changes to just three genetic letters among billions contributed to the evolution and development of the mammalian motor sensory circuits and laid the groundwork for the defining characteristics of the human brain, Yale University researchers report.
In a study published in the May 31 issue of the journal Nature, Yale researchers found that a small, simple change in the mammalian genome was critical to the evolution of the corticospinal neural circuits. This circuitry directly connects the cerebral cortex, the conscious part of the human brain, with the brainstem and the spinal cord to make possible the fine, skilled movements necessary for functions such as tool use and speech. The evolutionary mechanisms that drive the formation of the corticospinal circuit, which is a mammalian-specific advance, had remained largely mysterious.
"What we found is a small genetic element that is part of the gene regulatory network directing neurons in the cerebral cortex to form the motor sensory circuits," said Nenad Sestan, professor of neurobiology, researcher for the Kavli Institute for Neuroscience, and senior author of the paper.
Most mammalian genomes contain approximately 22,000 protein-encoding genes. The critical drivers of evolution and development, however, are thought to reside in the non-coding portions of the genome that regulate when and where genes are active. These so-called cis-regulatory elements control the activation of genes that carry out the formation of basic body plans in all organisms.
Sungbo Shim, the first author, and other members of Sestan's lab identified one such regulatory DNA region they named E4, which specifically drives the development of the corticospinal system by controlling the dynamic activity of a gene called Fezf2 -- which, in turn, directs the formation of the corticospinal circuits. E4 is conserved in all mammals but divergent in other craniates, suggesting that it is important to both the emergence and survival of mammalian species. The species differences within E4 are tiny, but crucially drive the regulation of E4 activity by a group of regulatory proteins, or transcription factors, that include SOX4, SOX11, and SOX5. In cooperation, they control the dynamic activation and repression of E4 to shape the development of the corticospinal circuits in the developing embryo.
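A claim like "conserved in all mammals but divergent in other craniates" rests on scoring alignment columns across species. The toy sketch below shows that operation on invented stand-in sequences; the real E4 element is far longer and is not reproduced here.

# Per-column identity over a toy multi-species alignment. Sequences are
# invented placeholders, not the actual E4 element.
alignment = {
    "human":   "ACGTTGCAAT",
    "mouse":   "ACGTTGCAAT",
    "opossum": "ACGTTGCAAC",
    "chicken": "ACTATGGAAC",   # non-mammal: noticeably divergent
}

def mean_identity(seqs):
    # Average fraction of sequences agreeing with the commonest base per column.
    scores = [max(col.count(b) for b in set(col)) / len(col)
              for col in zip(*seqs)]
    return sum(scores) / len(scores)

mammals = [alignment[s] for s in ("human", "mouse", "opossum")]
print(round(mean_identity(mammals), 2))                   # 0.97: near-perfect in mammals
print(round(mean_identity(list(alignment.values())), 2))  # 0.88: drops once chicken is added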
Read more at Science Daily
Can a 16-year-old Be a Math Genius?
There's a story in math circles that a famous Hungarian mathematician, Paul Erdos, used to speak of the Riemann hypothesis, a famous unsolved problem, saying that all infants were probably born knowing how to prove it, but then forgot the proof by the time they learned to talk.
While babies working out mathematical proofs may be a bit of a stretch, the news that a 16-year-old may have solved a 350-year-old problem (whether true or not) raises the question of just how differently young people approach math compared with adults.
Edward Frenkel, a professor of mathematics at the University of California, Berkeley, and currently Eilenberg Visiting Professor at Columbia University, is now 44. But by the age of 21, he had already been invited to be a visiting professor at Harvard.
"When you're young, you're fast and quick and able to solve things quickly ... but you may not be able to see how the different pieces of mathematics fit together," he said. "That comes with experience."
In the last few years, he's been working with the Langlands Program (a system of advanced mathematical ideas developed by Robert Langlands in the late 1960s with the goal of tying together different areas of mathematics). When he was younger, he said, he couldn't possibly have seen the big picture in that way.
"Mathematics is not like running the 100-meter dash," Frenkel said. "It's more like a marathon."
Still, he says "there's something to be said for not being spoiled by the traditional educational system when you're young." What is essential, he believes, is the right mentorship. "Math is so abstract, you need guidance. You need someone to open the door."
Mathematics professors who work with teens say that they often see glimpses of brilliance in kids as young as 13, and that "often these long-standing problems are solved by a mathematician who takes a completely different approach," says Jonathan Rogness, assistant professor and director of the Mathematics Center for Educational Programs at the University of Minnesota.
Still, working on famous, unsolved problems is rare until later in a career.
"Usually these problems are unsuitable for study by very young researchers, since they are too hard, too well studied, and require too much background to do anything of value," said MIT mathematics professor Pavel Etingof, who also works with a high school research program at MIT.
Etingof, who was 24 when he received his Ph.D., says that success in math comes from a combination of brainpower and experience.
"Brainpower starts to decrease after a certain age, while experience accumulates over years," he said. "Experience is not as important in mathematics as in some other fields, but it is important. So the power of a mathematician often peaks in his/her 30s. Which means that young people do have an advantage. But not people who are 16. Certainly someone who is 32 has a lot of advantage over someone who is 16."
Read more at Discovery News
Earhart's Anti-Freckle Ointment Jar Possibly Recovered
A small cosmetic jar offers more circumstantial evidence that the legendary aviator Amelia Earhart died on an uninhabited island in the southwestern Pacific republic of Kiribati.
Found broken in five pieces, the ointment pot was collected on Nikumaroro Island by researchers of The International Group for Historic Aircraft Recovery (TIGHAR), which has long been investigating the last, fateful flight taken by Earhart 75 years ago.
When reassembled, the glass fragments make up a nearly complete jar identical in shape to the ones used for Dr. C. H. Berry's Freckle Ointment. The ointment was marketed in the early 20th century as a concoction guaranteed to make freckles fade.
"It's well documented Amelia had freckles and disliked having them," Joe Cerniglia, the TIGHAR researcher who spotted the freckle ointment as a possible match, told Discovery News.
The jar fragments were found together with other artifacts during TIGHAR's nine archaeological expeditions to the tiny coral atoll believed to be Earhart's final resting place.
Analysis of the recovered artifacts will be presented at a three-day conference in Arlington, Va. A new study of post-loss radio signals and the latest forensic analysis of a photograph believed to show the landing gear of Earhart's aircraft on the Nikumaroro reef three months after her disappearance will also be discussed.
Beginning on June 1st, the symposium will highlight TIGHAR's high-tech search next July to find pieces of Earhart's Lockheed Electra aircraft.
The pilot mysteriously vanished while flying over the Pacific Ocean on July 2, 1937 during a record attempt to fly around the world at the equator. The general consensus has been that Earhart's twin-engined plane ran out of fuel and crashed in the Pacific Ocean, somewhere near Howland Island.
But according to Ric Gillespie, executive director of TIGHAR, there is an alternative scenario.
"The navigation line Amelia described in her final in-flight radio transmission passed through not only Howland Island, her intended destination, but also Gardner Island, now called Nikumaroro," Gillespie said at a special press event on March 20 hosted by Secretary of State Hillary Clinton.
According to Gillespie, the possibility that Earhart and navigator Fred Noonan might have made an emergency landing on Nikumaroro's flat coral reef, some 300 miles southeast of their target destination, is supported by a number of artifacts which, combined with archival research, strongly point to a castaway presence on the remote island.
"Broken shards from several glass containers have been recovered from the Seven Site, the archaeological site on the southeast end of Nikumaroro that fits the description of where the partial skeleton of a castaway was discovered in 1940," Gillespie told Discovery News.
Found with the skeletal remains at that time were part of a man's shoe, part of a woman's shoe, a box that had once contained a sextant, remnants of a fire, bird bones and turtle bones -- all suggesting that the site had been the castaways' camp.
"Unfortunately, the bones and artifacts found in 1940 were subsequently lost," said Gillespie.
Like most archaeological sites, the Seven Site has yielded evidence of activity from several different periods in the island's history and not all of the glass recovered from the site is attributable to the castaway.
"For example, the top of a war-time Coke bottle and pieces of what was probably a large salt shaker of a style used by the U.S. military are almost certainly relics of one or more U.S. Coast Guard target shooting forays," Gillespie said.
Much of the glass, however, appears to be associated with a castaway presence.
Two of the bottles, both dating from the 1930s, were found in what had been a small campfire.
"The bottoms of both bottles are melted but the upper portions, although shattered, are not heat-damaged -- implying that the bottles once stood upright in the fire. A length of wire found in the same spot has been twisted in such a way as to serve as a handle for holding a bottleneck," said Gillespie.
Read more at Discovery News
Setting the Galaxy's Age
Scientists trying to piece together the story of how the universe evolved from clumps of dark matter into the sparkling galaxies of today have had to work around a central problem -- no reliable technique to determine the age of small stars like the sun, which are the most common.
An astronomer at the Space Telescope Science Institute has taken a big step toward resolving the puzzle: Jason Kalirai used the fresh stellar corpses of sun-like stars as a kind of clock.
“It’s like putting hands on a clock,” Timothy Beers, director of Kitt Peak National Observatory, told Discovery News.
By analyzing four of these newly dead stars, Kalirai determined that the inner halo of stars surrounding the Milky Way is 11.4 billion years old.
The halo is a spherical cloud of objects that do not rotate around the center of the galaxy in an orderly way, as the sun and other stars in the Milky Way’s disk do.
“You’re either rotating around the center in an organized fashion -- that’s the disk -- or you’re supported by rapid, random motions. That’s the halo,” Beers said.
Within the halo are at least two populations of stars, again characterized less by location and more by relative motion.
How the stars came to be part of the Milky Way remains a mystery, but Kalirai’s technique can be used to home in on when they arrived.
“When an object comes into the halo, it gets shredded from its parent galaxy, and depending on what the orbit of that parent galaxy was, it is going to be moving in that orientation. By knowing when those stars formed, we have a limit on when they were accreted into the Milky Way, because they couldn’t have been accreted before they were formed,” Kalirai told Discovery News.
“Until this came along there was really no good way to quantify the age of the stellar distribution,” Beers added.
“Jason takes a handful of stars believed to be part of the inner halo population, ages them and assigns them to the population from which they were drawn,” said Beers, who called the approach “ingenious.”
Kalirai chose so-called white dwarf stars as his portholes because they are simple stars whose light can be broken down into composite wavelengths impregnated with features directly relating to the star’s properties, such as its mass and temperature.
“You can measure it in a straightforward way. You can’t do that for normal hydrogen-burning stars,” Kalirai said.
White dwarfs are what remain after sun-like stars burn through all their hydrogen and shed their outer layers. They are 1 million times more dense than anything on Earth.
“If you took a tablespoon of material from the surface of a white dwarf, it would weigh as much as a school bus here on Earth,” Kalirai said.
They are also terribly common in the Milky Way. About 98 percent of the stars in the galaxy will end their lives as white dwarfs.
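The tablespoon claim checks out against the commonly quoted white dwarf density of roughly a billion kilograms per cubic meter (an order-of-magnitude figure assumed here, not stated in the article):

# Sanity check on "a tablespoon weighs as much as a school bus."
density_kg_per_m3 = 1e9        # typical white dwarf density (assumed)
tablespoon_m3 = 15e-6          # 15 milliliters
print(density_kg_per_m3 * tablespoon_m3)   # 15,000 kg -- school-bus territory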
Read more at Discovery News
May 29, 2012
Why Swine Flu Virus Is Developing Drug Resistance
Computer chips of a type more commonly found in games consoles have been used by scientists at the University of Bristol to reveal how the flu virus resists anti-flu drugs such as Relenza and Tamiflu.
Professor Adrian Mulholland and Dr Christopher Woods from Bristol's School of Chemistry, together with colleagues in Thailand, used graphics processing units (GPUs) to simulate the molecular processes that take place when these drugs are used to treat the H1N1-2009 strain of influenza -- commonly known as 'swine flu'.
Their results, published May 29 in Biochemistry, provide new insight that could lead to the development of the next generation of antiviral treatments for flu.
H1N1-2009 is a new, highly adaptive virus derived from different gene segments of swine, avian, and human influenza. Within a few months of its appearance in early 2009, the H1N1-2009 strain caused the first flu pandemic of the 21st century.
The antiviral drugs Relenza and Tamiflu, which target the neuraminidase (NA) enzyme, successfully treated the infection but widespread use of these drugs has led to a series of mutations in NA that reduce the drugs' effectiveness.
Clinical studies indicate that the double mutant of swine flu NA known as IRHY reduced the effectiveness of Relenza by 21 times and of Tamiflu by 12,374 times -- to the point where Tamiflu has become an ineffective treatment.
To understand why the effectiveness of Relenza and Tamiflu is so seriously reduced by the occurrence of this mutation, the researchers performed long-timescale molecular dynamics (MD) simulations using GPUs.
Professor Mulholland said: "Our simulations showed that IRHY became resistant to Tamiflu due to the loss of key hydrogen bonds between the drug and residues in a part of the NA's structure known as the '150-loop'.
"This allowed NA to change from a closed to an open conformation. Tamiflu binds weakly with the open conformation due to poor electrostatic interactions between the drug and the active site, thus rendering the drug ineffective."
These findings suggest that drug resistance could be overcome by increasing hydrogen bond interactions between NA inhibitors and residues in the 150-loop, with the aim of maintaining the closed conformation.
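Those resistance factors also map neatly onto the hydrogen-bond picture. If a fold-loss in potency reflects a ratio of inhibition constants, the lost binding free energy is ΔΔG = RT ln(fold); that is a standard approximation assumed here, not a figure from the paper:

# Convert fold-loss in inhibitor potency to approximate lost binding energy,
# assuming the fold-change is a ratio of inhibition constants (an assumption).
import math

R = 1.987e-3                   # gas constant, kcal/(mol K)
T = 310.0                      # body temperature, K
for drug, fold in [("Relenza", 21), ("Tamiflu", 12_374)]:
    print(f"{drug}: ~{R * T * math.log(fold):.1f} kcal/mol lost")
# Relenza: ~1.9 kcal/mol; Tamiflu: ~5.8 kcal/mol -- on the order of a few
# hydrogen bonds, consistent with the simulated loss of bonds to the 150-loop.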
Read more at Science Daily
16th-Century Korean Mummy Provides Clue to Hepatitis B Virus Genetic Code
The discovery of a mummified Korean child with relatively preserved organs enabled an Israeli-South Korean scientific team to conduct a genetic analysis on a liver biopsy which revealed a unique hepatitis B virus (HBV) genotype C2 sequence common in Southeast Asia.
Additional analysis of ancient HBV genomes may serve as a model for studying the evolution of chronic hepatitis B and help explain the spread of the virus, possibly from Africa to East Asia. It may also shed further light on the migratory pathway of hepatitis B in the Far East, from China and Japan to Korea, as well as to other regions in Asia and Australia where it is a major cause of cirrhosis and liver cancer.
The reconstruction of the ancient hepatitis B virus genetic code is the oldest full viral genome described in the scientific literature to date. It was reported in the May 21 edition of the scientific journal Hepatology by a research team from the Hebrew University of Jerusalem's Koret School of Veterinary Medicine at the Robert H. Smith Faculty of Agriculture, Food and Environment; the Hebrew University's Faculty of Medicine; the Hadassah Medical Center's Liver Unit; and Dankook University and Seoul National University in South Korea.
Carbon-14 tests of the mummy's clothing suggest that the boy lived around the 16th century, during the Korean Joseon Dynasty. The viral DNA sequences recovered from the liver biopsy enabled the scientists to map the entire ancient hepatitis B viral genome.
Using modern-day molecular genetic techniques, the researchers compared the ancient DNA sequences with contemporary viral genomes, revealing distinct differences. The changes in the genetic code are believed to result from spontaneous mutations and possibly environmental pressures during the virus's evolution. Based on observed mutation rates over time, the analysis suggests that the mummy's reconstructed hepatitis B virus DNA had its origin between 3,000 and 100,000 years ago.
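Both dating steps follow textbook formulas: a radiocarbon age from the decay law, and a molecular-clock estimate that divides sequence divergence by a substitution rate. The sketch below uses invented placeholder numbers throughout, not the study's measurements:

import math

# 1) Radiocarbon: t = (half-life / ln 2) * ln(N0 / N)
C14_HALF_LIFE = 5730.0                 # years
fraction_left = 0.94                   # hypothetical measured C-14 fraction
print((C14_HALF_LIFE / math.log(2)) * math.log(1 / fraction_left))  # ~512 years

# 2) Molecular clock: time = divergence / (2 * substitution rate)
divergence = 0.02                      # substitutions per site (invented)
rate = 2e-6                            # substitutions/site/year (assumed)
print(divergence / (2 * rate))         # 5,000 years -- inside the quoted range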
The hepatitis B virus is transmitted through contact with infected body fluids -- from carrier mothers to their babies, through sexual contact, and through intravenous drug use. According to the World Health Organization, there are over 400 million carriers of the virus worldwide, predominantly in Africa, China and South Korea, where up to 15 percent of the population are carriers of the virus. In recent years, universal immunization of newborns against hepatitis B in Israel and in South Korea has led to a massive decline in the incidence of infection.
Read more at Science Daily
Occupy the Neolithic: Social Immobility in the Stone Age
Even the most democratic societies are rife with social and economic inequalities, as the current tension between the poorer “99 percent” and the richest “1 percent” vividly illustrates. But just how early in human history such social hierarchies became entrenched has been a matter of debate. A new study of skeletons from prehistoric farming communities across Europe suggests that hereditary inequality was an early feature, going back more than 7,000 years.
Most researchers agree that social hierarchies began with the advent of farming. The earliest known farming communities are found in the Near East, dating back almost 11,000 years. Archaeologists have looked for evidence of social stratification in these societies with mixed results. Some early farming societies show signs that people played different roles and that some were buried with greater ritual — shuffling off this mortal coil with a number of elaborate “grave goods,” including pottery and stone tools. However, there is little evidence that social inequality was hereditary or rigidly defined.
That seems to have changed sometime after farmers moved into Europe from the Near East, beginning about 8,500 years ago during a period known as the European Neolithic. One of the best studied farming cultures is the Linearbandkeramik (LBK), which arose in what is today Hungary about 7,500 years ago and spread as far as modern-day Paris within 500 years, after which it appears to have been superseded by other cultures.
Archaeologists have long noted signs that the LBK culture might have been socially stratified. For example, some, but not all, males were buried with stone tools called adzes, which were thought to be used to build the wooden houses in which the farmers lived. But a few researchers have argued that this stratification took place only gradually over the 500-year period of the LBK.
To get a better handle on the timing and nature of these social inequalities, a team led by Alexander Bentley, an archaeologist at the University of Bristol in the United Kingdom, analyzed the tooth enamel from more than 300 skeletons from seven LBK burial sites across Europe. These cemeteries, located in the Czech Republic, Slovakia, Austria, and France, ranged from 6,900 to 7,400 years in age, and covered most of the LBK’s territorial spread.
Specifically, the team looked for the element strontium in teeth and measured the ratio of two isotopes, or types of the atom with slightly different weights. Strontium atoms enter the body in the water that we drink and the food we eat, and the ratio of the heavier isotope strontium-87 to the lighter isotope strontium-86 reflects the kind of soil and geological formations a person lived on, particularly as a child, when the tooth enamel was laid down. The strontium isotopes are increasingly used by archaeologists to track movements of populations.
Previous studies have shown that the kind of soil favored by European farmers, lowland sediments known as loess, has a slightly lower strontium-87/strontium-86 ratio than less-fertile areas such as upland hills made from granite or sandstone. Yet because of Europe’s variable landscape, in which fertile and non-fertile areas can be as close as several kilometers apart, the team relied more heavily on the degree of variation of strontium ratios among the skeletons in a burial site than on their absolute values.
The results of the study, published online today in the Proceedings of the National Academy of Sciences, suggest that men who were buried with adzes — thought to be an indication of higher social status — were more likely to have grown up on loess soils than men who were buried without adzes. For example, among 310 burials the team analyzed, 62 featured adzes. But only one of the 62 skeletons from the adze burials had a strontium ratio in its teeth typical of a non-loess landscape, whereas all of the others were consistent with growing up on loess. Moreover, the variation in strontium ratios between adze skeletons was significantly lower than the variation between non-adze skeletons, suggesting to Bentley and his co-workers that the adze skeletons came from one kind of landscape, most likely loess, while the others came from a variety of other landscapes.
A similarly striking pattern was seen when the team looked at the female skeletons, which made up 153 of the total 311 individuals analyzed. The variation in strontium ratios for females was significantly greater than for males, suggesting that a greater number of females than males had grown up in non-fertile areas. Moreover, the patterns in the male and female burials appeared in both earlier and later LBK settlements, suggesting that the patterns of social inequality were established from the beginning of the LBK period and did not develop gradually over time.
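Because the argument turns on the spread of strontium ratios within groups rather than on their averages, the core comparison can be illustrated with a simple variance ratio. The isotope values below are invented; the actual analysis covered hundreds of skeletons and would use a formal test such as Levene's:

from statistics import variance

# Invented 87Sr/86Sr ratios illustrating the reported pattern.
adze    = [0.7089, 0.7091, 0.7090, 0.7092, 0.7090]   # tight: one landscape (loess)
no_adze = [0.7088, 0.7102, 0.7095, 0.7121, 0.7093]   # spread: mixed landscapes

print(variance(no_adze) / variance(adze))   # large ratio -> adze burials far more uniform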
The team came to two main conclusions: First, some males had greater access to fertile soils than others, probably because they were the sons of farmers who had inherited access to the best land. And second, LBK societies were “patrilocal,” meaning that males tended to stay put in one place while females moved in from other areas to mate with them. A number of recent genetic studies have shown similar patterns among early European farmers. “The signatures from these skeletons reinforce other indications of male-dominated descent and even land inheritance,” Bentley says, adding that such social inequalities “only grew in extent and scale” over the course of history.
Read more at Wired Science
Huge Ancient Civilization's Collapse Explained
The mysterious fall of the largest of the world's earliest urban civilizations nearly 4,000 years ago in what is now India, Pakistan, Nepal and Bangladesh now appears to have a key culprit — ancient climate change, researchers say.
Ancient Egypt and Mesopotamia may be the best known of the first great urban cultures, but the largest was the Indus or Harappan civilization. This culture once extended over more than 386,000 square miles (1 million square kilometers) across the plains of the Indus River from the Arabian Sea to the Ganges, and at its peak may have accounted for 10 percent of the world population. The civilization developed about 5,200 years ago, and slowly disintegrated between 3,900 and 3,000 years ago — populations largely abandoned cities, migrating toward the east.
"Antiquity knew about Egypt and Mesopotamia, but the Indus civilization, which was bigger than these two, was completely forgotten until the 1920s," said researcher Liviu Giosan, a geologist at Woods Hole Oceanographic Institution in Massachusetts. "There are still many things we don't know about them."
Nearly a century ago, researchers began discovering numerous remains of Harappan settlements along the Indus River and its tributaries, as well as in a vast desert region at the border of India and Pakistan. Evidence was uncovered for sophisticated cities, sea links with Mesopotamia, internal trade routes, arts and crafts, and as-yet undeciphered writing.
"They had cities ordered into grids, with exquisite plumbing, which was not encountered again until the Romans," Giosan told LiveScience. "They seem to have been a more democratic society than Mesopotamia and Egypt — no large structures were built for important personalitiess like kings or pharaohs."
Like their contemporaries in Egypt and Mesopotamia, the Harappans, who were named after one of their largest cities, lived next to rivers.
"Until now, speculations abounded about the links between this mysterious ancient culture and its life-giving mighty rivers," Giosan said.
Now Giosan and his colleagues have reconstructed the landscape of the plain and rivers where this long-forgotten civilization developed. Their findings shed light on the enigmatic fate of this culture.
"Our research provides one of the clearest examples of climate change leading to the collapse of an entire civilization," Giosan said.
The researchers first analyzed satellite data of the landscape influenced by the Indus and neighboring rivers. From 2003 to 2008, the researchers then collected samples of sediment from the coast of the Arabian Sea into the fertile irrigated valleys of Punjab and the northern Thar Desert to determine the origins and ages of those sediments and develop a timeline of landscape changes.
"It was challenging working in the desert — temperatures were over 110 degrees Fahrenheit all day long (43 degrees C)," Giosan recalled.
After collecting data on geological history, "we could reexamine what we know about settlements, what crops people were planting and when, and how both agriculture and settlement patterns changed," said researcher Dorian Fuller, an archaeologist with University College London. "This brought new insights into the process of eastward population shift, the change towards many more small farming communities, and the decline of cities during late Harappan times."
Some had suggested that the Harappan heartland received its waters from a large glacier-fed Himalayan river, thought by some to be the Sarasvati, a sacred river of Hindu mythology. However, the researchers found that only rivers fed by monsoon rains flowed through the region.
Previous studies suggest the Ghaggar, an intermittent river that flows only during strong monsoons, may best approximate the location of the Sarasvati. Archaeological evidence suggested the river, which dissipates into the desert along the dried course of Hakra valley, was home to intensive settlement during Harappan times.
"We think we settled a long controversy about the mythic Sarasvati River," Giosan said.
Initially, the monsoon-drenched rivers the researchers identified were prone to devastating floods. Over time, monsoons weakened, enabling agriculture and civilization to flourish along flood-fed riverbanks for nearly 2,000 years.
"The insolation — the solar energy received by the Earth from the sun — varies in cycles, which can impact monsoons," Giosan said. "In the last 10,000 years, the Northern Hemisphere had the highest insolation from 7,000 to 5,000 years ago, and since then insolation there decreased. All climate on Earth is driven by the sun, and so the monsoons were affected by the lower insolation, decreasing in force. This meant less rain got into continental regions affected by monsoons over time."
Eventually, these monsoon-based rivers held too little water and dried, making them unfavorable for civilization.
"The Harappans were an enterprising people taking advantage of a window of opportunity — a kind of "Goldilocks civilization," Giosan said.
Eventually, over the course of centuries, Harappans apparently fled along an escape route to the east toward the Ganges basin, where monsoon rains remained reliable.
"We can envision that this eastern shift involved a change to more localized forms of economy — smaller communities supported by local rain-fed farming and dwindling streams," Fuller said. "This may have produced smaller surpluses, and would not have supported large cities, but would have been reliable."
This change would have spelled disaster for the cities of the Indus, which were built on the large surpluses seen during the earlier, wetter era. The dispersal of the population to the east would have meant there was no longer a concentrated workforce to support urbanism.
Read more at Discovery News
May 28, 2012
It Took Earth Ten Million Years to Recover from Greatest Mass Extinction
It took some 10 million years for Earth to recover from the greatest mass extinction of all time, latest research has revealed.
Life was nearly wiped out 250 million years ago, with only 10 per cent of plants and animals surviving. It is currently much debated how life recovered from this cataclysm, whether quickly or slowly.
Recent evidence for a rapid bounce-back is evaluated in a new review article by Dr Zhong-Qiang Chen, from the China University of Geosciences in Wuhan, and Professor Michael Benton from the University of Bristol. They find that recovery from the crisis lasted some 10 million years, as explained May 27 in Nature Geoscience.
There were apparently two reasons for the delay: the sheer intensity of the crisis, and continuing grim conditions on Earth after the first wave of extinction.
The end-Permian crisis, by far the most dramatic biological crisis to affect life on Earth, was triggered by a number of physical environmental shocks -- global warming, acid rain, ocean acidification and ocean anoxia. These were enough to kill off 90 per cent of living things on land and in the sea.
Dr Chen said: "It is hard to imagine how so much of life could have been killed, but there is no doubt from some of the fantastic rock sections in China and elsewhere round the world that this was the biggest crisis ever faced by life."
Current research shows that the grim conditions continued in bursts for some five to six million years after the initial crisis, with repeated carbon and oxygen crises, warming and other ill effects.
Some groups of animals in the sea and on land did recover quickly and began to rebuild their ecosystems, but they suffered further setbacks. Life had not really recovered in these early phases because permanent ecosystems were not established.
Professor Benton, Professor of Vertebrate Palaeontology at the University of Bristol, said: "Life seemed to be getting back to normal when another crisis hit and set it back again. The carbon crises were repeated many times, and then finally conditions became normal again after five million years or so."
Finally, after the environmental crises ceased to be so severe, more complex ecosystems emerged. In the sea, new groups, such as ancestral crabs and lobsters, as well as the first marine reptiles, came on the scene, and they formed the basis of future modern-style ecosystems.
Read more at Science Daily
Hubble Sees a Spiral Within a Spiral
NASA's Hubble Space Telescope captured a new image of the spiral galaxy known as ESO 498-G5. One interesting feature of this galaxy is that its spiral arms wind all the way into the center, so that ESO 498-G5's core looks a bit like a miniature spiral galaxy. This sort of structure is in contrast to the elliptical star-filled centers (or bulges) of many other spiral galaxies, which instead appear as glowing masses.
Astronomers refer to the distinctive spiral-like bulge of galaxies such as ESO 498-G5 as disc-type bulges, or pseudobulges, while bright elliptical centers are called classical bulges. Observations from the Hubble Space Telescope, which does not have to contend with the distorting effects of Earth's atmosphere, have helped to reveal that these two different types of galactic centers exist. These observations have also shown that star formation is still going on in disc-type bulges and has ceased in classical bulges. This means that galaxies can be a bit like Russian matryoshka dolls: classical bulges look much like a miniature version of an elliptical galaxy, embedded in the center of a spiral, while disc-type bulges look like a second, smaller spiral galaxy located at the heart of the first -- a spiral within a spiral.
The similarities between types of galaxy bulge and types of galaxy go beyond their appearance. Just like giant elliptical galaxies, the classical bulges consist of great swarms of stars moving about in random orbits. Conversely, the structure and movement of stars within disc-type bulges mirror the spiral arms arrayed in a galaxy's disc. These differences suggest different origins for the two types of bulges: while classical bulges are thought to develop through major events, such as mergers with other galaxies, disc-type bulges evolve gradually, developing their spiral pattern as stars and gas migrate to the galaxy's center.
Read more at Science Daily
Teen Solves 350-Year-Old Math Problem
A boy math whiz has shocked the world by solving a 350-year-old problem once posed by the great mathematician Sir Isaac Newton.
Sixteen-year-old Shouryya Ray, a boy of Indian origin attending school in Germany, cracked two particle dynamics theories. Ray's novel solutions can now help scientists calculate the flight path of a thrown ball and predict how it will strike and bounce off a wall, according to the International Business Times.
Ray was told by professors during a school field trip to Dresden University that the problem could not be solved. That notion didn't sit right with the Calcutta-born student.
"I just asked myself, 'Why not?'" Ray told Germany's Welt Online newspaper. "I didn't believe there couldn't be a solution."
According to Welt Online, Ray has been captivated by math since a very early age and was inspired by his father, Subhashis Ray, who works as a research assistant at the Technical University of Freiburg. His father began teaching Ray calculus at the tender age of six.
Ray's family moved to Germany when he was 12. He spoke no German when he arrived but is now fluent in the language.
As for his future career, Ray is debating whether to study math or physics when he moves on to college.
From Discovery News
NASA Wanted Astronauts to View Venus Up-Close
In a little over a week, we’re all going to be looking skyward and focusing our sights (safely) on Venus as it crosses the disk of the sun. It's going to be a fantastic view, especially since most of us only ever see Venus as a tiny dot of light in the sky. But in 1967, NASA considered giving three astronauts a really rare view of Venus by sending them on a flyby around the second planet from the sun.
The mission was developed under the Apollo Applications Program (AAP), which was designed to build on and apply Apollo-era technology to greater goals in space. Out of the AAP, NASA hoped to see Earth-orbiting laboratories, research stations on the moon, and manned interplanetary missions. In 1967, this was America’s future in space.
One of the interplanetary targets was Venus. After visiting the planet with the unmanned Mariner 2 spacecraft in 1962, NASA learned that the planet lacks a strong magnetic field, that its extreme heat is generated in the lower atmosphere or at the surface, and that cosmic radiation in interplanetary space was survivable. NASA also learned that it was worth going back. There was undoubtedly more to Venus locked under its thick cloud cover.
To get a crew there, NASA would use a revised Apollo spacecraft. Like the lunar missions, it was a tripartite design composed of a Command and Service Module (CSM), an Environmental Support Module (ESM), and a third habitable section. Here’s how the mission was designed to play out.
A three-man crew, nestled in the CM, would launch on a Saturn V. The CSM would perform the same functions it did during the Apollo lunar missions: its onboard computer would serve as the primary guidance and navigation system, provide the main reaction control, and act as the principal telemetry and communications link with mission control. Really, the mission would be a simple matter of engineers rewriting the computer’s commands to send the crew to Venus instead of the moon. The hard part would be keeping them alive and well during the 400-day mission. This is where the other modules come into play.
With no purpose for a Lunar Module on a Venus flyby, the spidery spacecraft would be swapped out for the larger ESM. Once in Earth orbit, the crew would separate the CSM from the rest of the spacecraft, turn around, and dock with the ESM. Then they could open the hatch and transfer between the vehicles. The ESM was designed as the principal experiment bay on the mission and would provide long-term life support and environmental control to the whole spacecraft configuration.
With the CSM and ESM docked, the Saturn V’s upper S-IVB stage would fire and send the whole thing towards Venus. But instead of jettisoning the spent rocket stage, the crew would repurpose it -- neither of the other two modules gave them a comfortable living space. In the ESM the astronauts would have everything they’d need to refurbish the rocket stage and turn it into their main habitable module and recreational space. Solar panels lining the outside would provide power to the whole spacecraft.
The mission was planned to launch sometime during the month-long window between Oct. 31 and Nov. 30, 1973; the dates offered a quick transit to Venus, and the year was expected to be a quiet one for solar activity, minimizing the crew’s exposure to dangerous solar radiation.
The outbound leg of the mission was expected to last 123 days. The crew would arrive at Venus in March 1974 and pass just 3,340 nautical miles -- about 3,844 statute miles -- above the surface as they whipped around to begin the 273-day trip back to Earth. The mission would end in a splashdown sometime in December 1974.
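A quick arithmetic check of those figures, in Python (the leg durations and flyby altitude are from the article; the nautical-to-statute-mile factor is the standard one):

    # The two transit legs quoted above add up to roughly the
    # "400-day mission" mentioned earlier.
    outbound_days = 123     # Earth -> Venus
    return_days = 273       # Venus -> Earth
    print(outbound_days + return_days)       # 396 days in transit

    # Converting the quoted flyby altitude to statute miles.
    NM_TO_STATUTE = 1.15078  # 1 nautical mile = 1.15078 statute miles
    print(round(3340 * NM_TO_STATUTE))       # 3844 statute miles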
Read more at Discovery News
May 27, 2012
New Genetic Method Developed to Pinpoint Individuals' Geographic Origin
Understanding the genetic diversity within and between populations has important implications for studies of human disease and evolution. This includes identifying associations between genetic variants and disease, detecting genomic regions that have undergone positive selection and highlighting interesting aspects of human population history.
Now, a team of researchers from the UCLA Henry Samueli School of Engineering and Applied Science, UCLA's Department of Ecology and Evolutionary Biology and Israel's Tel Aviv University has developed an innovative approach to the study of genetic diversity called spatial ancestry analysis (SPA), which allows for the modeling of genetic variation in two- or three-dimensional space.
Their study is published online this week in the journal Nature Genetics.
With SPA, researchers model the spatial distribution of each genetic variant by treating its frequency as a continuous function over geographic space. In doing so, they show that explicitly modeling variant frequency -- the proportion of individuals who carry a specific variant -- allows individuals to be localized on a world map on the basis of their genetic information alone.
"If we know from where each individual in our study originated, what we observe is that some variation is more common in one part of the world and less common in another part of the world," said Eleazar Eskin, an associate professor of computer science at UCLA Engineering. "How common these variants are in a specific location changes gradually as the location changes.
"In this study, we think of the frequency of variation as being defined by a specific location. This gives us a different way to think about populations, which are usually thought of as being discrete. Instead, we think about the variant frequencies changing in different locations. If you think about a person's ancestry, it is no longer about being from a specific population -- but instead, each person's ancestry is defined by the location they're from. Now ancestry is a continuum."
The team reports the development of a simple probabilistic model for the spatial structure of genetic variation, with which they model how the frequency of each genetic variant changes as a function of the location of the individual in geographic space (where the gene frequency is actually a function of the x and y coordinates of an individual on a map).
"If the location of an individual is unknown, our model can actually infer geographic origins for each individual using only their genetic data with surprising accuracy," said Wen-Yun Yang, a UCLA computer science graduate student.
"The model makes it possible to infer the geographic ancestry of an individual's parents, even if those parents differ in ancestry. Existing approaches falter when it comes to this task," said UCLA's John Novembre, an assistant professor in the department of ecology and evolution.
SPA is also able to model genetic variation on a globe.
"We are able to also show how to predict the spatial structure of worldwide populations," said Eskin, who also holds a joint appointment in the department of human genetics at the David Geffen School of Medicine at UCLA. "In just taking genetic information from populations from all over the world, we're able to reconstruct the topology of the global populations only from their genetic information."
Using the framework, SPA can also identify loci showing extreme patterns of spatial differentiation.
"These dramatic changes in the frequency of the variants potentially could be due to natural selection," Eskin said. "It could be that something in the environment is different in different locations. Let's say a mutation arose that has some advantageous property in a certain environment. So you can imagine then that a kind of force for genetic selection would make this mutation more common in that environment."
Read more at Science Daily
It's in the Genes: Research Pinpoints How Plants Know When to Flower
Determining the proper time to flower, important if a plant is to reproduce successfully, involves a sequence of molecular events, a plant's circadian clock and sunlight.
Understanding how flowering works in the simple plant used in this study -- Arabidopsis -- should lead to a better understanding of how the same genes work in more complex plants grown as crops such as rice, wheat and barley, according to Takato Imaizumi, a University of Washington assistant professor of biology and corresponding author of a paper in the May 25 issue of the journal Science.
"If we can regulate the timing of flowering, we might be able to increase crop yield by accelerating or delaying this. Knowing the mechanism gives us the tools to manipulate this," Imaizumi said. Along with food crops, the work might also lead to higher yields of plants grown for biofuels.
At specific times of year, flowering plants produce a protein known as FLOWERING LOCUS T in their leaves that induces flowering. Once this protein is made, it travels from the leaves to the shoot apex, a part of the plant where cells are undifferentiated, meaning they can either become leaves or flowers. At the shoot apex, this protein starts the molecular changes that send cells on the path to becoming flowers.
Changes in day length tell many organisms that the seasons are changing. It has long been known that plants use an internal time-keeping mechanism known as the circadian clock to measure changes in day length. Circadian clocks synchronize biological processes during 24-hour periods in people, animals, insects, plants and other organisms.
Imaizumi and the paper's co-authors investigated what's called the FKF1 protein, which they suspected was a key player in the mechanism by which plants recognize seasonal change and know when to flower. FKF1 protein is a photoreceptor, meaning it is activated by sunlight.
"The FKF1 photoreceptor protein we've been working on is expressed in the late afternoon every day, and is very tightly regulated by the plant's circadian clock," Imaizumi said. "When this protein is expressed during days that are short, this protein cannot be activated, as there is no daylight in the late afternoon. When this protein is expressed during a longer day, this photoreceptor makes use of the light and activates the flowering mechanisms involving FLOWERING LOCUS T. The circadian clock regulates the timing of the specific photoreceptor for flowering. That is how plants sense differences in day length."
This system keeps plants from flowering when it's a poor time to reproduce, such as the dead of winter when days are short and nights are long.
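The gating logic in that explanation can be captured in a few lines. Below is a toy Python sketch of this "external coincidence" mechanism; the dawn hour, expression window and day lengths are illustrative choices, not measured values.

    DAWN = 6                      # lights-on at 6 a.m. in this toy day
    FKF1_WINDOW = range(16, 19)   # clock-gated expression, late afternoon

    def flowering_signal(day_length_h):
        """True when the FKF1 expression window overlaps daylight,
        i.e. the protein is made while light is available to activate it."""
        light = set(range(DAWN, DAWN + day_length_h))   # lit hours of the day
        return any(h in light for h in FKF1_WINDOW)

    for day_length in (8, 16):    # short winter day vs. long summer day
        print(day_length, "h day -> FLOWERING LOCUS T induced:",
              flowering_signal(day_length))
    # 8 h day  -> False (FKF1 expressed after dark, never activated)
    # 16 h day -> True  (expression coincides with daylight)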
The new findings come from work with the plant Arabidopsis, a small plant in the mustard family that's often used in genetic research. They validate predictions from a mathematical model of the mechanism that causes Arabidopsis to flower that was developed by Andrew Millar, a University of Edinburgh professor of biology and co-author of the paper.
"Our mathematical model helped us to understand the operating principles of the plants' day-length sensor," Millar said. "Those principles will hold true in other plants, like rice, where the crop's day-length response is one of the factors that limits where farmers can obtain good harvests. It's that same day-length response that needs controlled lighting for laying chickens and fish farms, so it's just as important to understand this response in animals.
"The proteins involved in animals are not yet so well understood as they are in plants but we expect the same principles that we've learned from these studies to apply."
Read more at Science Daily