One of the big mysteries in biology is why cells age. Now scientists at the Salk Institute for Biological Studies report that they have discovered a weakness in a component of brain cells that may explain how the aging process occurs in the brain.
The scientists discovered that certain proteins, called extremely long-lived proteins (ELLPs), which are found on the surface of the nucleus of neurons, have a remarkably long lifespan.
While most proteins have a lifespan of two days or less, the Salk Institute researchers identified ELLPs in the rat brain that were as old as the organism itself, a finding they reported February 3 in Science.
The Salk scientists are the first to discover an essential intracellular machine whose components include proteins of this age. Their results suggest the proteins last an entire lifetime, without being replaced.
ELLPs make up the transport channels on the surface of the nucleus: gates that control what materials enter and exit. Their long lifespan might be an advantage if not for the wear and tear these proteins experience over time. Unlike other proteins in the body, ELLPs are not replaced when they incur aberrant chemical modifications and other damage.
Damage to the ELLPs weakens the ability of the three-dimensional transport channels that are composed of these proteins to safeguard the cell's nucleus from toxins, says Martin Hetzer, a professor in Salk's Molecular and Cell Biology Laboratory, who headed the research. These toxins may alter the cell's DNA and thereby the activity of genes, resulting in cellular aging.
Funded by the Ellison Medical Foundation and the Glenn Foundation for Medical Research, Hetzer's research group is the only lab in the world that is investigating the role of these transport channels, called the nuclear pore complex (NPC), in the aging process.
Previous studies have revealed that alterations in gene expression underlie the aging process. But, until the Hetzer lab's discovery that mammals' NPCs possess an Achilles' heel that allows DNA-damaging toxins to enter the nucleus, the scientific community has had few solid clues about how these gene alterations occur.
"The fundamental defining feature of aging is an overall decline in the functional capacity of various organs such as the heart and the brain," says Hetzer. "This decline results from deterioration of the homeostasis, or internal stability, within the constituent cells of those organs. Recent research in several laboratories has linked breakdown of protein homeostasis to declining cell function."
The results that Hetzer and his team have just reported suggest that declining neuron function may originate in ELLPs that deteriorate as a result of damage over time.
"Most cells, but not neurons, combat functional deterioration of their protein components through the process of protein turnover, in which the potentially impaired parts of the proteins are replaced with new functional copies," says Hetzer.
"Our results also suggest that nuclear pore deterioration might be a general aging mechanism leading to age-related defects in nuclear function, such as the loss of youthful gene expression programs," he adds.
The findings may prove relevant to understanding the molecular origins of aging and such neurodegenerative disorders as Alzheimer's disease and Parkinson's disease.
In previous studies, Hetzer and his team discovered large filaments in the nuclei of neurons of old mice and rats, whose origins they traced to the cytoplasm. Such filaments have been linked to various neurological disorders including Parkinson's disease. Whether the misplaced molecules are a cause, or a result, of the disease has not yet been determined.
Also in previous studies, Hetzer and his team documented age-dependent declines in the functioning of NPCs in the neurons of healthy aging rats, which are laboratory models of human biology.
Hetzer's team includes his colleagues at the Salk Institute as well as John Yates III, a professor in the Department of Chemical Physiology of The Scripps Research Institute.
Read more at Science Daily
Feb 4, 2012
Super Bowl Science Blitz: Watch Out!
With the big game, the halftime show, the ads, the crowds and of course the food, there are a lot of reasons to be excited for Super Bowl Sunday.
But if you find yourself a little overheated in the rush of activity on game day, why not cool off with a splash of science? It probably won't make the game more fun, but it might make you sound more interesting at your Super Bowl party.
Nobody likes to lose. But for some people, the Super Bowl may literally be a matter of life and death.
Watching a team lose in the Super Bowl can trigger heart attacks in anxiety-prone fans as well as those vulnerable to cardiac issues, according to a study published in the journal Clinical Cardiology in 2011.
Consulting a doctor before the game may be a good idea for those prone to heart problems. Avoiding excessive consumption of fatty foods and salt could also decrease the likelihood of an attack.
Children can become more aggressive watching Super Bowl -- commercials?
These days, Super Bowl commercials can draw as much attention as the game itself. But according to a study that came out of Iowa State University last year, some commercials may promote more than just a product.
Children who watched violent commercials during the Super Bowl were more prone to aggressive thoughts. The researchers believe the violence projected by the game itself is reinforced in the commercials.
Researchers recommend that parents limit how much exposure children have to violent imagery and talk with their children about media messages.
Watching the Super Bowl makes you hungry.
The Super Bowl is the second largest food consumption holiday of the year, second only to Thanksgiving.
Given the buffet-style setup of most Super Bowl parties, portion size may be one of the big reasons behind that trend. According to a study out of Cornell University in 2006, larger serving bowls led to an increase in food consumption of more than 50 percent.
So the best way to avoid overeating this Sunday, according to the researchers, is to use smaller bowls.
Read more at Discovery News
Feb 3, 2012
Classic Portrait of a Barred Spiral Galaxy
The NASA/ESA Hubble Space Telescope has taken a picture of the barred spiral galaxy NGC 1073, which is found in the constellation of Cetus (The Sea Monster). Our own galaxy, the Milky Way, is a similar barred spiral, and the study of galaxies such as NGC 1073 helps astronomers learn more about our celestial home.
Most spiral galaxies in the Universe have a bar structure in their centre, and Hubble's image of NGC 1073 offers a particularly clear view of one of these. Galaxies' star-filled bars are thought to emerge as gravitational density waves funnel gas toward the galactic centre, supplying the material to create new stars. The transport of gas can also feed the supermassive black holes that lurk in the centres of almost every galaxy.
Some astronomers have suggested that the formation of a central bar-like structure might signal a spiral galaxy's passage from intense star-formation into adulthood, as the bars turn up more often in galaxies full of older, red stars than younger, blue stars. This storyline would also account for the observation that in the early Universe, only around a fifth of spiral galaxies contained bars, while more than two thirds do in the more modern cosmos.
While Hubble's image of NGC 1073 is in some respects an archetypal portrait of a barred spiral, there are a couple of quirks worth pointing out.
One, ironically, is almost -- but not quite -- invisible to optical telescopes like Hubble. In the upper left part of the image, a rough ring-like structure of recent star formation hides a bright source of X-rays. Called IXO 5, this X-ray source is likely to be a binary system featuring a black hole and a star orbiting each other. Comparing X-ray observations from the Chandra spacecraft with this Hubble image, astronomers have narrowed the position of IXO 5 down to one of two faint stars visible here. However, X-ray observations with current instruments are not precise enough to conclusively determine which of the two it is.
Hubble's image does not only tell us about a galaxy in our own cosmic neighbourhood, however. We can also discern glimpses of objects much further away, whose light tells us about earlier eras in cosmic history.
Right across Hubble's field of view, more distant galaxies are peering through NGC 1073, with several reddish examples appearing clearly in the top left part of the frame.
Read more at Science Daily
The Complex Relationship Between Memory and Silence
People who suffer a traumatic experience often don't talk about it, and many forget it over time. But not talking about something doesn't always mean you'll forget it; if you try to force yourself not to think about white bears, soon you'll be imagining polar bears doing the polka. A group of psychological scientists explore the relationship between silence and memories in a new paper published in Perspectives on Psychological Science, a journal of the Association for Psychological Science.
"There's this idea, with silence, that if we don't talk about something, it starts fading," says Charles B. Stone of Université Catholique de Louvain in Belgium, an author of the paper. But that belief isn't necessarily backed up by empirical psychological research -- a lot of it comes from a Freudian belief that everyone has deep-seated issues we're repressing and ought to talk about. The real relationship between silence and memory is much more complicated, Stone says.
"We are trying to understand how people remember the past in a very basic way," Stone says. He cowrote the paper with Alin Coman of the University of Pittsburgh, Adam D. Brown of New York University, Jonathan Koppel of the University of Aarhus, and William Hirst of the New School for Social Research.
"Silence is everywhere," Stone says. He and his coauthors divide silence about memories into several categories. You might not mention something you're thinking about on purpose -- or because it just doesn't come up in conversation. And some memories aren't talked about because they simply don't come to mind. Sometimes people actively try not to remember something.
One well-studied example, used by Stone and his colleagues to demonstrate how subtle the effects of silence can be, establishes that silences about the past occurring within a conversation do not uniformly promote forgetting. Some silences are more likely to lead to forgetting than others. People have more trouble remembering silenced memories related to what they or others talk about than silenced memories unrelated to the topic at hand. If President Bush wanted the public to forget that weapons of mass destruction figured in the build-up to the Iraq War, he should not avoid talking about the war and its build-up. Rather, he should talk about the build-up and avoid any discussion of WMDs. And at a more personal level, when people talk to each other about the events of their lives, talking about happy memories may leave the unhappy memories unmentioned, but in the future, people may have more trouble remembering the unmentioned happy memories than the unmentioned sad memories.
Or to supply another example of the subtle relation between memory and silence: If your mother is asking you about your boyfriend and you tell her about yesterday's date, while thinking -- but not talking -- about the exciting ending of the date, that romantic finish may linger longer in your memory than if you just answered her questions without thinking about the later part of the evening.
Read more at Science Daily
"There's this idea, with silence, that if we don't talk about something, it starts fading," says Charles B. Stone of Université Catholique de Louvain in Belgium, an author of the paper. But that belief isn't necessarily backed up by empirical psychological research -- a lot of it comes from a Freudian belief that everyone has deep-seated issues we're repressing and ought to talk about. The real relationship between silence and memory is much more complicated, Stone says.
"We are trying to understand how people remember the past in a very basic way," Stone says. He cowrote the paper with Alin Coman of the University of Pittsburgh, Adam D. Brown of New York University, Jonathan Koppel of the University of Aarhus, and William Hirst of the New School for Social Research.
"Silence is everywhere," Stone says. He and his coauthors divide silence about memories into several categories. You might not mention something you're thinking about on purpose -- or because it just doesn't come up in conversation. And some memories aren't talked about because they simply don't come to mind. Sometimes people actively try not to remember something.
One well-studied example used by Stone and his colleagues to demonstrate how subtle the effects of silence can be, establishes that silences about the past occurring within a conversation do not uniformly promote forgetting. Some silences are more likely to lead to forgetting than others. People have more trouble remembering silenced memories related to what they or others talk about than silenced memories unrelated to the topic at hand. If President Bush wanted the public to forget that weapons of mass destruction figured in the build-up to the Iraq War, he should not avoid talking about the war and its build-up. Rather he should talk about the build-up and avoid any discussion of WMDs. And at a more personal level, when people talk to each other about the events of their lives, talking about happy memories may leave the unhappy memories unmentioned, but in the future, people may have more trouble remembering the unmentioned happy memories than the unmentioned sad memories.
Or to supply another example of the subtle relation between memory and silence: If your mother is asking you about your boyfriend and you tell her about yesterday's date, while thinking -- but not talking -- about the exciting ending of the date, that romantic finish may linger longer in your memory than if you just answered her questions without thinking about the later part of the evening.
Read more at Science Daily
First 'Vampire' Bat Fly Fossil Discovered
A one-of-a-kind fossil shows that so-called bat flies — tiny vampire insects that survive on the blood of bats — have been parasitizing the winged mammals and spreading bat malaria for at least 20 million years, scientists report in a pair of studies today (Feb. 3).
"Bat flies are a remarkable case of specific evolution, animals that have co-evolved with bats and are found nowhere else," George Poinar, a zoologist at Oregon State University who led the studies, said in statement.
The highly specialized parasites, some of which only dine on specific bat species, spend most of their lives crawling through the animal's fur or on its wing membranes. They often have flattened, flea-like bodies with long legs, and can be winged or wingless, depending on the species.
Bat flies fall into one of two families: Streblidae and Nycteribiidae, which are mostly found in the Western and Eastern Hemispheres, respectively. Currently, scientists have only identified nycteribiid flies as vectors, or transmitters, for bat malaria, but researchers have now learned that streblids may also be spreading the disease.
In the La Búcara mine, located in the Cordillera Septentrional mountain range of the Dominican Republic, Poinar and his colleagues uncovered an ancient malaria-carrying streblid fly entombed in amber.
"While no malaria parasites have been found in extant streblids, they probably occur and it is possible that streblids were the earliest lineage of flies that transmitted bat malaria to Chiroptera [bats]," Poinar writes in one of his studies, published in December in the journal Parasites & Vectors.
Based on other fossils encased in amber from the La Búcara mine, Poinar estimates that the bat fly got trapped 20 million to 30 million years ago, though the range could be as large as 15 million to 45 million years ago. Given that bats go back about 50 million years, the find means that the flies have been attacking bats for at least half the time they've existed, Poinar said.
The fossil is the first ever found of a streblid fly, possibly because insects in general don't preserve well unless they are trapped in a preservative substance such as amber. These bat flies leave their host only to mate, which is probably what this one was doing when it got trapped in sap, Poinar said.
Read more at Discovery News
"Bat flies are a remarkable case of specific evolution, animals that have co-evolved with bats and are found nowhere else," George Poinar, a zoologist at Oregon State University who led the studies, said in statement.
The highly specialized parasites, some of which only dine on specific bat species, spend most of their lives crawling through the animal's fur or on its wing membranes. They often have flattened, flea-like bodies with long legs, and can be winged or wingless, depending on the species.
Bat flies fall into one of two families: streblidae and nycteribiidae, which are mostly found in the Eastern and Western Hemispheres, respectively. Currently, scientists have only identified nycteribiid flies as vectors, or transmitters, for bat malaria, but researchers have now learned that streblids may also be spreading the disease.
In the La Búcara mine, located in the Cordillera Septentrional mountain range of the Dominican Republic, Poinar and his colleagues uncovered an ancient malaria-carrying streblid fly entombed in amber.
"While no malaria parasites have been found in extant streblids, they probably occur and it is possible that streblids were the earliest lineage of flies that transmitted bat malaria to Chiroptera [bats]," Poinar writes in one of his studies, published in December in the journal Parasites & Vectors.
Based on other fossils encased in amber from the La Búcara mine, Poinar estimates that the bat fly got trapped 20 million to 30 million years ago, though the range could be as large as 15 million to 45 million years ago. Given that bats go back about 50 million years, the find means that the flies have been attacking bats for at least half the time they've existed, Poinar said.
The fossil is the first ever found of streblid flies, possibly because insects in general don't preserve well, unless they are trapped in some kind of preservative-like substance such as amber; and these bat flies only leave their bat to mate — this is probably what the bat fly was doing when it got trapped in sap, Poinar said.
Read more at Discovery News
Little Ice Age Started With Volcanoes
In the late 13th century, massive volcanic explosions in the tropics destroyed mountain villages in northern Europe—not by burying them in lava and mudflows, mind you, but by triggering a cold spell that engulfed the towns in ice.
So goes the latest explanation for the underlying cause of a celebrated cold snap known as the Little Ice Age. Famous paintings from that period depict ice skaters on the Thames River in London and canals in the Netherlands, two places that were ice-free before then and have been so ever since.
Scientists had long known that the Little Ice Age started sometime after the Middle Ages and lasted for centuries. But estimates of its onset have ranged from the 13th to the 16th century, and arguments have raged over the cause.
Now a team of geologists led by Gifford Miller of the University of Colorado, Boulder, has identified an abrupt start for the cool spell, sometime between 1275 and 1300 A.D. Repeated, explosive volcanism cooled the climate and set off a self-perpetuating feedback cycle involving sea ice in the North Atlantic Ocean that sustained the cool spell into the 19th century, they reported this week in the journal Geophysical Research Letters.
“This is the first time anyone has clearly identified the specific onset of the cold times marking the start of the Little Ice Age,” Miller said in a press release. “We also have provided an understandable climate feedback system that explains how this cold period could be sustained for a long period of time.”
In the first part of the study, Miller and his colleagues collected roughly 150 dead plants from the receding ice margins of Baffin Island in the Canadian Arctic (see photograph above). Radiocarbon dating back in the lab revealed a large cluster of “kill dates” between 1275 and 1300 A.D., indicating the plants had been frozen and engulfed by ice during a relatively sudden event.
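The kill dates above come from radiocarbon dating, which converts a sample's measured carbon-14 fraction into an age. A minimal sketch of the conventional (uncalibrated) conversion, using the standard Libby mean life of 8033 years:

```python
import math

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age in years from the measured
    fraction of modern carbon-14 (8033 is the Libby half-life
    of 5568 years divided by ln 2)."""
    return -8033 * math.log(fraction_modern)

print(radiocarbon_age(1.0))         # ~0 years: modern material
print(round(radiocarbon_age(0.5)))  # 5568 years: one half-life

# A plant killed around 1285 A.D., roughly 725 years before
# sampling, would retain about 91% of its original carbon-14:
print(math.exp(-725 / 8033))        # ~0.914
```

Published studies then calibrate such raw ages against records like tree rings to obtain calendar dates.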
Only major volcanic eruptions could cool the climate that quickly, Miller’s team surmised, by kicking up particles that block some of the sun’s incoming energy. The timing jibed with a period of intense volcanic activity already known from the rock record, but they knew those ejected particles would have dissipated too quickly to sustain cooler temperatures for the full duration of the Little Ice Age.
The team used climate simulations to see what else might have been going on, combining feedback patterns known to have occurred in the ocean. Think of it as a step-by-step guide to how volcanoes in the tropics could engulf northern European towns in ice:
- A massive volcanic eruption rocks the tropics between 1275 and 1300 A.D.
- The eruptions cloud the skies across the northern hemisphere with shiny particles, called aerosols, that block some of the sun’s incoming energy.
- A cold snap ensues, killing off low-lying and higher elevation Arctic plants in one fell swoop.
- The volcano-induced cooling generates extra sea ice in the Arctic Ocean.
- Some of that sea ice makes its way south along the eastern coast of Greenland, melts in the North Atlantic Ocean, and stalls the ocean circulation patterns that usually send warmer waters back north.
- Water up north stays cold instead, sustaining the enlarged areas of sea ice.
- Within a 50-year period, three more massive eruptions intensify the cooling trend.
- The feedback cycle that sustains the sea ice perpetuates the colder regional climate for decades after the last of the volcanic aerosols rain out of the sky.
- With a volcano-induced cold spell now persisting for centuries, mountain glaciers in Norway and the Alps advance into inhabited valleys, destroying towns.
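The cycle above can be caricatured in a few lines of code. This is purely an illustrative toy model, with invented coefficients and none of the physics of the team's actual climate simulations; it shows only the qualitative point that brief forcing pulses plus a slow sea-ice feedback can lock in long-lived cooling:

```python
# Toy model: temperature anomaly (degrees C) under four brief
# volcanic cooling pulses and a persistent sea-ice feedback.
temp, ice = 0.0, 0.0
history = []
for year in range(300):
    # Each eruption's aerosols cool for only ~3 years.
    erupting = any(e <= year < e + 3 for e in (0, 15, 30, 45))
    forcing = -1.0 if erupting else 0.0
    # Sea ice grows quickly while it is cold, retreats slowly.
    ice = min(1.0, max(0.0, ice + (0.1 if temp < -0.2 else -0.01)))
    # Temperature relaxes toward a baseline depressed by the ice
    # (melting ice stalls the warm-water return circulation).
    temp += 0.2 * (-0.8 * ice - temp) + 0.3 * forcing
    history.append(temp)

# Long after the last aerosols are gone (year ~48), the ice
# feedback alone keeps the climate cold:
print(history[250])  # still below -0.5 C
```

The qualitative behavior matches the proposed mechanism: the aerosols start the cooling, but the sea-ice and ocean-circulation feedback sustains it for centuries.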
Read more at Discovery News
Feb 2, 2012
Russian Drill Nears 14-Million-Year-Old Antarctic Lake
After 20 years of drilling, a team of Russian researchers is close to breaching the prehistoric Lake Vostok, which has been trapped deep beneath Antarctica for the last 14 million years.
Vostok is the largest in a sub-glacial web of more than 200 lakes that are hidden 4 km beneath the ice. Some of the lakes formed when the continent was much warmer and still connected to Australia.
The lakes are rich in oxygen (making them oligotrophic), with levels of the element some 50 times higher than what would be found in your typical freshwater lake. The high gas concentration is thought to be because of the enormous weight and pressure of the continental ice cap.
If life exists in Vostok, it will have to be an extremophile — a life form that has adapted to survive in extreme environments. The organism would have to withstand high pressure, constant cold, low nutrient input, high oxygen concentration and an absence of sunlight.
The conditions in Lake Vostok are thought to be similar to the conditions on Jupiter’s moon Europa and Saturn’s tiny moon Enceladus. In June, NASA probe Cassini found the best evidence yet for a massive saltwater reservoir beneath the icy surface of Enceladus. This all means that finding life in the inhospitable depths of Vostok would strengthen the case for life in the outer solar system.
Back on planet Earth, the team at Vostok are running short on time. Antarctica’s summer will soon end and the researchers need to leave their remote base while they still can. Temperatures will drop as low as -80°C, grounding planes and trapping the team.
They missed their chance last year. “Time is short, however. It’s possible that the drillers won’t be able to reach the water before the end of the current Antarctic summer, and they’ll need to wait another year before the process can continue,” we wrote in January 2011. The drill halted in February.
Read more at Wired Science
The Earliest Copy Of Mona Lisa Found
Conservators at Madrid's Prado museum have identified what they believe is the earliest copy of Leonardo da Vinci's Mona Lisa.
Brighter-faced and younger than the original, which hangs in the Louvre in Paris, the lady in the portrait had long stood against a black background.
Art historians thought it was just one of dozens of replicas produced in the centuries after Leonardo's death.
But as paint layers were stripped away during recent restoration work, a landscape closely resembling the backdrop of Leonardo's masterpiece emerged.
Intrigued, the curators turned to infrared reflectography, a technique that reveals what lies beneath the painted surface. They compared images obtained in 2004 from the original Mona Lisa with the Madrid copy.
"In the under-drawing you can see changes which are only apparent underneath the surface of the Louvre painting," said Gabriele Finaldi, deputy director of conservation at the Prado Museum.
The discovery suggests the picture was being produced at the same time as Leonardo was painting his masterpiece.
"The artist of this picture was making the same changes Leonardo was introducing into the original," Finaldi said.
Listed in the 1666 inventory of Madrid’s Alcazar Palace, the painting, which is close in size to the original Mona Lisa, enjoyed much attention in the past. Attributed to Leonardo himself, it was copied by various artists, such as Gaspare Sensi, known as Gaspar Sensi y Baldachi (1794-1880).
According to Alessandro Vezzosi, director of the Museo Ideale in Vinci, where Leonardo was born in 1452, the discovery is extremely important.
Vezzosi, curator of the exhibition Leonardo Da Vinci and His Idea of Beauty, which will open in March in Tokyo, Japan, with more than 20 Mona Lisa-inspired works, believes the copy was painted by a Spanish artist.
Read more at Discovery News
New Alien Planet Ripe for Life?
Scientists have discovered a planet about five times as massive as Earth orbiting at the right distance from its parent star for liquid water to exist on its surface, a condition believed to be necessary for life.
The newly found planet circles a star dimmer than the sun, located 22 light-years away in the constellation Scorpius (also known as Scorpio). It orbits its parent star once every 28.15 days.
The planet sits far closer to its star than Earth does to the sun, but its parent star, known as GJ 667C, is a small dwarf that emits most of its light as infrared radiation. That means GJ 667C's so-called "habitable zone" -- the region where surface water can exist in liquid form -- lies much closer in than the sun's.
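The habitable-zone argument above rests on a simple scaling: the distance at which a planet receives Earth-like stellar flux grows with the square root of the star's luminosity. Here is a minimal sketch in Python; the luminosity used for GJ 667C (about 1.4 percent of the sun's) is an illustrative assumption, not a figure from the article:

```python
import math

def habitable_zone_center_au(luminosity_solar):
    """Distance (in AU) at which a planet receives the same stellar flux
    Earth gets from the sun. Flux falls off as 1/d**2, so the Earth-equivalent
    distance scales as the square root of the star's luminosity."""
    return math.sqrt(luminosity_solar)

# For the sun, the Earth-equivalent flux distance is 1 AU by definition.
print(habitable_zone_center_au(1.0))               # 1.0

# For a dim M dwarf like GJ 667C (assumed ~1.4% solar luminosity),
# the zone pulls in to roughly a tenth of an AU.
print(round(habitable_zone_center_au(0.014), 2))   # 0.12
```

This is why a planet on a 28-day orbit, which would be scorched around a sun-like star, can still sit in a dim dwarf's temperate zone.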
Scientists don't know if the newly discovered world, called GJ 667Cc, is solid or not.
"It is possible to determine, but we have to be lucky. The planet would have to transit in front of the star. We haven't checked yet if that happens," lead researcher Guillem Anglada-Escudé, formerly with the Carnegie Institution for Science, told Discovery News.
GJ 667Cc was discovered after Anglada-Escudé and his team rechecked data collected by a rival planet-hunting group in Europe called HARPS.
That team earlier announced a discovery of a super-Earth around the same star, but one that orbits in just 7.2 days -- too close to be in the star’s habitable zone.
Anglada-Escudé and colleagues developed new software to recheck that planet’s orbital information and came up with its more fortuitously positioned sibling. The system also may include a gas-giant planet and a third super-Earth with an orbital period of 75 days.
Read more at Discovery News
Groundhog Day 2012
And the verdict from Punxsutawney Phil after seeing his shadow this morning during the 126th annual Groundhog Day festivities is: six more weeks of winter!
Ah, Groundhog Day. This U.S. and Canadian tradition comes every year on Feb. 2. It has its roots in astronomy, in the sense that it’s a seasonal festival, tied to the movement of Earth around the sun. In the U.S. and Canada, we call it Groundhog Day – a great excuse to go outside and enjoy some revelry during the winter months.
We all know the rules of Groundhog Day. On Feb. 2, a groundhog is said to forecast weather by looking for his shadow. If it’s sunny out, and he sees it, we’re in for six more weeks of winter. On the other hand, a cloudy Groundhog Day is supposed to forecast an early spring.
Of course, it can’t be cloudy, or sunny, everywhere. And many towns in the U.S. and Canada have their own local groundhogs and local traditions for Groundhog Day. But by far the most famous of the February 2 shadow-seeking groundhogs is still Punxsutawney Phil. He’s in Punxsutawney, in western Pennsylvania, which calls itself the “original home of the great weather prognosticator, His Majesty, the Punxsutawney Groundhog.”
Since 1887, members of the Punxsutawney Groundhog Club have held public celebrations of Groundhog Day. Punxsutawney is the setting of the Bill Murray movie Groundhog Day. From the looks of things … a good time is had by all.
What you might not know is that Groundhog Day is really an astronomical holiday, marking a point in Earth's orbit around the sun as we move between the solstices and equinoxes. Groundhog Day falls more or less midway between the December solstice and the March equinox, which makes it what astronomers call a cross-quarter day. Each cross-quarter day is actually a collection of dates, and various traditions celebrate various holidays at this time. Feb. 2 is the year's first cross-quarter day.
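The "more or less midway" claim above is easy to check with date arithmetic. A quick sketch, assuming approximate calendar dates for the 2011 December solstice and 2012 March equinox (the exact instants shift by a day or so from year to year):

```python
from datetime import date

# Approximate dates for illustration; the true solstice/equinox moments
# vary slightly year to year.
solstice = date(2011, 12, 22)
equinox = date(2012, 3, 20)

# Halfway point of the 89-day interval. Adding a timedelta to a date
# ignores any sub-day remainder, so the half-day is dropped.
midpoint = solstice + (equinox - solstice) / 2
print(midpoint)  # 2012-02-04
```

That lands within a couple of days of February 2, which is why Groundhog Day counts as a cross-quarter day.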
Of course, the division of the year into segments is common to many cultures. Our ancestors were more aware of the sun’s movements across the sky than we are, since their plantings and harvests depended on it.
In the ancient Celtic calendar, the year is also divided into quarter days (equinoxes and solstices) and cross-quarter days on a great neopagan wheel of the year. Thus, just as some Christians, such as Roman Catholics, mark February 2 with the celebration of Candlemas, contemporary pagans call this day Imbolc and consider it a traditional time for initiations.
The celebration of Groundhog Day came to America along with immigrants from Great Britain and Germany. The tradition can be traced to early Christians in Europe, when a hedgehog was said to look for his shadow on Candlemas Day.
Try this old English rhyme: If Candlemas Day be fair and bright, winter will have another flight. But if it be dark with clouds and rain, winter is gone and will not come again.
Or here’s another old saying: Half your wood and half your hay, you should have on Candlemas Day.
In Germany it used to be said: A shepherd would rather see a wolf enter his stable on Candlemas Day than see the sun shine. There, a badger was said to watch for his shadow.
A friend on Facebook said that, in Portugal, people have a poem about February 2 related to the Lady of Candles. Here is the poem: Quando a Senhora das Candeias está a rir está o inverno para vir, quando está a chorar está o inverno a acabar. Here is the translation: If she smiles (Sun) the winter is yet to come, if she cries (Rain) the winter is over.
One final note. It’s supposed to be bad luck to leave your Christmas decorations up after Groundhog Day.
Read more at Discovery News
Feb 1, 2012
A Spider Web's Strength Lies in More Than Its Silk
While researchers have long known of the incredible strength of spider silk, the robust nature of the tiny filaments cannot alone explain how webs survive multiple tears and winds that exceed hurricane strength.
Now, a study that combines experimental observations of spider webs with complex computer simulations shows that web durability depends not only on silk strength, but on how the overall web design compensates for damage and the response of individual strands to continuously varying stresses.
Reporting in the cover story of the Feb. 2, 2012, issue of Nature, researchers from the Massachusetts Institute of Technology (MIT) and the Politecnico di Torino in Italy show how spider web design localizes strain and damage, preserving the web as a whole.
"Multiple research groups have investigated the complex, hierarchical structure of spider silk and its amazing strength, extensibility and toughness," says Markus Buehler, associate professor of civil and environmental engineering at MIT. "But, while we understand the peculiar behavior of dragline silk from the 'nanoscale up'--initially stiff, then softening, then stiffening again--we have little insight into how the molecular structure of silk uniquely improves the performance of a web."
The spider webs found in gardens and garages are made from multiple silk types, but viscid silk and dragline silk are most critical to the integrity of the web. Viscid silk is stretchy, wet and sticky, and it is the silk that winds out in increasing spirals from the web center. Its primary function is to capture prey. Dragline silk is stiff and dry, and it serves as the threads that radiate out from a web's center, providing structural support. Dragline silk is crucial to the mechanical behavior of the web.
Some of Buehler's earlier work showed that dragline silk is composed of a suite of proteins with a unique molecular structure that lends both strength and flexibility. "While the strength and toughness of silk has been touted before--it is stronger than steel and tougher than Kevlar by weight--the advantages of silk within a web, beyond such measures, has been unknown," Buehler adds.
The common spiders represented in the recent study, including orb weavers (Nephila clavipes), garden spiders (Araneus diadematus) and others, craft familiar, spiraling web patterns atop a scaffolding of radiating filaments. Building each web takes energy the spider cannot afford to expend often, so durability is key to the arachnid's survival.
Through a series of computer models matched to laboratory experiments with spider webs, the researchers were able to tease apart what factors play what role in helping a web endure natural threats that are either localized, such as a twig falling on a filament, or distributed, such as high winds.
"For our models, we used a molecular dynamics framework in which we scaled up the molecular behavior of silk threads to the macroscopic world. This allowed us to investigate different load cases on the web, but more importantly, it also allowed us to trace and visualize how the web fractured under extreme loading conditions," says Anna Tarakanova, who developed the computer models along with Steven Cranford, both graduate students in Buehler's laboratory.
"Through computer modeling of the web," Cranford adds, "we were able to efficiently create 'synthetic' webs, constructed out of virtual silks that resembled more typical engineering materials such as those that are linear elastic, like many ceramics, and elastic-plastic materials, which behave like many metals. With the models, we could make comparisons between the modeled web's performance and the performance seen in the webs made from natural silk. In addition, we could analyze the web in terms of energy, and details of the local stress and strain," which are traits experiments were able to reveal.
The study showed that, as one might expect, when any part of a web is perturbed, the whole web reacts. Such sensitivity is what alerts a spider to the struggling of a trapped insect. However, the radial and spiral filaments each play different roles in attenuating motion, and when stresses are particularly harsh, they are sacrificed so that the entire web may survive.
"The concept of selective, localized failure for spider webs is interesting since it is a distinct departure from the structural principles that seem to be in play for many biological materials and components," adds Dennis Carter, the NSF program director for biomechanics and mechanobiology who helped support the study.
"For example, the distributed material components in bone spread stress broadly, adding strength. There is no 'wasted' material, minimizing the weight of the structure. While all of the bone is being used to resist force, bone everywhere along the structure tends to be damaged prior to failure."
In contrast, a spider's web is organized to sacrifice local areas so that failure will not prevent the remaining web from functioning, even if in a diminished capacity, says Carter. "This is a clever strategy when the alternative is having to make an entire, new web," he adds. "As Buehler suggests, engineers can learn from nature and adapt the design strategies that are most appropriate for specific applications."
Specifically, when a radial filament in a web is snagged, the web deforms more than when a relatively compliant spiral filament is caught. However, when either type fails--under great stress--it is the only filament to fail.
The unique nature of the spider-silk proteins enhances that effect. When a filament is pulled, the silk's unique molecular structure--a combination of amorphous proteins and ordered, nanoscale crystals--unfurls as stress increases, leading to a stretching effect that has four distinct phases: an initial, linear tugging; a drawn out stretching as the proteins unfold; a stiffening phase that absorbs the greatest amount of force; and then a final, stick-slip phase before the silk breaks.
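The four phases described above can be sketched as a toy piecewise stress-strain curve. Every number below is invented purely for illustration; real dragline silk parameters come from experiments, not from this article:

```python
def silk_stress(strain):
    """Toy stress-strain curve mimicking the four phases of dragline silk
    described above. Units and break points are made up for illustration."""
    if strain < 0.1:
        # Phase 1: initial stiff, linear tugging.
        return 10.0 * strain
    elif strain < 0.4:
        # Phase 2: softened, drawn-out stretching as proteins unfold.
        return 1.0 + 2.0 * (strain - 0.1)
    elif strain < 0.6:
        # Phase 3: steep re-stiffening that absorbs the most force.
        return 1.6 + 30.0 * (strain - 0.4)
    elif strain < 0.7:
        # Phase 4: near-flat stick-slip regime just before rupture.
        return 7.6 + 1.0 * (strain - 0.6)
    else:
        # The thread has broken and carries no load.
        return 0.0
```

The key qualitative feature is the middle softening followed by sharp stiffening: a heavily loaded thread absorbs disproportionate stress and fails alone, which is the localization effect the study describes.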
According to the researchers' findings, the failure of silk threads occurs at points where the filament is disturbed by that external force, but after failure, the web returns to stability--even in simulations using broad forces, like hurricane-force winds.
Read more at Science Daily
Sex Is Scary and Castrating for Spider
Sex is fast and scary for some male spiders. And the outcome isn't the most comfortable for the females, either.
Copulation is so treacherous for the males, in fact, that some castrate themselves during the act, leaving behind their sexual organ, which in turn plugs up the cannibalistic female as they run for their lives.
New research shows that the males may win out in the end, however, since their severed part actually increases the amount of transferred sperm, heightening the now-sterile male's chances of paternity.
The study, published in the Royal Society journal Biology Letters, is the first to demonstrate that the sexual phenomenon known as "remote copulation," when the male's sexual organ works without being attached to the male, has not evolved to benefit the female. In fact, the painful-sounding process turns out to be a win-win for males.
"He achieves continuous sperm transfer after having been removed by the aggressive female, or has moved away himself," co-author Matjaz Kuntner told Discovery News. "At the same time, his palp (sexual organ) plugs the female, thereby monopolizing her."
Kuntner, an evolutionary biologist at the Slovenian Academy of Sciences, and his colleagues studied the highly sexually cannibalistic orb-web spider Nephilengys malabarensis. The findings, however, likely apply to other spiders, such as those in the genus Herennia and Tidarren, and could even apply to other non-spider species with similar mating habits.
For the study, the scientists collected numerous N. malabarensis spiders in the field. They then chose virgins for their experiments.
The researchers began by introducing a virgin male onto a virgin female web. They recorded what happened next, and then counted the amount of sperm under a compound microscope for each spider pairing.
"The copulation is very short, 3-35 seconds," lead author Daiqin Li of the National University of Singapore told Discovery News. "Copulation duration (mean: 7 seconds) resulting from a female-initiated break off is even shorter than that caused by a male-initiated break off (mean: 12 seconds). Males try to escape from females very fast, and then will guard the female if they can manage to escape."
If they don't escape, she eats them. If they do get away, chances are that they severed the joint attaching their sexual organ before running. Close to 90 percent of all spiders studied used the cut-and-run tactic.
The male is left sterile, but seems to gain agility and testiness in his new eunuch state.
"He lives for at least weeks longer," Kuntner said, explaining that males of this species don’t live all that long anyway. "The male benefits from being more aggressive in order to secure his paternity, that is, he defends the female from subsequent rivals."
The discovery helps to explain how mating succeeds for some amazingly different males and females. For many cannibalistic spiders, such as black widows and those of N. malabarensis, the females are enormous and deadly when compared to the smaller, less toxic males. But remote copulation and other evolved tactics keep their sexual conflicts in check.
For example, females of the Australian redback spider, one of the world's most venomous spiders and a close relative of the black widow, demand 100 minutes of courting or else they usually cannibalize their male suitors. But scrawny males of this species can win at love without exerting much effort.
Read more at Discovery News
Copulation is so treacherous for the males, in fact, that some castrate themselves during the act, leaving behind their sexual organ that in turn, plugs up the cannibalistic female as they run for their lives.
New research shows that the males may win out in the end, however, since their severed part actually increases the amount of transferred sperm, heightening the now-sterile male's chances of paternity.
The study, published in the Royal Society journal Biology Letters, is the first to demonstrate that the sexual phenomenon known as "remote copulation," when the male's sexual organ works without being attached to the male, has not evolved to benefit the female. In fact, the painful-sounding process turns out to be a win-win for males.
"He achieves continuous sperm transfer after having been removed by the aggressive female, or has moved away himself," co-author Matjaz Kuntner told Discovery News. "At the same time, his palp (sexual organ) plugs the female, thereby monopolizing her."
Kuntner, an evolutionary biologist at the Slovenian Academy of Sciences, and his colleagues studied the highly sexually cannibalistic orb-web spider Nephilengys malabarensis. The findings, however, likely apply to other spiders, such as those in the genus Herennia and Tidarren, and could even apply to other non-spider species with similar mating habits.
For the study, the scientists collected numerous N. malabarensis spiders in the field. They then chose virgins for their experiments.
The researchers began by introducing a virgin male onto a virgin female web. They recorded what happened next, and then counted the amount of sperm under a compound microscope for each spider pairing.
"The copulation is very short, 3-35 seconds," lead author Daiqin Li of the National University of Singapore told Discovery News. "Copulation duration (mean: 7 seconds) resulting from a female-initiated break off is even shorter than that caused by a male-initiated break off (mean: 12 seconds). Males try to escape from females very fast, and then will guard the female if they can manage to escape."
If they don't escape, she eats them. If they do get away, chances are that they severed the joint attaching their sexual organ before running. Close to 90 percent of all spiders studied used this cut-and-run tactic.
The male is left sterile, but seems to gain agility and testiness in his new eunuch state.
"He lives for at least weeks longer," Kuntner said, explaining that males of this species don’t live all that long anyway. "The male benefits from being more aggressive in order to secure his paternity, that is, he defends the female from subsequent rivals."
The discovery helps to explain how mating succeeds for some amazingly different males and females. For many cannibalistic spiders, such as black widows and those of N. malabarensis, the females are enormous and deadly when compared to the smaller, less toxic males. But remote copulation and other evolved tactics keep their sexual conflicts in check.
For example, females of the Australian redback spider, one of the world's most venomous spiders and a close relative of the black widow, demand 100 minutes of courting or else they usually cannibalize their male suitors. But scrawny males of this species can win at love without exerting much effort.
Read more at Discovery News
Why Dinosaurs Were So Huge
How did some dinosaurs reach such soaring heights -- up to 100 feet high in some cases? Efficient lungs and respiration, along with egg laying, might have given dinos a growth edge when compared to other animals, suggests new research.
The study also negates a popular theory that animals tended to become bigger over the course of their evolution.
While some dinosaurs grew ever larger over subsequent generations, not all did.
"We look at the early history of archosaurs, including some early dinosaurs," said Roger Benson who co-authored the study published in the Proceedings of the Royal Society B. "We can see that some lineages obtained gigantic body sizes, but others remained small and a few showed evolutionary size reductions."
Benson, a vertebrate paleontologist at the University of Cambridge, explained that "pterosaurs, the flying reptiles, are a good example of a lineage that remained small during our study interval. There were also many small herbivores, like the dinosaur Heterodontosaurus, and small predators like the dinosaur Coelophysis."
Benson and colleagues Roland Sookias and Richard Butler analyzed more than 400 species spanning the Late Permian to Middle Jurassic periods. The animals' pattern of growth during 100 million years supports a theory called "passive diffusion," which simply means that different evolutionary lineages went in different directions, some growing larger and others growing smaller.
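The passive diffusion idea can be illustrated with a toy simulation (a hypothetical sketch, not the method the researchers used): if each lineage takes an unbiased random walk in log body size, the average size stays roughly flat while a few lineages still drift to giant sizes, so giants can appear without any built-in trend toward bigness.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

N_LINEAGES = 100   # hypothetical number of independent lineages
N_STEPS = 100      # generations (arbitrary)

def simulate():
    """Unbiased random walk in log body size for each lineage."""
    finals = []
    for _ in range(N_LINEAGES):
        size = 0.0  # log body size relative to the ancestor
        for _ in range(N_STEPS):
            size += random.choice((-1.0, 1.0))  # shrink or grow, equally likely
        finals.append(size)
    return finals

finals = simulate()
mean_size = sum(finals) / len(finals)
max_size = max(finals)
# Under passive diffusion the mean barely moves, yet some lineages
# wander far above the ancestral size -- no directional
# (Cope's-rule-style) trend is required to produce giants.
print(f"mean log-size change: {mean_size:+.2f}, largest lineage: {max_size:+.2f}")
```

A Cope's-rule world would instead bias every step upward; the contrast is that here the spread of sizes grows while the average does not.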
The findings counter a theory known as "Cope's rule," which claims that some groups, such as dinosaurs, tended to always evolve bigger bodies over time.
There is no question, however, that many dinosaurs were mega huge, at least when compared to today's land animals.
"Several aspects of dinosaurian biology may have allowed them to obtain larger maximum sizes than any other land animals," Benson said.
"For example, in many dinosaurs, parts of the skeleton contained air, and we think they had an efficient bird-like lung. These features helped them to support their weight on land more easily, and made their respiration and heat exchange more effective than in mammals."
Benson adds that since larger animals can lay more eggs and reproduce more quickly, there may have been a reproductive advantage to being big.
Brian McNab, a professor of zoology at the University of Florida, has also studied dinosaur growth trends. He thinks the biggest dinos ate often and moved little.
Read more at Discovery News
We're Living in a Space Cloud
A NASA robotic probe sampling particles flowing into our solar system from the galactic neighborhood shows we're living in a cloud -- and likely to stay that way for hundreds or even thousands of years.
The measurements from NASA's Interstellar Boundary Explorer, or IBEX, spacecraft include the first direct samplings of hydrogen, oxygen and neon that didn't come from the sun or anywhere else in the solar system.
Instead, the gases, along with helium, which was previously detected by NASA's Ulysses spacecraft, streamed into our solar system from the galactic neighborhood, which right now includes a tenuous wispy cloud.
The flow of interstellar particles is slower than expected, though still zipping along at a hearty 52,000 mph, and coming from a slightly different direction than previously thought.
That has several implications, including a new assessment that the sun and its brood of planets, asteroids, comets and everything else within the sun's protective, pressurized cocoon -- a structure known as the heliosphere -- is not going to be leaving the cloud that has been our home for the past 45,000 years or so anytime soon.
The slower flow also means that the heliosphere faces less pressure from the outside, making it more susceptible to external magnetic forces. Consequently, it has a different shape than previously thought, more like a squashed beach ball than a speeding bullet.
"The heliosphere is essentially the balance between outward-moving solar wind and the compression from the gas and dust that surround it, so if you're in a different interstellar medium environment, you're going to create a different heliospheric structure," said astronomer Seth Redfield with Wesleyan University.
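The balance Redfield describes can be sketched with a back-of-the-envelope calculation (all numbers below are typical textbook values, not figures from the IBEX study): the solar wind's ram pressure falls off as 1/r², so the heliopause sits roughly where that pressure drops to the ambient interstellar pressure.

```python
import math

# Hedged, typical values -- not taken from the IBEX measurements.
M_P = 1.67e-27   # proton mass, kg
N_SW = 7e6       # solar wind proton density at 1 AU, m^-3 (~7 per cm^3)
V_SW = 4.5e5     # solar wind speed, m/s (~450 km/s)
P_ISM = 2e-13    # rough total pressure of the local interstellar cloud, Pa

# Ram pressure of the solar wind at 1 AU: rho * v^2
p_ram_1au = N_SW * M_P * V_SW**2

# Ram pressure dilutes as 1/r^2, so the balance point is at:
#   p_ram_1au / r^2 = P_ISM  =>  r = sqrt(p_ram_1au / P_ISM)  (r in AU)
r_heliopause_au = math.sqrt(p_ram_1au / P_ISM)
print(f"solar wind ram pressure at 1 AU: {p_ram_1au:.2e} Pa")
print(f"estimated heliopause distance: {r_heliopause_au:.0f} AU")
```

With these inputs the estimate lands near 100 AU, the right ballpark for the heliopause; a lower external pressure (the weaker compression implied by the slower measured flow) pushes the boundary outward and, as the article notes, leaves it more easily reshaped by external magnetic fields.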
That in turn impacts how effectively the heliosphere shields the solar system from galactic cosmic rays and other high-energy radiation.
"As the sun moves through space and moves in and out of interstellar clouds, the flux of galactic cosmic rays at the Earth really changes. Someday maybe we'll be able to link the sun's motion through interstellar clouds with the geologic history of Earth. I think that would be really exciting," added University of Chicago senior scientist Priscilla Frisch.
Read more at Discovery News
Jan 31, 2012
Evolutionary Geneticist Helps to Find Butterfly Gene, Clue to Age-Old Question
Years after sleeping in hammocks in the wilds of Peru and Panama, collecting hundreds of thousands of samples of colorful insects, Mississippi State assistant professor Brian Counterman now is helping unlock a very difficult puzzle.
The more-than-century-long challenge has involved a secret of the Heliconius butterfly, the orange, black, yellow, and red insect that hasn't easily communicated how all its radiant colors came to be.
For evolutionary biologists, and especially geneticists like Counterman, the butterflies--commonly called passion vine butterflies--make perfect research subjects for better understanding the important scientific question: How do organisms change to survive?
Over the past decade, the researcher in the university's biological sciences department has been part of an international team using field experiments, genetic mapping, population genetics, and phylogenetics to study the butterflies' biology and history.
A Duke University doctoral graduate in biology and evolutionary genetics, Counterman studied genetics of adaptation as part of his post-doctoral research at North Carolina State University. He joined the MSU faculty in 2009.
Passion vine butterflies are found throughout South and Central America. Through the years, scientists observed that Heliconius butterflies with certain red patterns survived in certain areas, while others didn't.
"There are very few cases that we know what traits determine if an organism will survive in nature," Counterman said, adding that he and a team of researchers recently uncovered the gene responsible for the different red wing patterns.
Their findings were featured in the July issue of Science.
Counterman said the butterflies use red as a warning signal to birds and other predators that they are poisonous and should not be consumed.
"This is one of the first examples where we've found the genetic change that allowed (an organism) to live or die in nature," he observed, adding that finding the red gene was just the first step in understanding how they have survived.
Counterman and his team further analyzed the red gene to reconstruct when the different red patterns evolved, providing important clues into how rapidly new adaptations can arise and spread in populations that nearly encompass entire continents.
This research was showcased on the cover in a December issue of the Proceedings of the National Academy of Sciences.
For scientists like Counterman, finding answers to these questions may give insight about how and why the diversity in the world evolved. And, there is still more to come.
Counterman now is part of a team sequencing the entire Heliconius genome--one of the first butterfly genomes--that should open the door to a new level of questioning into the biological causes for one of the most charismatic groups of organisms on earth.
Read more at Science Daily
Online News Portals Get Credibility Boost from Trusted Sources
People who read news on the Web tend to trust the gate even if there is no gatekeeper, according to Penn State researchers. When readers access a story from a credible news source they trust through an online portal, they also tend to trust the portal, said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory. Most of these portals use computers, not people, to automatically sort and post stories.
Sundar said this transfer of credibility provides online news portals -- such as Yahoo News and Google News -- with most of the benefits, but with little of the costs associated with online publishing.
"A news portal that uses stories from a credible source gets a boost in credibility and might even make money through advertising," said Sundar. "However, if there is a lawsuit for spreading false information, for example, it's unlikely that the portal will be named in the suit."
Sundar said the flow of credibility did not go both ways. He said that reading a low-credibility story on a high-credibility portal did not make the original source more trustworthy.
The researchers, who reported their findings in Journalism and Mass Communication Quarterly, asked a group of 231 students to read online news stories. After reading the stories, the students rated the credibility of the original source and the portal.
The researchers placed banners from Google News, which served as the high-credibility portal, and the Drudge Report, which served as the low-credibility portal, on the pages. They also added banners to identify the New York Times -- the high-credibility source -- and the National Enquirer -- the low-credibility source.
The students were significantly more likely to consider a portal credible if the source of the story was trustworthy. The credibility of the portal suffered if the source lacked trustworthiness.
Sundar said that attention to sources depended on the involvement of the reader. When readers were particularly interested in the story, they tended to more thoroughly evaluate all the sources involved in the production and distribution of that news. People who were not interested in the story based their judgments on the credibility of the portal, which is the most immediately visible source.
Sundar, who worked with Hyunjin Kang and Keunmin Bae, both doctoral students in communications, and Shaoke Zhang, doctoral student in information sciences and technology, said that the way credibility is transferred from site to site shows the complexity of the relationship between online news readers and sources.
Evaluating credibility is difficult on the web because there are often chains of news sources for a story, Sundar said. For example, a person may find a story on an online news portal, forward the information to another friend through email, who then posts it on a social network. The identity of the original source may or may not be carried along this chain to the final reader.
Read more at Science Daily
Gorilla Grins Hint at Origin of Human Smiles
Psychologists from the University of Portsmouth have published a paper suggesting gorillas use human-like facial expressions to communicate moods with one another. Not only that, but two of the expressions, both of which resemble grinning, could show the origins of the human smile.
However, the findings published in the American Journal of Primatology show their smiles mean different things. The Portsmouth researchers found these expressions, observed in Western Lowland gorillas, expressed a number of emotions.
One, a “play face”, featuring an open mouth and showing no teeth, denotes a playful mood, usually accompanied with physical contact. Another, which is open-mouthed and displaying top teeth, could be a submissive smile — as it mixes the play face and a bared-teeth expression, which indicates appeasement.
“Many primate species also show their teeth when they scream,” Bridget Waller, the lead researcher told Wired.co.uk in an e-mail. “These expressions tend to look different to the expressions I studied in gorillas, as the upper and lower teeth are both exposed, and the mouth widely open. The expression is more tense, and accompanied by very different vocalisations. The vocalised element of the scream can differ depending on whether the screamer is an aggressor or a victim.”
In short: subtle differences in facial expression and vocals mean quite different things in primate posturing — one is obedient and appeasing, the other screaming and aggressive. But does this mean that our own smile is inherently passive and submissive?
“In some primate species the bared-teeth display (the expression similar to the human smile) is used only by subordinates, but these species have a very different social organisation to humans,” says Waller. “They tend to have very strict dominance hierarchies, whereas we have a more relaxed social structure. So, in some circumstances humans might use smiling as a subordinate signal, but it can also be used as a genuine signal of friendliness.”
Read more at Wired Science
Drug-Resistant Bugs Found in Organic Meat
If you’re paying premium prices for pesticide- and antibiotic-free meat, you might expect that it’s also free of antibiotic-resistant bacteria. Not so, according to a new study. The prevalence of one of the world’s most dangerous drug-resistant microbe strains is similar in retail pork products labeled “raised without antibiotics” and in meat from conventionally raised pigs, researchers have found.
Methicillin-resistant Staphylococcus aureus (MRSA), a drug-resistant form of the normally harmless S. aureus bacterium, kills 18,000 people in the United States every year and sickens 76,000 more. The majority of cases are linked to a hospital stay, where the combination of other sick people and surgical procedures puts patients at risk. But transmission also can happen in schools, jails, and locker rooms (and an estimated 1.5% of Americans carry MRSA in their noses). All of this has led to a growing concern about antibiotic use in agriculture, which may be creating a reservoir of drug-resistant organisms in billions of food animals around the world.
Tara Smith, an epidemiologist at the University of Iowa College of Public Health in Iowa City who studies the movement of staph bacteria between animals and people, wondered whether meat products might be another mode of transmission. For the new study, published this month in PLoS ONE, she and colleagues bought a variety of pork products—395 packages in all—from 36 different stores in two big pig farming states, Iowa and Minnesota, and one of the most densely populated, New Jersey.
In the laboratory, the team mixed meat samples “vigorously” with a bacterial growth medium and allowed any microbes present to grow. MRSA, which appears as mauve-colored colonies on agar plates, was genetically typed and tested for antibiotic susceptibility.
The researchers found that 64.8% of the samples were positive for staph bacteria and 6.6% were positive for MRSA. Rates of contamination were similar for conventionally raised pigs (19 of 300 samples) and those labeled antibiotic-free (seven of 95 samples). Results of genetic typing identified several well-known strains, including the so-called livestock-associated MRSA (ST398) as well as common human strains; all were found in conventional and antibiotic-free meat.
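The claim that the contamination rates were "similar" can be sanity-checked with the reported counts themselves. A quick two-proportion z-test (a standard textbook comparison, not necessarily the analysis the study's authors performed) shows the difference between 19 of 300 and 7 of 95 is nowhere near statistical significance:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-statistic using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# MRSA-positive package counts reported in the article:
# conventional pork vs. pork labeled antibiotic-free.
z = two_proportion_z(19, 300, 7, 95)
print(f"conventional: {19/300:.1%}, antibiotic-free: {7/95:.1%}, z = {z:.2f}")
# |z| is far below 1.96, the usual 5% significance threshold, so the
# two contamination rates are statistically indistinguishable here.
```

Note too that the pooled rate, 26 of 395, matches the 6.6% overall MRSA figure the researchers report.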
Smith says she was surprised by the results. In a related investigation, which has not been published, her group tested pigs living on farms and found that antibiotic-free pigs were free from MRSA, whereas the resistant bug is often found on conventional pig farms.
The study reveals an important data point on the path from farm to fork, yet the source of the MRSA on meat products is unknown, Smith says. “It’s difficult to figure out.” Transmission of resistant bugs might occur between antibiotic-using and antibiotic-free operations, especially if they’re near each other, or it could come from farm workers themselves. Another possibility is that contamination occurs at processing plants. “Processing plants are supposed to be cleaned between conventional and organic animals,” she says. “But how well does that actually happen?”
In another recent study, researchers from Purdue University in West Lafayette, Indiana, found that beef products from conventionally raised and grass-fed animals were equally likely to be contaminated by antibiotic-resistant Escherichia coli. In a second study by the same group, poultry products labeled “no antibiotics added” carried antibiotic-resistant E. coli and Enterococcus (another bacterium that causes invasive disease in humans), although the microbes were less prevalent than on conventionally raised birds.
“The real question is, where is it coming from, on the farm or post-farm?” says Paul Ebner, a food safety expert who led the Purdue studies. And the biggest question of all, he says, “Is it impacting human health?”
“There’s a tremendous amount of interest in this issue—feeding antibiotics to food animals,” says Ellen Silbergeld, an expert on health and environmental impacts of industrial food animal production at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. “Thus, determining when amending that practice makes a difference is important.”
Read more at Wired Science
Methicillin-resistant Staphylococcus aureus (MRSA), a drug-resistant form of the normally harmless S. aureus bacterium, kills 18,000 people in the United States every year and sickens 76,000 more. The majority of cases are linked to a hospital stay, where the combination of other sick people and surgical procedures puts patients at risk. But transmission also can happen in schools, jails, and locker rooms (and an estimated 1.5% of Americans carry MRSA in their noses). All of this has led to a growing concern about antibiotic use in agriculture, which may be creating a reservoir of drug-resistant organisms in billions of food animals around the world.
Tara Smith, an epidemiologist at the University of Iowa College of Public Health in Iowa City who studies the movement of staph bacteria between animals and people, wondered whether meat products might be another mode of transmission. For the new study, published this month in PLoS ONE, she and colleagues bought a variety of pork products—395 packages in all—from 36 different stores in two big pig farming states, Iowa and Minnesota, and one of the most densely populated, New Jersey.
In the laboratory, the team mixed meat samples “vigorously” with a bacterial growth medium and allowed any microbes present to grow. MRSA, which appears as mauve-colored colonies on agar plates, was genetically typed and tested for antibiotic susceptibility.
The researchers found that 64.8% of the samples were positive for staph bacteria and 6.6% were positive for MRSA. Rates of contamination were similar for conventionally raised pigs (19 of 300 samples) and those labeled antibiotic-free (seven of 95 samples). Results of genetic typing identified several well-known strains, including the so-called livestock-associated MRSA (ST398) as well as common human strains; all were found in conventional and antibiotic-free meat.
Smith says she was surprised by the results. In a related investigation, which has not been published, her group tested pigs living on farms and found that antibiotic-free pigs were free from MRSA, whereas the resistant bug is often found on conventional pig farms.
The study reveals an important data point on the path from farm to fork, yet the source of the MRSA on meat products is unknown, Smith says. “It’s difficult to figure out.” Transmission of resistant bugs might occur between antibiotic-using and antibiotic-free operations, especially if they’re near each other, or it could come from farm workers themselves. Another possibility is that contamination occurs at processing plants. “Processing plants are supposed to be cleaned between conventional and organic animals,” she says. “But how well does that actually happen?”
In another recent study, researchers from Purdue University in West Lafayette, Indiana, found that beef products from conventionally raised and grass-fed animals were equally likely to be contaminated by antibiotic-resistant Escherichia coli. In a second study by the same group, poultry products labeled “no antibiotics added” carried antibiotic-resistant E. coli and Enterococcus (another bacterium that causes invasive disease in humans), although the microbes were less prevalent than on conventionally raised birds.
“The real question is, where is it coming from, on the farm or post-farm?” says Paul Ebner, a food safety expert who led the Purdue studies. And the biggest question of all, he says, “Is it impacting human health?”
“There’s a tremendous amount of interest in this issue—feeding antibiotics to food animals,” says Ellen Silbergeld, an expert on health and environmental impacts of industrial food animal production at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. “Thus, determining when amending that practice makes a difference is important.”
Read more at Wired Science
Jan 30, 2012
Cutting Off the Oxygen Supply to Serious Diseases
A new family of proteins which regulate the human body’s ‘hypoxic response’ to low levels of oxygen has been discovered by scientists at Barts Cancer Institute at Queen Mary, University of London and The University of Nottingham.
The discovery has been published in the international journal Nature Cell Biology. It marks a significant step towards understanding the complex processes involved in the hypoxic response which, when it malfunctions, can cause and affect the progress of many types of serious disease, including cancer.
The researchers have uncovered a previously unknown level of hypoxic regulation at a molecular level in human cells which could provide a novel pathway for the development of new drug therapeutics to fight disease. The cutting-edge work was funded by the Biotechnology and Biological Sciences Research Council (BBSRC).
Proteins are biochemical compounds which carry out specific duties within the living cell. Every cell in our body has the ability to recognise and respond to changes in the availability of oxygen. The best example of this is when we climb to high altitudes where the air contains less oxygen. The cells recognise the decrease in oxygen via the bloodstream and are able to react, using the ‘hypoxic response’, to produce a protein called EPO. This protein in turn stimulates the body to produce more red blood cells to absorb as much of the reduced levels of oxygen as possible.
This response is essential for a normal healthy physiology but when the hypoxic response in cells malfunctions, diseases like cancer can develop and spread. Cancer cells have a faulty hypoxic response, which means that as the cells multiply they hijack the response to create their own rogue blood supply. In this way the cells can form large tumours. The new blood supply also helps the cancer cells spread to other parts of the body, a process called ‘metastasis’, which is ultimately how cancer kills patients.
The scientists have identified a new family of hypoxic regulator proteins called ‘LIM domain containing proteins’ which function as molecular scaffolds or ‘adapters’, bringing together or bridging two key enzymes in the hypoxic response pathway, namely PHD2 and VHL. Both of these are involved in down-regulating the master regulator protein, hypoxia-inducible factor 1 (HIF1). The research has shown that loss of LIMD1 breaks down the bridge it creates between PHD2 and VHL, and this then enables the master regulator to function out of control and thus contribute to cancer formation.
Read more at Science Daily
Was the Little Ice Age Triggered by Massive Volcanic Eruptions?
A new international study may answer contentious questions about the onset and persistence of Earth's Little Ice Age, a period of widespread cooling that lasted for hundreds of years until the late 19th century.
The study, led by the University of Colorado Boulder with co-authors at the National Center for Atmospheric Research (NCAR) and other organizations, suggests that an unusual, 50-year-long episode of four massive tropical volcanic eruptions triggered the Little Ice Age between 1275 and 1300 A.D. The persistence of cold summers following the eruptions is best explained by a subsequent expansion of sea ice and a related weakening of Atlantic currents, according to computer simulations conducted for the study.
The study, which used analyses of patterns of dead vegetation, ice and sediment core data, and powerful computer climate models, provides new evidence in a longstanding scientific debate over the onset of the Little Ice Age. Scientists have theorized that the Little Ice Age was caused by decreased summer solar radiation, erupting volcanoes that cooled the planet by ejecting sulfates and other aerosol particles that reflected sunlight back into space, or a combination of the two.
"This is the first time anyone has clearly identified the specific onset of the cold times marking the start of the Little Ice Age," says lead author Gifford Miller of the University of Colorado Boulder. "We also have provided an understandable climate feedback system that explains how this cold period could be sustained for a long period of time. If the climate system is hit again and again by cold conditions over a relatively short period -- in this case, from volcanic eruptions -- there appears to be a cumulative cooling effect."
"Our simulations showed that the volcanic eruptions may have had a profound cooling effect," says NCAR scientist Bette Otto-Bliesner, a co-author of the study. "The eruptions could have triggered a chain reaction, affecting sea ice and ocean currents in a way that lowered temperatures for centuries."
The study appears this week in Geophysical Research Letters. The research team includes co-authors from the University of Iceland, the University of California Irvine, and the University of Edinburgh in Scotland. The study was funded in part by the National Science Foundation, NCAR's sponsor, and the Icelandic Science Foundation.
Far-flung regions of ice
Scientific estimates regarding the onset of the Little Ice Age range from the 13th century to the 16th century, but there is little consensus, Miller says. Although the cooling temperatures may have affected places as far away as South America and China, they were particularly evident in northern Europe. Advancing glaciers in mountain valleys destroyed towns, and paintings from the period depict people ice-skating on the Thames River in London and canals in the Netherlands, places that were ice-free before and after the Little Ice Age.
"The dominant way scientists have defined the Little Ice Age is by the expansion of big valley glaciers in the Alps and in Norway," says Miller, a fellow at CU's Institute of Arctic and Alpine Research. "But the time in which European glaciers advanced far enough to demolish villages would have been long after the onset of the cold period."
Miller and his colleagues radiocarbon-dated roughly 150 samples of dead plant material with roots intact, collected from beneath receding margins of ice caps on Baffin Island in the Canadian Arctic. They found a large cluster of "kill dates" between 1275 and 1300 A.D., indicating the plants had been frozen and engulfed by ice during a relatively sudden event.
The team saw a second spike in plant kill dates at about 1450 A.D., indicating the quick onset of a second major cooling event.
To broaden the study, the researchers analyzed sediment cores from a glacial lake linked to the 367-square-mile Langjökull ice cap in the central highlands of Iceland, which reaches nearly a mile high. The annual layers in the cores -- which can be reliably dated by using tephra deposits from known historic volcanic eruptions on Iceland going back more than 1,000 years -- suddenly became thicker in the late 13th century and again in the 15th century due to increased erosion caused by the expansion of the ice cap as the climate cooled.
"That showed us the signal we got from Baffin Island was not just a local signal, it was a North Atlantic signal," Miller says. "This gave us a great deal more confidence that there was a major perturbation to the Northern Hemisphere climate near the end of the 13th century."
The team used the Community Climate System Model, which was developed by scientists at NCAR and the Department of Energy with colleagues at other organizations, to test the effects of volcanic cooling on Arctic sea ice extent and mass. The model, which simulated various sea ice conditions from about 1150 to 1700 A.D., showed several large, closely spaced eruptions could have cooled the Northern Hemisphere enough to trigger the expansion of Arctic sea ice.
Read more at Science Daily
Speed Limits on the Evolution of Enormousness
If you’ve ever wondered whether mammalian evolution has a speed limit, here’s a number for you: 24 million.
That’s how many generations a new study estimates it would take to go from mouse- to elephant-sized while operating on land at the maximum velocity of change. The figure underscores just how special a trait sheer bigness can be.
“Big animals represent the accumulation of evolutionary change, and change takes time,” said evolutionary biologist Alistair Evans of Australia’s Monash University.
Evans and co-authors revisit a fossil record dataset of mammal body size during the last 70 million years, in a study published Jan. 31 in Proceedings of the National Academy of Sciences. The data was originally used to describe the evolutionary growth spurts experienced by mammals soon after dinosaurs ceased to be Earth’s dominant animals.
For the previous 140 million years, mammals had been rat-sized or smaller. With dinosaurs significantly reduced, mammals had a chance to fill newly vacant ecological niches, particularly that of the large-bodied plant-eater.
In this context, size isn’t simply a visible sign of change, but a proxy for modifications to diet, metabolism and body structure. To become big is to change, radically and fundamentally.
“How fast can all of these interconnected changes be made? This to me is the main question that drives why maximum evolutionary rates are fascinating,” said Evans.
In the new study, Evans’ team measures the time taken, in total years and likely number of generations, for 28 mammal lineages to become larger and smaller over the fossil record.
Odd-toed ungulates, including horses and rhinoceroses, had the highest maximum rates of growth. (The largest land mammal ever, the now-extinct Paraceratherium, was part of this group.) Rodents placed in the middle of the pack, while carnivores changed quite slowly, and primates even more slowly.
At the fastest observed terrestrial rates, going from rabbit- to elephant-sized takes roughly 10 million generations, while the aforementioned mouse- to elephant-sized jump takes 24 million generations. In the oceans, however, body size could change twice as fast, perhaps because water’s support of body weight lessened physiological constraints.
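The generations figure implies a strikingly small amount of change per generation. A rough illustration, using assumed body masses (roughly 20 g for a mouse and 5,000 kg for an elephant; these are illustrative values, not the paper's data):

```python
import math

# Illustrative masses in grams (assumptions for this sketch, not from the study)
mouse_g, elephant_g = 20.0, 5_000_000.0
generations = 24_000_000  # generations reported for the mouse-to-elephant jump

# Per-generation multiplicative growth implied by the total size change
factor = elephant_g / mouse_g
per_gen = factor ** (1 / generations)

print(f"total change: {factor:,.0f}x in mass")
print(f"implied growth per generation: {per_gen:.8f}")
```

Even a 250,000-fold total increase in mass works out to a per-generation change of well under a millionth, which is why such transitions demand so many generations.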
The researchers also found that mammals shrink more rapidly than they grow, with size lost 100 times faster than it’s gained. An implicit conservation message: Treasure bigness, because it’s difficult to achieve, and won’t likely happen again so long as humans remain Earth’s dominant species.
Read more at Wired Science
Humans Tamed Horses All Over the World
The domestication of wild horses had a profound effect on human history -- offering nutrition, transportation and a leg up in warfare, among other advantages. But there are still many unanswered questions about when and where our species began its long love affair with horses.
A new genetic study offers some clues. Through the first complete analysis of equine mitochondrial DNA -- a kind of genetic material that is passed directly from mother to offspring -- an international group of scientists was able to trace all modern horses to an ancestor that lived about 140,000 years ago.
The study also discovered that after horse domestication began about 10,000 years ago, horses diverged into at least 18 distinct genetic lines. Those findings suggest that, unlike cows and other animals, horses may have been tamed independently in many different places around Europe and Asia.
The new research could help scientists decode the genetic secrets of modern horse breeds and top racehorses.
“Horse domestication had major cultural, socioeconomic, and even genetic implications for the numerous prehistoric and historic human populations that at different times adopted horse breeding,” said Alessandro Achilli, a geneticist at the University of Perugia in Italy. “Thus, our results will have a major impact in many areas of biological science, ranging from the field of animal and conservation genetics to zoology, veterinary science, paleontology, human genetics and anthropology.”
Cows, sheep, and goats had simple beginnings as livestock, with evidence suggesting that a small number of animals of each species were domesticated in just a few places between about 8,000 and 10,000 years ago. Today, genetic diversity among these creatures remains low.
Horse DNA tells a different story, according to a new paper published today in the Proceedings of the National Academy of Sciences. After analyzing mitochondrial DNA from a wide range of horse breeds across Asia, Europe, the Middle East and the Americas, and then using the known mutation rate of this kind of DNA as a sort of clock, Achilli and colleagues were able to connect all modern horses to a common ancestor that lived between 130,000 and 160,000 years ago. By comparison, modern humans first evolved about 200,000 years ago.
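The "clock" logic described above can be sketched in a few lines. The rate and divergence values below are hypothetical placeholders chosen only to illustrate the calculation; the paper's actual calibration and data differ:

```python
# Hypothetical inputs for illustration (NOT the study's values):
mu = 3.5e-8          # assumed substitutions per site per year in mtDNA
divergence = 0.0098  # assumed average pairwise differences per site

# Two lineages drift apart from their common ancestor at roughly 2*mu
# per year, so the time to the most recent common ancestor is d / (2*mu).
tmrca_years = divergence / (2 * mu)

print(f"estimated time to common ancestor: {tmrca_years:,.0f} years")
```

With these placeholder numbers the estimate lands near the study's 130,000-to-160,000-year window, but the point is the method: divergence divided by twice the mutation rate yields a date.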
Previous research focused only on limited regions of mitochondrial DNA in horses. But by looking at the entire mitochondrial genome, the new study was able to categorize horses into at least 18 different groups that evolved independently.
One possible explanation for those findings is that many different groups of people independently discovered the dramatic benefits of taming wild horses thousands of years ago.
“The very fact that many wild mares have been independently domesticated in different places testifies to how significant horses have been to humankind,” Achilli said. “It means that the ability of taming these animals was badly needed by different groups of people in different regions of Eurasia, from the Asian steppes to Western Europe, since they could generate the food surplus necessary to support the growth of human populations and the capability to expand and adapt into new environments or facilitate transportation.”
Results also showed that horses managed to survive in modern-day Spain and Portugal during a glacial period more than 13,000 years ago, when horses, humans and other mammals disappeared north of the Pyrenees. The area has been shown to be an important refuge during that time for people, who later went on to repopulate Europe when conditions improved. The new study suggests that horses may have followed a similar pattern.
The new findings offer another potential explanation for the origins of domesticated horses, said Alan Outram, an archaeologist at the University of Exeter in the United Kingdom. Horses may have been originally domesticated in one area, he said, such as the central Asian steppe. Then, people could have transported tamed stallions to other cultures in other places, where they were bred with local, wild mares. That scenario would also create multiple distinct female genetic lines.
Read more at Discovery News
Jan 29, 2012
Astronomers Solve Mystery of Vanishing Electrons in Earth's Outer Radiation Belt
UCLA researchers have explained the puzzling disappearing act of energetic electrons in Earth's outer radiation belt, using data collected from a fleet of orbiting spacecraft.
In a paper published Jan. 29 in the advance online edition of the journal Nature Physics, the team shows that the missing electrons are swept away from the planet by a tide of solar wind particles during periods of heightened solar activity.
"This is an important milestone in understanding Earth's space environment," said lead study author Drew Turner, an assistant researcher in the UCLA Department of Earth and Space Sciences and a member of UCLA's Institute for Geophysics and Planetary Physics (IGPP). "We are one step closer towards understanding and predicting space weather phenomena."
During powerful solar events such as coronal mass ejections, parts of the magnetized outer layers of the sun's atmosphere crash into Earth's magnetic field, triggering geomagnetic storms capable of damaging the electronics of orbiting spacecraft. These cosmic squalls have a peculiar effect on Earth's outer radiation belt, a doughnut-shaped region of space filled with electrons so energetic that they move at nearly the speed of light.
"During the onset of a geomagnetic storm, nearly all the electrons trapped within the radiation belt vanish, only to come back with a vengeance a few hours later," said Vassilis Angelopoulos, a UCLA professor of Earth and space sciences and IGPP researcher.
The missing electrons surprised scientists when the trend was first measured in the 1960s by instruments onboard the earliest spacecraft sent into orbit, said study co-author Yuri Shprits, a research geophysicist with the IGPP and the departments of Earth and space sciences, and atmospheric and oceanic sciences.
"It's a puzzling effect," he said. "Oceans on Earth do not suddenly lose most of their water, yet radiation belts filled with electrons can be rapidly depopulated."
Even stranger, the electrons go missing during the peak of a geomagnetic storm, a time when one might expect the radiation belt to be filled with energetic particles because of the extreme bombardment by the solar wind.
Where do the electrons go? This question has remained unresolved since the early 1960s. Some believed the electrons were lost to Earth's atmosphere, while others hypothesized that the electrons were not permanently lost at all but merely temporarily drained of energy so that they appeared absent.
"Our study in 2006 suggested that electrons may be, in fact, lost to the interplanetary medium and decelerated by moving outwards," Shprits said. "However, until recently, there was no definitive proof for this theory."
To resolve the mystery, Turner and his team used data from three networks of orbiting spacecraft positioned at different distances from Earth to catch the escaping electrons in the act. The data show that while a small fraction of the missing energetic electrons fell into the atmosphere, the vast majority were pushed away from the planet, stripped from the radiation belt by the onslaught of solar wind particles during the heightened solar activity that generated the magnetic storm itself.
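The logic of the multi-spacecraft measurement can be sketched in a toy model (this is an illustration of the reasoning, not the study's actual analysis; the function, threshold, and flux values below are invented). Outward escape through the magnetopause erodes the belt from the outside in, so the relative flux drop grows with radial distance, whereas loss to the atmosphere depletes the belt roughly uniformly:

```python
# Illustrative sketch: inferring the dominant loss direction during a
# radiation-belt "dropout" from electron fluxes measured at several
# radial distances (inner -> outer). All names and numbers hypothetical.

def classify_dropout(flux_before, flux_after):
    """Return 'outward' if the fractional flux drop increases with
    distance (belt eroded from the outside), else 'atmospheric'."""
    drops = [1 - after / before
             for before, after in zip(flux_before, flux_after)]
    if all(d_outer >= d_inner
           for d_inner, d_outer in zip(drops, drops[1:])):
        return "outward"
    return "atmospheric"

# Toy dropout in which the outer edge of the belt loses the most flux:
before = [1e6, 8e5, 5e5]   # electrons/(cm^2 s sr), inner to outer
after  = [8e5, 4e5, 1e5]
print(classify_dropout(before, after))  # -> outward
```

In the toy data the fractional losses are 20%, 50% and 80% moving outward, the outside-in signature that in the real study pointed to escape into the interplanetary medium rather than precipitation into the atmosphere.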
A greater understanding of Earth's radiation belts is vital for protecting the satellites we rely on for global positioning, communications and weather monitoring, Turner said. Earth's outer radiation belt is a harsh radiation environment for spacecraft and astronauts; the high-energy electrons can penetrate a spacecraft's shielding and wreak havoc on its delicate electronics. Geomagnetic storms triggered when the oncoming particles smash into Earth's magnetosphere can cause partial or total spacecraft failure.
"While most satellites are designed with some level of radiation protection in mind, spacecraft engineers must rely on approximations and statistics because they lack the data needed to model and predict the behavior of high-energy electrons in the outer radiation belt," Turner said.
During the 2003 "Halloween Storm," more than 30 satellites reported malfunctions, and one was a total loss, said Angelopoulos, a co-author of the current research. As the solar maximum approaches in 2013, marking the sun's peak activity over a roughly 11-year cycle, geomagnetic storms may occur as often as several times per month.
"High-energy electrons can cut down the lifetime of a spacecraft significantly," Turner said. "Satellites that spend a prolonged period within the active radiation belt might stop functioning years early."
While a robotic spacecraft might include multiple redundant circuits to reduce the risk of total failure during a solar event, human explorers in orbit do not have the same luxury. High-energy electrons can punch through astronauts' spacesuits and pose serious health risks, Turner said.
"As a society, we've become incredibly dependent on space-based technology," he said. "Understanding this population of energetic electrons and their extreme variations will help create more accurate models to predict the effect of geomagnetic storms on the radiation belts."
In a paper published Jan. 29 in the advance online edition of the journal Nature Physics, the team shows that the missing electrons are swept away from the planet by a tide of solar wind particles during periods of heightened solar activity.
Read more at Science Daily
That Which Does Not Kill Yeast Makes It Stronger
Cells trying to keep pace with constantly changing environmental conditions need to strike a fine balance between maintaining their genomic integrity and allowing enough genetic flexibility to adapt to inhospitable conditions. In their latest study, researchers at the Stowers Institute for Medical Research were able to show that under stressful conditions yeast genomes become unstable, readily acquiring or losing whole chromosomes to enable rapid adaptation.
The research, published in the January 29, 2012, advance online issue of Nature, demonstrates that stress itself can increase the pace of evolution by increasing the rate of chromosomal instability or aneuploidy. The observation of stress-induced chromosome instability casts the molecular mechanisms driving cellular evolution into a new perspective and may help explain how cancer cells elude the body's natural defense mechanisms or the toxic effects of chemotherapy drugs.
"Cells employ intricate control mechanisms to maintain genomic stability and prevent abnormal chromosome numbers," says the study's leader, Stowers investigator Rong Li, Ph.D. "We found that under stress cellular mechanisms ensuring chromosome transmission fidelity are relaxed to allow the emergence of progeny cells with diverse aneuploid chromosome numbers, producing a population with large genetic variation."
Known as adaptive genetic change, the concept of stress-induced genetic variation first emerged in bacteria and departs from a long-held basic tenet of evolutionary theory, which holds that genetic diversity -- evolution's raw material from which natural selection picks the best choice under any given circumstance -- arises independently of hostile environmental conditions.
"From an evolutionary standpoint it is a very interesting finding," says graduate student and first author Guangbo Chen. "It shows how stress itself can help cells adapt to stress by inducing chromosomal instability."
Aneuploidy is most often associated with cancer and developmental defects and has recently been shown to reduce cellular fitness. Yet, an abnormal number of chromosomes is not necessarily a bad thing. Many wild yeast strains and their commercial cousins used to make bread or brew beer have adapted to their living environs by rejiggering the number of chromosomes they carry. "Euploid cells are optimized to thrive under 'normal' conditions," says Li. "In stressful environments aneuploid cells can quickly gain the upper hand when it comes to finding creative solutions to roadblocks they encounter in their environment."
After Li and her team had shown in an earlier Nature study that aneuploidy can confer a growth advantage on cells when they are exposed to many different types of stress conditions, the Stowers researchers wondered whether stress itself could increase the chromosome segregation error rate.
To find out, Chen exposed yeast cells to different chemicals that induce various types of general stress and assessed the loss of an artificial chromosome. This initial screen revealed that many stress conditions, including oxidative stress, increased the rate of chromosome loss 10- to 20-fold, a rate typically observed when cells are treated with benomyl, a microtubule inhibitor that directly affects chromosome segregation.
The real surprise was radicicol, a drug that induces proteotoxic stress by inhibiting a chaperone protein, recalls Chen. "Even at a concentration that barely slows down growth, radicicol induced extremely high levels of chromosome instability within a very short period of time," he says.
Continued growth of yeast cells in the presence of radicicol led to the emergence of drug-resistant colonies that had acquired an additional copy of chromosome XV. Yeast cells pretreated briefly with radicicol to induce genomic instability also adapted more efficiently to the presence of other drugs including fluconazole, tunicamycin, or benomyl, when compared to euploid cells.
Interestingly, certain chromosome combinations dominated in colonies that were resistant to a specific drug. Fluconazole-resistant colonies typically gained an extra copy of chromosome VIII, tunicamycin-resistant colonies tended to lose chromosome XVI, while a majority of benomyl-resistant colonies got rid of chromosome XII. "This suggested to us that specific karyotypes are associated with resistance to certain drugs," says Chen.
Digging deeper, Chen grew tunicamycin-resistant yeast cells, which had adapted to the presence of the antibiotic by losing one copy of chromosome XVI, under drug-free conditions. Before long, colonies of two distinct sizes emerged. He quickly discovered that the faster-growing colonies had regained the missing chromosome. By returning to a normal chromosome XVI number, these newly arisen euploid cells had acquired a distinctive growth advantage over their aneuploid neighbors. Most importantly, the fast-growing yeast cells were no longer resistant to tunicamycin, clearly linking tunicamycin resistance to the loss of chromosome XVI.
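The selection dynamics described above can be captured in a minimal toy model (the growth rates and population sizes below are invented for illustration and are not the study's measurements): under tunicamycin, cells that have lost chromosome XVI outgrow euploid cells, while in drug-free medium euploid revertants take over again.

```python
# Hypothetical two-population competition model illustrating
# karyotype-dependent fitness under and after drug exposure.

def compete(n_euploid, n_aneuploid, drug_present, generations):
    """Deterministic exponential growth of two subpopulations."""
    if drug_present:
        r_eu, r_an = 0.2, 1.0   # drug suppresses euploid growth
    else:
        r_eu, r_an = 1.0, 0.8   # euploid cells win without the drug
    for _ in range(generations):
        n_euploid *= 1 + r_eu
        n_aneuploid *= 1 + r_an
    return n_euploid, n_aneuploid

# With the drug, a rare chromosome-XVI-loss cell overtakes the culture...
eu, an = compete(100, 1, drug_present=True, generations=20)
assert an > eu
# ...and after the drug is withdrawn, euploid revertants dominate again.
eu, an = compete(1, 100, drug_present=False, generations=50)
assert eu > an
```

The point of the sketch is that no directed mutation is needed: stress-elevated missegregation generates karyotype diversity, and ordinary selection then explains both the rise of the aneuploid clones under drug and their replacement by euploid revertants once the drug is removed.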
Read more at Science Daily