Scientists at Duke University Medical Center have shown the ability to turn scar tissue that forms after a heart attack into heart muscle cells using a new process that eliminates the need for stem cell transplant.
The study, published online April 26 in the journal Circulation Research, used molecules called microRNAs to trigger the cardiac tissue conversion in a lab dish and, for the first time, in a living mouse, demonstrating the potential of a simpler process for tissue regeneration.
If additional studies confirm the approach in human cells, it could lead to a new way to treat many of the 23 million people worldwide who suffer from heart failure, which is often caused by scar tissue that develops after a heart attack. The approach could also have benefits beyond heart disease.
"This is a significant finding with many therapeutic implications," said Victor J. Dzau, MD, a senior author on the study who is James B. Duke professor of medicine and chancellor of health affairs at Duke University. "If you can do this in the heart, you can do it in the brain, the kidneys, and other tissues. This is a whole new way of regenerating tissue."
To initiate the regeneration, Dzau's team at Duke used microRNAs, which are molecules that serve as master regulators controlling the activity of multiple genes. Tailored in a specific combination, the microRNAs were delivered into scar tissue cells called fibroblasts, which develop after a heart attack and impair the organ's ability to pump blood.
Once deployed, the microRNAs reprogrammed fibroblasts to become cells resembling the cardiomyocytes that make up heart muscle. The Duke team not only proved this concept in the laboratory, but also demonstrated that the cell conversion could occur inside the body of a mouse -- a major requirement for regenerative medicine to become a potential therapy.
"This is one of the exciting things about our study," said Maria Mirotsou, PhD, assistant professor of cardiology at Duke and a senior author of the study. "We were able to achieve this tissue conversion in the heart with these microRNAs, which may be more practical for direct delivery into cells and allow for possible development of therapies without using genetic methods or transplantation of stem cells."
The researchers said using microRNA for tissue regeneration has several potential advantages over genetic methods or transplantation of stem cells, which have been difficult to manage inside the body. Notably, the microRNA process eliminates technical problems such as genetic alterations, while also avoiding the ethical dilemmas posed by stem cells.
"It's an exciting stage for reprogramming science," said Tilanthi M. Jayawardena, PhD, first author of the study. "It's a very young field, and we're all learning what it means to switch a cell's fate. We believe we've uncovered a way for it to be done, and that it has a lot of potential."
The approach will now be tested in larger animals. Dzau said therapies could be developed within a decade if additional studies advance in larger animals and humans.
"We have proven the concept," Dzau said. "This is the very early stage, and we have only shown that is it doable in an animal model. Although that's a very big step, we're not there yet for humans."
Read more at Science Daily
Apr 28, 2012
New Particle Discovered at CERN
Physicists from the University of Zurich have discovered a previously unknown particle composed of three quarks in the Large Hadron Collider (LHC) particle accelerator. A new baryon was thus detected at the LHC for the first time. The baryon, known as Xi_b^*, confirms fundamental assumptions of physics regarding the binding of quarks.
In particle physics, the baryon family refers to particles that are made up of three quarks. Quarks form a group of six particles that differ in their masses and charges. The two lightest quarks, the so-called "up" and "down" quarks, form the two building blocks of atomic nuclei, protons and neutrons. All baryons that are composed of the three lightest quarks ("up," "down" and "strange" quarks) are known. Only very few baryons with heavy quarks have been observed to date. They can only be generated artificially in particle accelerators as they are heavy and very unstable.
In the course of proton collisions in the LHC at CERN, physicists Claude Amsler, Vincenzo Chiochia and Ernest Aguiló from the University of Zurich's Physics Institute managed to detect a baryon with one light and two heavy quarks. The particle Xi_b^* comprises one "up," one "strange" and one "bottom" quark (usb), is electrically neutral and has a spin of 3/2 (1.5). Its mass is comparable to that of a lithium atom. The new discovery means that two of the three baryons predicted in the usb composition by theory have now been observed.
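As a quick sanity check (using the standard quark charge values, which are not given in the article), summing the constituent quark charges reproduces the neutral charge reported for the usb combination:

```python
from fractions import Fraction

# Standard electric charges of the six quarks, in units of the elementary charge.
QUARK_CHARGE = {
    "up": Fraction(2, 3),
    "down": Fraction(-1, 3),
    "strange": Fraction(-1, 3),
    "charm": Fraction(2, 3),
    "bottom": Fraction(-1, 3),
    "top": Fraction(2, 3),
}

def baryon_charge(quarks):
    """Sum the constituent quark charges of a three-quark baryon."""
    return sum(QUARK_CHARGE[q] for q in quarks)

# The Xi_b^* combination: up + strange + bottom = 2/3 - 1/3 - 1/3 = 0
print(baryon_charge(["up", "strange", "bottom"]))  # → 0
# Cross-check with the proton (uud): 2/3 + 2/3 - 1/3 = 1
print(baryon_charge(["up", "up", "down"]))  # → 1
```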
The discovery was based on data gathered in the CMS detector, which the University of Zurich was involved in developing. The new particle cannot be detected directly as it is too unstable to be registered by the detector. However, Xi_b^* breaks up in a known cascade of decay products. Ernest Aguiló, a postdoctoral researcher in Professor Amsler's group, identified traces of the respective decay products in the measurement data and was able to reconstruct the decay cascades starting from Xi_b^* decays.
The calculations are based on data from proton-proton collisions at an energy of seven teraelectronvolts (TeV), collected by the CMS detector between April and November 2011. A total of 21 Xi_b^* baryon decays were identified -- enough to rule out a mere statistical fluctuation.
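To see why a couple of dozen events can suffice, here is a rough, hypothetical sketch (the article gives the event count but not the expected background, so the background value below is invented purely for illustration): the chance that background alone produces at least the observed count is a Poisson tail probability, which can be converted to an equivalent Gaussian significance.

```python
import math
from statistics import NormalDist

def poisson_significance(n_obs, b_expected):
    """One-sided Gaussian significance of observing n_obs events when
    only b_expected background events are expected (Poisson counting)."""
    # p-value: probability that background alone yields >= n_obs events
    p = 1.0 - sum(
        math.exp(-b_expected) * b_expected**k / math.factorial(k)
        for k in range(n_obs)
    )
    # Convert the p-value to an equivalent number of standard deviations
    return NormalDist().inv_cdf(1.0 - p)

# Hypothetical numbers: if only ~5 background events were expected,
# observing 21 would lie far out in the Poisson tail.
print(round(poisson_significance(21, 5.0), 1))
```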
The discovery of the new particle confirms the theory of how quarks bind and therefore helps to understand the strong interaction, one of the four basic forces of physics which determines the structure of matter.
The University of Zurich is involved in the LHC at CERN with three research groups. Professor Amsler's and Professor Chiochia's groups are working on the CMS experiment; Professor Straumann's group is involved in the LHCb experiment.
Read more at Science Daily
Mummy Suffered Rare and Painful Disease
Around 2,900 years ago, an ancient Egyptian man, likely in his 20s, passed away after suffering from a rare, cancerlike disease that may also have left him with a type of diabetes.
When he died he was mummified, following the procedure of the time. The embalmers removed his brain (through the nose it appears), poured resin-like fluid into his head and pelvis, took out some of his organs and inserted four linen “packets” into his body. At some point the mummy was transferred to the 2,300 year-old sarcophagus of a woman named Kareset, an artifact that is now in the Archaeological Museum in Zagreb, Croatia.
The mummy transfer may have been the work of 19th-century antiquity traders keen on selling Kareset's coffin but wanting to have a mummy inside to raise the price.
Until now, scientists had assumed a female mummy was inside the Egyptian coffin. The new research reveals not only that the body does not belong to Kareset, but also that the male mummy inside was sick. His body showed telltale signs that he suffered from Hand-Schüller-Christian disease, an enigmatic condition in which Langerhans cells, a type of immune cell found in the skin, multiply rapidly.
"They tend to replace normal structure of the bone and all other soft tissues," Dr. Mislav ?avka, a medical doctor at the University of Zagreb who is one of the study's leaders, said in an interview with LiveScience. "We could say it is one sort of cancer."
Scientists still aren't sure what causes the disease, but it is very rare, affecting about one in 560,000 young adults, more often males. "In ancient times it was lethal, always," said Čavka, who added that today it can be treated.
Čavka and colleagues examined the mummy using X-rays, a CT scan and a newly developed technique for magnetic resonance imaging (MRI) scans.
The disease seems to have taken a terrible toll on the ancient man's body, with images revealing it destroyed parts of his skeleton, leaving lytic lesions throughout his spine and skull. The scans also showed what looks like a giant hole in his skull's frontal-parietal bone, and destruction of a section of one of his eye sockets, known as the "orbital wall."
The mummy-embalming procedure may have worsened some of the disease-caused damage, Čavka said.
Even so, the effects of the disease would have been "very, very painful," and would have affected the man's appearance, particularly in the final stage, Čavka told LiveScience.
In addition, it may have led him to suffer from a form of diabetes. The scans show that his sella turcica, part of the skull that holds the pituitary gland, is shallow, which suggests that this gland was also affected by the disease.
"That could have lead to diabetes insipidus," the researchers write in their paper. The condition would have made it difficult for his kidneys to conserve water, something that would have worsened the man's predicament. "Probably he was all the time thirsty, hungry and had to urinate," ?avka said.
Perhaps cold comfort for him now, but his death does offer clues to the ancient world. Scientists have long debated whether or not cancer was common in ancient times.
Read more at Discovery News
Oldest Human Ancestor Found in Lake Sludge
After two decades of examining a microscopic algae-eater that lives in a lake in Norway, scientists have declared it to be one of the world's oldest living organisms and humanity's remotest relative.
The elusive, single-cell creature evolved about a billion years ago and did not fit in any of the known categories of living organisms - it was not an animal, plant, parasite, fungus or alga, they say.
"We have found an unknown branch of the tree of life that lives in this lake. It is unique," says University of Oslo researcher Dr Kamran Shalchian-Tabrizi.
"So far we know of no other group of organisms that descends from closer to the roots of the tree of life than this species", which has been declared a new genus called Collodictyon.
Scientists believe the discovery may provide insight into what life looked like on Earth hundreds of millions of years ago.
Collodictyon lives in the sludge of a small lake called Ås, 30 kilometres south of Oslo.
It has four flagella - tail-like propellers it uses to move around - and at 30 to 50 micrometers long it can only be seen with a microscope.
Like plants, fungi, algae and animals, including humans, Collodictyon are members of the eukaryote family that possess cell nuclei enclosed by membranes, unlike bacteria.
Not social creatures
Using the characteristics of Collodictyon, scientists can now infer what prehistoric eukaryotes looked like, says Tabrizi - probably a single-cell organism with finger-like structures that it used to catch microscopic prey.
"They are not sociable creatures," says co-researcher Professor Dag Klaveness, who bred millions of the tiny organisms for the study.
"They flourish best alone. Once they have eaten the food, cannibalism is the order of the day."
They have not been found anywhere but in Lake Ås.
Read more at Discovery News
Apr 27, 2012
Rats Have the Best Bite
If rats could high five each other and say, "I'm number one!" they might have reason to do so now, since a new study has just named them the best biters in the rodent world.
The findings, published in the latest PLoS ONE, reveal that rats bite more effectively and efficiently than other rodents do. They have evolved to gnaw with their front teeth and chew with their back teeth better than any other rodent.
That's one reason why these animals are so successful worldwide.
Co-author Nathan Jeffery of the University of Liverpool's Institute of Ageing and Chronic Disease said in a press release, "Mice and rats belong to a group of rodents called the myomorphs, which are amongst the most successful of all mammals. With over 1000 species, comprising nearly a quarter of all known mammal species, they live in a wide variety of habitats on every continent, except Antarctica."
Jeffery and his colleagues designed a computer model to simulate the bites of rodents, plugging in data about squirrels, guinea pigs and more. Rat muscles turn out to have evolved the most efficient biting mechanisms, with mice a close second. Some rodents gnaw really well. Others specialize in chewing. Rats do it all.
Co-author Philip Cox explained, "Since the Eocene era, approximately 56 to 34 million years ago, rodents have been adapting their skulls and jaw muscles in, what we might call an evolutionary race. A group of rodents called sciuromorphs, which includes the squirrel, began to specialise in gnawing adaptations, and the hystricomorphs, including the guinea pig, chose chewing. The myomorphs, the rats and the mice, however, adapted to both chewing and gnawing."
Read more at Discovery News
Tiny Shark Has Glowing Belly
Tiny sharks about the size of a human hand have a superpower of sorts: their bellies glow, according to new research that also showed these smalleye pygmy sharks use the glow to hide from predators lurking below.
Scientists had proposed the smalleye pygmy shark (Squaliolus aliae) sported light-emitting organs called photophores for use in camouflage, but that was never really tested, said study researcher Julien Claes of the Université catholique de Louvain in Belgium. "It wasn't even known if these organs were really functional, able to produce light," Claes added.
The small shark, which reaches a maximum length of just 8.7 inches (22 centimeters), lives well below the water surface in the Indian and western Pacific Oceans. The new research, detailed this week in The Journal of Experimental Biology, suggests their glowing bellies (a type of bioluminescence) would replace the downwelling light from the sun, or the moon and stars, that is otherwise absorbed by their bodies.
For the study, Claes and his colleague Jérôme Mallefet, along with Hsuan-Ching Ho from the National Dong Hwa University, Taiwan, captured 27 adult smalleye pygmy sharks off the coast of Taiwan and brought them to the National Museum of Marine Biology and Aquarium. In the lab, the scientists took skin samples from the sharks and tested how they responded to various chemicals known to trigger biological processes such as light production. Sure enough, melatonin caused the shark's skin to glow; neurotransmitters known to regulate light production in deep-sea bony fish had no effect on the pygmy's skin. When the team added the hormone prolactin to the samples, the glow faded.
In lantern sharks, prolactin triggers 30-minute-long bursts of light, which the sharks likely use for various means of communication. Lantern sharks use melatonin to trigger a constant belly glow used in camouflage.
The difference between the two sharks, with the pygmy shark only able to produce a constant glow, Claes said, suggests the smalleye pygmy relies on its glow for camouflage, but not communication as the lantern shark does.
The researchers also suggest both sharks evolved this ability from an ancient organism that would have used these hormones to change their skin pigmentation from light to dark (or vice versa) as a form of camouflage. So while melatonin would've lightened the skin of this predecessor, prolactin would have darkened it. Today, these hormones would work as a type of pigment shade, either moving the pigment cells in front of the light-emitting organs (covering them up) or retracting them to expose the glow. Essentially, the sharks now regulate their bioluminescence by changing the degree of pigmentation covering the photophores.
Read more at Discovery News
Why Pygmies of Africa Are So Short
Why the Pygmies of West Africa have such short stature, while neighboring groups don't, has been somewhat of a mystery. Now new research suggests unique changes in the Pygmies' genome have both produced adaptations for forest living and kept them short.
Researchers analyzed the genomes, the "building code" that directs how an organism is put together, of Western African Pygmies in Cameroon, whose men average 4 feet, 11 inches tall, and compared them with their neighboring relatives, the Bantus, who average 5 feet, 6 inches, to see whether these differences were genetic or a factor of their environment.
"There's been a long-standing debate about why Pygmies are so short and whether it is an adaptation to living in a tropical environment," study researcher Sarah Tishkoff of the University of Pennsylvania said in a statement. "Our findings are telling us that the genetic basis of complex traits like height may be very different in globally diverse populations."
Short population
The Pygmy and Bantu populations separated genetically about 60,000 to 70,000 years ago; then roughly 4,000 to 5,000 years ago, they started interbreeding.
Some Pygmy women, after having sex with a Bantu man, have given birth to half-Bantu babies, a phenomenon that integrates Bantu genes into the Pygmy population. These women and their offspring stay in the Pygmy village, and so don't mix with the Bantu. However, offspring resulting from mating between a Pygmy man and Bantu woman are rare, so the Bantus don't have many Pygmy genes.
The researchers analyzed the genomes of 67 Pygmies and 58 Bantus for changes that would provide information about an individual's ancestry. These changes are small, nonharmful misspellings in the code (the chemical bases A, C, T and G) that makes up the genome. For example, a Bantu might have an A where a Pygmy has a T.
By analyzing large numbers of these changes, researchers can tell how much of an individual's genome is Bantu and how much is Pygmy.
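The counting idea can be illustrated with a deliberately simplified toy sketch (the study used far more sophisticated statistical methods, and real markers are rarely fully distinct between populations; the marker names and letters below are invented): if a set of markers were fixed for different letters in the two populations, the ancestry fraction would fall out of simple matching.

```python
# Hypothetical reference alleles at four markers, one letter per population.
BANTU_ALLELES = {"m1": "A", "m2": "C", "m3": "G", "m4": "T"}
PYGMY_ALLELES = {"m1": "T", "m2": "G", "m3": "A", "m4": "C"}

def bantu_ancestry_fraction(individual):
    """Fraction of an individual's markers matching the Bantu reference."""
    matches = sum(
        1 for marker, base in individual.items()
        if base == BANTU_ALLELES[marker]
    )
    return matches / len(individual)

# An individual carrying two Bantu-type and two Pygmy-type letters:
person = {"m1": "A", "m2": "C", "m3": "A", "m4": "C"}
print(bantu_ancestry_fraction(person))  # → 0.5
```

With tens of thousands of real markers rather than four, such estimates become precise enough to correlate ancestry with traits like height.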
Selected for stature
The researchers also used this letter-change data to look for areas of the genome associated with height and those that were "naturally selected" for — parts of the genome that are passed down through the generations because they provide some sort of survival advantage.
The data revealed height had a genetic component related to Bantu ancestry: The more Bantu ancestry an individual from the Pygmy tribe had, the taller that individual tended to be. One part of the genome, on chromosome 3, was especially important in this trait, the researchers said.
"We kept seeing a lot of them [these single-letter differences] highlight that region in chromosome 3," Tishkoff said. "It just seemed like a hot spot for selection and for very high differentiation and, as it turns out, very strong association with height as well."
Read more at Discovery News
How Do You Hack Into a Phone?
This week, Rupert Murdoch testified before a British judicial inquiry on media ethics that he was unaware that his employees at the now-defunct British tabloid News of the World allegedly hacked into an estimated 4,000 victims’ voicemail systems. The hacking occurred between 2003 and 2007, and as the investigation widens to other news-gathering organizations, that number may continue to rise.
Murdoch has admitted that he didn't dig deeper into the problem when evidence of the hacking began to surface in 2006. It wasn't until the scandal exploded last summer with proof that journalists had tapped into the cellphone of a kidnapped girl, who had been murdered, that he finally shut down the paper, which had been in business for 168 years.
So, just how did these reporters-turned-hackers break into the cellphones and voice mail boxes of celebrities, politicians and ordinary citizens?
They likely used the low-tech approach of merely guessing someone's four-digit voice mail PIN. To obtain that PIN, some reporters may have employed pretexting (or "blagging" in British parlance), which involves contacting mobile operators and impersonating victims to obtain their account information.
The invention of caller ID more than 20 years ago also opened up another common avenue for phone hacking: caller ID spoofing.
First demonstrated in the '90s, caller ID spoofing allows an unscrupulous sort to choose whatever number he would like for his caller ID. For instance, pioneer caller ID spoofer Lucky225 used it to display 867-5309, the number from "867-5309/Jenny" by the bygone pop band Tommy Tutone.
If you call your own cellphone number from your cellphone, the mobile service provider will typically route you straight to your voice mail. By using caller ID spoofing apps and services such as SpoofCard, voice mail thieves can set their outgoing caller ID to a victim's own number, dial the victim's phone number and land directly in the voice mail. All they have to do next is guess the four-digit PIN.
The easiest way to hack a smartphone without having physical access to it is much more reminiscent of computer hacking, says Darren Kitchen, hacking expert and co-host of the tech show, "Hak5."
“Send an enticing link via SMS, email, Twitter; if the target follows from their phone, you've got a chance at using one of many remote exploits for iPhone and Android to install a rootkit,” which is software designed to grant internal access to a device, Kitchen explained.
“From there, you can have phone book data, voice mail, text message logs, browser history or anything covertly sent to you.”
Which smartphone you have does make a difference, since some operating systems are more vulnerable to hacks than others.
“Older versions of Android are easiest to hack,” Kitchen said. “Recent versions of iOS [are easy to hack] too, though both Apple and Google have been quick to release patches.”
Read more at Discovery News
How Long Has Titan Been a Hazy Methane Moon?
Saturn's moon Titan is one of the most scientifically interesting spots in the solar system. The second-largest moon in the solar system after Jupiter's Ganymede, and bigger than the planet Mercury, it is shrouded beneath a thick, smoggy, methane-rich atmosphere that creates a greenhouse effect and constantly churns out complex hydrocarbons that rain down on the surface.
Now, scientists have figured out just how long Titan has had its signature hazy atmosphere.
Although Titan was discovered in 1655 by Dutch astronomer Christiaan Huygens, it wasn't until 2004 that astronomers managed to peer through the dense atmosphere to the moon underneath. The Cassini spacecraft in orbit around Saturn aimed its instruments at Titan, and in 2005 the small Huygens probe actually landed and gathered data from the surface.
Titan's atmosphere is dense -- its surface pressure is about 60 percent higher than Earth's, so standing on Titan's surface would feel like standing under 20 feet of water. The atmosphere is mostly nitrogen with just a touch of methane. The methane on Titan acts a lot like water vapor on Earth -- it creates a greenhouse effect that keeps the moon's temperature steady at about -180 degrees Celsius (-292 degrees Fahrenheit).
The moon has a methane cycle similar to the water cycle on Earth with liquid methane and ethane raining down and forming lakes on its surface. Understanding Titan's methane cycle could help scientists understand Earth's water cycles, and shed some light on how these two worlds are so similar yet so different -- you'd definitely not like to get caught out in the methane rain!
But what's really interesting about Titan's atmosphere is the short life span of methane. Made of one carbon atom joined to four hydrogen atoms, methane breaks down readily when exposed to direct sunlight and is converted into more complex molecules and particles. The reaction generates the hydrocarbons that rain down and form dunes of organic material.
Scientists know how long it takes for methane to break down, and by measuring the concentration of heavier methane isotopes it's possible to figure out how long Titan has been enveloped in its organic atmosphere.
Isotopes are versions of an element with different weights; in the case of carbon, carbon-13 is rarer and heavier than its more common sibling carbon-12. Methane occasionally forms with the heavier carbon-13, making that molecule heavier than methane built around carbon-12. The lighter methane is broken down by UV light faster, meaning the relative concentration of methane made with carbon-13 increases over time.
By modeling how the concentration of heavy methane has changed over time, scientists have been able to determine just how long Titan's "chemical factory" atmosphere has been running.
The baseline age for Titan's atmosphere going into the study was 1.6 billion years, about a third of the age of the moon itself. If it turned out that methane escapes from the top of the atmosphere, it would be younger. But if evidence suggested methane is replenished with fresh isotopes from the surface, through a theoretical subsurface ocean of water and ammonia for example, then the atmosphere would be much older. A buildup of methane on the surface lakes and in the atmosphere would be another sign of an older atmosphere.
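As a rough illustration of that dating logic -- not the study's actual model -- the enrichment of heavy methane can be treated as a simple exponential divergence between the two forms of methane, with no resupply from below. The rate constants and enrichment factor here are invented for illustration:

```python
import math

# Toy model of isotopic enrichment in Titan's methane.
# Both forms photolyze exponentially, the lighter one slightly faster,
# so the heavy-to-light ratio grows as R(t) = R0 * exp((K12 - K13) * t).
K12 = 1.0e-9    # photolysis rate of carbon-12 methane, per year (hypothetical)
K13 = 0.997e-9  # carbon-13 methane breaks down slightly slower (hypothetical)

def atmosphere_age(enrichment):
    """Years needed for the heavy-to-light methane ratio to grow by the
    given factor, assuming no methane resupply from the surface."""
    return math.log(enrichment) / (K12 - K13)

# If the heavy-to-light ratio has grown 0.5% since the atmosphere formed:
age = atmosphere_age(1.005)
print(f"{age:.2e} years")  # roughly 1.7 billion years with these made-up rates
```

Replenishment from a subsurface reservoir would reset the ratio and make any such estimate an underestimate of the atmosphere's true age, which is why the escape-versus-resupply question matters so much.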
Read more at Discovery News
Apr 26, 2012
Genes Shed Light On Spread of Agriculture in Stone Age Europe
One of the most debated developments in human history is the transition from hunter-gatherer to agricultural societies. A recent issue of Science presents the genetic findings of a Swedish-Danish research team, which show that agriculture spread to Northern Europe via migration from Southern Europe.
"We have been able to show that the genetic variation of today's Europeans was strongly affected by immigrant Stone Age farmers, though a number of hunter-gatherer genes remain," says Assistant Professor Anders Götherström of the Evolutionary Biology Centre, who, along with Assistant Professor Mattias Jakobsson, co-led the study, a collaboration with Stockholm University and the University of Copenhagen.
"What is interesting and surprising is that Stone Age farmers and hunter-gatherers from the same time had entirely different genetic backgrounds and lived side by side for more than a thousand years, to finally interbreed," Mattias Jakobsson says.
Agriculture developed in the Middle East about 11,000 years ago and by about 5,000 years ago had reached most of Continental Europe. How the spread of agriculture progressed and how it affected the people living in Europe have been debated for almost 100 years. Earlier studies were largely based on small amounts of genetic data and were therefore unable to provide unequivocal answers. Was agriculture an idea that spread across Europe or a technique that a group of migrants took with them to different regions of the continent?
"Many attempts, including using genetics, have been made to come to terms with the problem since the significance of the spread of agriculture was established almost 100 years ago," Anders Götherström says. "Our success in carrying out this study depended on access to good material, modern laboratory methods and a high level of analytical expertise."
In the study, the research team used advanced DNA techniques to characterise almost 250 million base pairs from four skeletons of humans who lived during the Stone Age, 5,000 years ago. Simply ensuring that the DNA obtained from archaeological material is truly old and uncontaminated by modern DNA requires the use of advanced molecular and statistical methods.
The study involved thousands of genetic markers from the four Stone Age individuals, of which three were hunter-gatherers and one was from an agricultural culture. All of the archaeological data shows that the Stone Age farmer was representative of his time and group and was born and raised near the place of his burial. The researchers compared their findings with a large amount of genetic data from living individuals.
"The Stone Age farmer's genetic profile matched that of people currently living in the vicinity of the Mediterranean, on Cyprus, for example," says Pontus Skoglund, a doctoral student who developed new analytical methods used in the study. "The three hunter-gatherers from the same time most resembled Northern Europeans, without exactly matching any particular group."
Accordingly, the study strongly supports the thesis that the agricultural revolution was driven by people who migrated from Southern Europe. That they lived side by side with the hunter-gatherers for many generations, to eventually interbreed, explains the patterns of genetic variation that characterise present-day Europeans.
"The process appears in the end to have had the result that nobody today has the same genetic profile as the original hunter-gatherers, although they continue to be represented in the genetic heritage of today's Europeans," Pontus Skoglund says.
Jan Storå, researcher at Stockholm University, says the results are extremely exciting for archaeology in general and research into the Stone Age in particular.
Read more at Discovery News
Is This the Perfect Face?
What would a scientifically perfect face look like?
England thinks it would mirror Florence Colgate's. The 18-year-old student recently won a competition to find Britain's most naturally beautiful face. Although the final test came down to an opinion poll, science backs up Colgate's perfection, according to the Daily Mail.
Her "flawless proportions" represent the optimum ratio between eyes, mouth, forehead and chin, the newspaper reports. For example, it's believed that in the most attractive female faces, the distance between the pupils is just under half the width of the face; Colgate's ratio is 44 percent. The distance between the eyes and mouth should be just over a third of the distance from hairline to chin; Colgate's ratio is 32.8 percent.
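Both "ideal" proportions are simple ratios of facial measurements. As an illustration with made-up measurements (any consistent unit works):

```python
# Compute the two proportions cited for "ideal" faces from raw measurements.
# All the numbers passed in below are hypothetical, for illustration only.

def facial_ratios(pupil_distance, face_width, eye_to_mouth, hairline_to_chin):
    """Return the two ratios discussed: pupil spacing to face width,
    and eye-to-mouth distance to total face height."""
    return pupil_distance / face_width, eye_to_mouth / hairline_to_chin

width_ratio, height_ratio = facial_ratios(6.2, 14.0, 6.0, 18.0)
print(f"{width_ratio:.1%}, {height_ratio:.1%}")  # 44.3%, 33.3%
```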
Scientists have also linked symmetry and beauty, and Colgate's face is almost perfectly symmetrical.
Leonardo da Vinci may have been the first to apply science and math to facial beauty, the Daily Mail points out.
"Symmetry appears to be a very important cue to attractiveness," Carmen Lefèvre, PhD student at the University of St. Andrews' Perception Lab in the School of Psychology, told KentOnline. "Although we don't realize it in everyday interactions, in most people's faces the right and left half of the face are actually quite different. For example, the size of the eyes is different or the nose is slightly bent to one side. An explanation why symmetry is important is that it may be a signal of health and good genes."
The "Lorraine: Naked" competition judged contestants without makeup or plastic surgery.
Read more at Discovery News
Belief in God, Critical Thinking Butt Heads
When pushed to think in a more rational way, people experience a dip in their religious beliefs, found a new study. Simply looking at pictures of Rodin's sculpture "The Thinker," for example, was enough to make people less likely to agree with statements like, "Nothing is as important to me as serving God as best I know how."
The effects were subtle, and encouraging critical thought is unlikely to destroy anyone's faith. But the findings suggest that rational analysis interacts with gut instinct in the brain to help distinguish between people who believe fully in God and those who abandon religion.
"This could help people take a broader approach to debates about whether religion is true or not, and realize that subtle cognitive differences might be influencing where people end up on that debate," said Will Gervais, a social psychologist at the University of British Columbia, Vancouver, who added that understanding why some people are more religious than others doesn't say anything about who's right.
Nor is rational thinking the only factor that influences religious belief.
"It's not the case that the Pope walked into the lab and Richard Dawkins walked out," he said. "I think this study tells us one factor that is implicated in whether or not people are believers, but it is just one factor out of many."
While most of the world's population believes in God or gods, hundreds of millions of people do not. To explain how intelligent people might believe in concepts that lack proof, researchers have previously theorized that our brains have two distinct modes of thought. One uses rational analysis to think things through. The other relies on intuition to form beliefs and gut feelings.
With that theory in mind, Gervais and colleague Ara Norenzayan challenged a diverse group of people to answer three questions whose answers were likely to differ depending on whether they reasoned out the answer or went with their gut.
For example, one question asked, "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?" Without thinking, many people guess 10 cents, even though a little bit of quick math shows that the correct answer is five cents.
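The algebra behind the trap is a one-liner: if the ball costs x, then x + (x + 1.00) = 1.10, so x = 0.05. A quick check, working in whole cents to avoid floating-point noise:

```python
# The bat-and-ball problem: ball + bat = 110 cents, bat = ball + 100 cents,
# so 2 * ball + 100 = 110 and the ball costs 5 cents, not the intuitive 10.

def ball_price_cents(total_cents=110, difference_cents=100):
    """Solve ball + (ball + difference) = total, working in whole cents."""
    return (total_cents - difference_cents) // 2

print(ball_price_cents())  # 5 -> the ball costs five cents
print(10 + (10 + 100))     # 120 -> the gut answer of 10 cents overshoots
```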
People in the experiment who stepped back and thought analytically before answering tended to hold weaker religious beliefs, the researchers report today in the journal Science, suggesting a connection between rational thinking and a lack of faith.
But does the tendency to think rationally cause religious doubt, or does it go the other way? To find out, the researchers conducted a series of experiments with hundreds of people that triggered them to think analytically before answering faith-themed questions about things like their belief in God and the role that faith plays in their decision-making.
In one experiment, participants looked at artwork portraying either a thinker or a man throwing a discus. In another, in which people rearranged letters and words to form sentences, they saw either thinking-related words or neutral words. Yet another experiment asked people to read the religious-beliefs survey in a font that was either easy or hard to decipher.
No matter how the researchers primed the brain to think critically, people's responses were less strongly religious compared to the responses of people who were not put in a rational frame of mind. The findings, Gervais said, suggest that the rational brain is capable of undermining the intuitive brain in slight ways when it comes to faith.
Because our minds and bodies are so closely connected, it's not surprising that religious thought is linked with certain kinds of brain activities, said John Hare, a philosophical theologian at Yale Divinity School in New Haven, Conn. But discoveries like these say nothing about the existence of God or anything else that is outside of the mind.
Read more at Discovery News
What is the Oldest Skeleton in the Earth’s Closet?
A small pit with long, thin grooves radiating away from it like spokes on a wheel, this faint fossil may not take your breath away. But it should. It represents one of the most monumental moments in the history of life: the evolution of a skeleton.
Called Coronacollina acula, the organism that left this impression lived on the seafloor between 560 million and 550 million years ago. Bearing a passing resemblance to primitive sponges, its thimble-shaped body left its mark in the rock as a depression a few millimeters to two centimeters deep.
What really made Coronacollina special, though, were its needle-like spicules, which cast the long, thin grooves in the rock. The animal most likely used these spicules to hold itself in place -- something no known creature had ever done before.
Each animal sported at least four of these novel support struts, each one between 20 and 40 centimeters long. One end of each spicule attached to the thimble-shaped body; the other end was embedded in the seafloor.
A team of paleontologists from the University of California, Riverside (UCR), who describe their results in the April issue of the journal Geology, originally discovered these inventive creatures in the Ediacara Hills of South Australia.
The Ediacara Hills lend their name to the Ediacaran fauna, a famous trove of unusual fossil animals that scientists had long assumed were all soft-bodied. The discovery of Coronacollina acula among them turns that assumption on its head.
Until now the understanding had been that hard parts, which allowed animals to grow larger and protect against predators, did not show up until long after Coronacollina’s reign. The rise of animals with skeletons was thought to have coincided instead with a rapid diversification of life that took place during a later period of time known as the Cambrian, which lasted from 542 million to 488 million years ago.
The early debut of Coronacollina acula not only signals that the origin of skeletons well preceded the Cambrian explosion but also anchors Ediacaran animals within the evolutionary lineage of animals as we know them, explains UCR paleontologist Mary Droser, whose research team made the discovery.
“The fate of the earliest Ediacaran animals has been a subject of debate, with many suggesting that they all went extinct just before the Cambrian,” Droser said in a press release. “Our discovery shows that they did not.”
Read more at Discovery News
Apr 25, 2012
Evolution On an Island: Fossils Show Secret for a Longer Life
ICP researchers have discovered some of the first fossil-based evidence supporting the evolutionary theory of aging, which predicts that species evolving in low-mortality, resource-limited ecosystems tend to be more long-lived.
The study shows that the tooth height of endemic insular mammals is an indicator of longevity, and questions the use of this morphological characteristic as an exclusive indicator to infer the diet of fossil species, and to interpret the climate in which they lived.
Island systems often function as natural laboratories for testing evolutionary hypotheses, since they are less complex than continental systems. The evolutionary theory of aging predicts increased longevity in insular endemic species as part of an evolutionary strategy that pushes island endemics toward a slower life cycle, driven by the absence of predators and limited resources. In this context, Xavier Jordana and his co-authors, whose work is published today in the online edition of the Proceedings of the Royal Society B, asked whether the increased tooth height of herbivores endemic to islands might be an evolutionary response to this longevity. This would call into question the consensus that has so far explained this morphological characteristic by differences in diet and climate.
The paper "Evidence of correlated evolution of hypsodonty and exceptional longevity in endemic insular mammals" concludes that Myotragus balearicus, the fossil species chosen for this study, did indeed need higher teeth to live as long as it did. Hypsodonty, the experts' term for a high dental crown, can thus be an indicator of a long-lived species.
As explained by the ICP researcher Xavier Jordana, lecturer at the Universitat Autònoma de Barcelona in the master's programs in Human Biology and in Paleontology and main author of this work, "the study focuses on a fossil species, but our results have implications for herbivorous mammals in general, extinct and extant, and especially for insular endemic species. The latter share some common characteristics, known as the island syndrome, which distinguish them from their mainland relatives, as they evolve in special ecological conditions, such as the lack of predators, high population density and limited resources."
The newly published research analyzes the diet, longevity and mortality patterns of M. balearicus, a fossil bovid endemic to the Balearic Islands. The paper concludes that, despite being extremely hypsodont, M. balearicus was mostly a browser that fed on the leaves and shoots of trees and shrubs, and probably also on tubers and roots, which cause excessive tooth abrasion because the animal has to dig into the ground to reach them. Its diet was not as abrasive as that of grazers, which feed mainly on grasses and therefore exhibit higher teeth. Feeding habits alone, however, are not sufficient to explain the hypsodonty of Myotragus.
By counting the annual growth lines in the tooth cementum of M. balearicus, the researchers estimated its longevity at about 27 years, almost double that expected for a bovid of its body mass. In addition, the study of mortality patterns in two populations of M. balearicus, one from Cova Estreta and another from Cova des Moro in Mallorca, shows juvenile and adult survival rates higher than those of extant continental bovids. This means that a large proportion of the population reached advanced ages and, therefore, that M. balearicus was a species with a slow senescence rate, i.e., with late aging.
These results are consistent with the evolutionary theory of aging that predicts the delay of senescence in populations with low extrinsic mortality. In an environment where few external elements can cause death of individuals, such as the lack of predators on an island, the species adapts by changing its aging rate and lifespan. For herbivores one way to do that is to select those individuals in the population with higher teeth, for which senescence starts later.
The fossil genus Myotragus has been an ideal model for studies of evolution on islands, and M. balearicus is the terminal species, which became extinct about 3,000 years ago. Myotragus survived completely isolated on Mallorca and Menorca for more than 5 million years, from the Pliocene to the Holocene. During its evolution Myotragus underwent significant changes, particularly affecting the locomotor system and its body size, as well as its nervous system and feeding apparatus. Dwarfism, reduced brain size and changes in dentition are its most distinctive evolutionary traits. Many of these morphological features are shared by island faunas generally, as is the case with the increased crown height of the molar teeth.
Read more at Science Daily
Smuggled Cargo Found on Ancient Roman Ship
Evidence of ancient smuggling activity has emerged from a Roman shipwreck, according to Italian archaeologists who have investigated the vessel's cargo.
Dating to the third century AD, the large sunken ship was fully recovered six months ago at a depth of 7 feet near the shore of Marausa Lido, a beach resort near Trapani.
Her cargo, officially consisting of assorted jars once filled with walnuts, figs, olives, wine, oil and fish sauce, also contained many unusual tubular tiles.
The unique tiles were apparently valuable enough for sailors to smuggle them from North Africa to Rome, where they sold for higher prices.
"They are small terracotta tubes with one pointed end. Put one into the other, they formed interlocking, snake-like tiles. Rows of these so-called fictile tubes were used by Roman builders to relieve the weight of vaulting," Sebastiano Tusa, Sicily's Superintendent of the Sea Office, told Discovery News.
Tusa will detail the wreck discovery in a forthcoming publication by the Museum of the Sea in Cesenatico, within a national meeting of underwater archaeology and naval history.
Following an analysis of the jars and their contents, Tusa and colleagues concluded that the 52- by 16-foot ship was sailing from North Africa when she sank some 1,700 years ago, probably while trying to enter the local river Birgi.
In North Africa the vaulting tubes cost a quarter of what builders paid for them in Rome.
"It was a somewhat tolerated smuggling activity, used by sailors to round their poor salaries. They bought these small tubes cheaper in Africa, hid them everywhere within the ship, and then re-sold them in Rome," Tusa said.
According to Frank Sear, professor of classical studies at the University of Melbourne, vaults featuring rows of fictile tubes were most common in North Africa from about the 2nd century AD.
"The tiles were also frequently imported to Sicily and turn up in many places such as Syracuse, Catania, Marsala and Motya. There are good examples of them in the baths of the late Roman villa at Piazza Armerina," Sear, a leading authority on Roman architecture, told Discovery News.
Read more at Discovery News
Mystery Sea Beast of Cincinnati Found
The discovery of a very large, very mysterious "monster" by an amateur Ohio paleontologist has researchers baffled and asking for answers.
Around 450 million years ago, shallow seas covered Cincinnati -- and harbored one very large organism. And despite its size, no one has ever found a fossil of this "monster" until its discovery by an amateur paleontologist last year.
The fossilized specimen, a roughly elliptical shape with multiple lobes totaling almost seven feet in length, was discovered by Ron Fine of Dayton, Ohio. He's a member of the Dry Dredgers, an association of amateur paleontologists based at the University of Cincinnati that has a long history of collaborating with professional scientists.
"I knew right away that I had found an unusual fossil," Fine said. "Imagine a saguaro cactus with flattened branches and horizontal stripes in place of the usual vertical stripes. That’s the best description I can give."
The Cincinnati region has been vigorously studied over the last two centuries, making the find even more impressive -- not only because it was the work of an amateur, but also because of its size.
"When I finally finished it was three-and-a-half feet wide and six-and-a-half feet long," Fine said. "In a world of thumb-sized fossils, that's gigantic!"
David L. Meyer of the University of Cincinnati geology department and co-author of "A Sea without Fish: Life in the Ordovician Sea of the Cincinnati Region" agreed that it might be the largest fossil recovered from the Cincinnati area.
"It's definitely a new discovery," Meyer said. "And we're sure it's biological. We just don't know yet exactly what it is."
"I've been fossil collecting for 39 years and never had a need to excavate. But this fossil just kept going, and going, and going," Fine added. "I had to make 12 trips, over the course of the summer, to excavate more material before I finally found the end of it."
Other specialists have been unable to explain the mystery monster, they said. Fine presented his discovery Tuesday at a regional meeting of the Geological Society of America in his quest for answers.
Read more at Discovery News
Body Armor of First Land Animals Stymied Acid
The lumpy bumpy body armor of the world’s earliest four-legged animals might have served a surprising function -- acid relief.
The study, published in the latest Proceedings of the Royal Society B, could help to explain how some sea creatures managed to transition to a more terrestrial lifestyle circa 370 million years ago. It could also solve the mystery as to why these early animals, called tetrapods, had such odd-looking body armor.
“Dermal bones in the skull roof and on the front of the shoulder girdle are a general feature of bony fishes, as is having scales in the skin with a component of dermal bone,” lead author Christine Janis told Discovery News.
“The issue in these early tetrapods is the highly ornate, ‘sculpture’ appearance of these bones that have been seen as a sort of dermal armor,” added Janis, a professor of ecology and evolutionary biology at Brown University.
The inspiration for the research came 10 years ago, when co-author Daniel Warren noticed that modern leopard frogs draw on the bone in their skin to buffer acid buildup. Turtles and caimans do this too, according to the researchers.
Janis explained that all birds, mammals and reptiles use rib breathing to ventilate their lungs, enabling oxygen intake and carbon dioxide release. Amphibians cannot do this, but they are small enough that they can use their skin to release CO2. Without such systems, animals could fill up with acid.
“If CO2 builds up in the blood it basically goes into solution with the water: H2O plus CO2 equals H2CO3, carbonic acid,” she said.
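The buffering chemistry Janis alludes to can be made explicit. The following is a standard textbook sketch of the bicarbonate buffer system, not taken from the paper itself:

```latex
% CO2 dissolving in blood forms carbonic acid, which dissociates:
\[
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}
\]
% Calcium (or magnesium) released from dermal bone mineral can
% neutralize the excess protons, e.g. via carbonate dissolution:
\[
\mathrm{CaCO_3 + H^+ \rightarrow Ca^{2+} + HCO_3^-}
\]
```

On this picture, dissolving a small amount of bone mineral consumes H+ and shifts the first equilibrium to the right, letting the animal tolerate a temporary CO2 load until it can return to water.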
Fish can lose CO2 over their gills, but gills only work in water. Early tetrapods faced numerous challenges when they ventured on land, probably enticed by food. It would be like humans suddenly needing to develop an ocean-based existence without the benefit of technology.
Most of the early tetrapods, like Eryops (aka “drawn out face”), were too big to solve their CO2 buildup through simple skin release. Their lungs were not well suited to rapid breathing because, unlike humans or even reptiles, their ribs were immobile.
As a result, these first four-legged creatures may have drawn upon their nonstructural bone or other mineral deposits to neutralize acidity. Calcium and magnesium ions, when released into the blood, are what buffer the acid.
“This would likely be a temporary solution, sort of like storing up the acid until you could go back to the water and get rid of it via diffusion there,” Janis said. “But the dermal bone would allow the animals to stay out on land longer than they would otherwise be able to do, so you can see it would have a cumulative adaptive advantage.”
The authors support their theory by pointing out that primarily aquatic tetrapods, such as Whatcheeria, had very little dermal bone sculpture, while the more terrestrial relative Pederpes had more. Another prediction is that tetrapods with expandable ribs would have fewer dermal lumps and bumps. That too is borne out in species of gator-like anthracosaurs, which are related to reptiles.
The researchers also show that semi-terrestrial prehistoric species, more closely related to modern amphibians, also had less dermal bone sculpture and could have used their skin to eliminate CO2.
Jason Anderson, an associate professor in the University of Calgary’s Faculty of Veterinary Medicine, told Discovery News that the new theory is “possible, but much more research would need to be done to establish that it even occurs in living animals, as they propose.”
Read more at Discovery News
Apr 24, 2012
Did Exploding Stars Help Life On Earth to Thrive?
Research by a Danish physicist suggests that the explosion of massive stars -- supernovae -- near the Solar System has strongly influenced the development of life. Prof. Henrik Svensmark of the Technical University of Denmark (DTU) sets out his novel work in a paper in the journal Monthly Notices of the Royal Astronomical Society.
When the most massive stars exhaust their available fuel and reach the end of their lives, they explode as supernovae, tremendously powerful explosions that are briefly brighter than an entire galaxy of normal stars. The remnants of these dramatic events also release vast numbers of high-energy charged particles known as galactic cosmic rays (GCR). If a supernova is close enough to the Solar System, the enhanced GCR levels can have a direct impact on the atmosphere of Earth.
Prof. Svensmark looked back through 500 million years of geological and astronomical data and considered the proximity of the Sun to supernovae as it moves around our Galaxy, the Milky Way. In particular, when the Sun is passing through the spiral arms of the Milky Way, it encounters newly forming clusters of stars. These so-called open clusters, which disperse over time, have a range of ages and sizes and will have started with a small proportion of stars massive enough to explode as supernovae. From the data on open clusters, Prof. Svensmark was able to deduce how the rate at which supernovae exploded near the Solar System varied over time.
Comparing this with the geological record, he found that the changing frequency of nearby supernovae seems to have strongly shaped the conditions for life on Earth. Whenever the Sun and its planets have visited regions of enhanced star formation in the Milky Way Galaxy, where exploding stars are most common, life has prospered. Prof. Svensmark remarks in the paper, "The biosphere seems to contain a reflection of the sky, in that the evolution of life mirrors the evolution of the Galaxy."
In the new work, the diversity of life over the last 500 million years seems remarkably well explained by tectonics affecting the sea-level together with variations in the supernova rate, and virtually nothing else. To obtain this result on the variety of life, or biodiversity, he followed the changing fortunes of the best-recorded fossils. These are from invertebrate animals in the sea, such as shrimps and octopuses, or the extinct trilobites and ammonites.
When the most massive stars exhaust their available fuel and reach the end of their lives, they explode as supernovae, tremendously powerful explosions that are briefly brighter than an entire galaxy of normal stars. The remnants of these dramatic events also release vast numbers of high-energy charged particles known as galactic cosmic rays (GCR). If a supernova is close enough to the Solar System, the enhanced GCR levels can have a direct impact on the atmosphere of Earth.
Prof. Svensmark looked back through 500 million years of geological and astronomical data and considered the proximity of the Sun to supernovae as it moves around our Galaxy, the Milky Way. In particular, when the Sun is passing through the spiral arms of the Milky Way, it encounters newly forming clusters of stars. These so-called open clusters, which disperse over time, have a range of ages and sizes and will have started with a small proportion of stars massive enough to explode as supernovae. From the data on open clusters, Prof. Svensmark was able to deduce how the rate at which supernovae exploded near the Solar System varied over time.
Comparing this with the geological record, he found that the changing frequency of nearby supernovae seems to have strongly shaped the conditions for life on Earth. Whenever the Sun and its planets have visited regions of enhanced star formation in the Milky Way Galaxy, where exploding stars are most common, life has prospered. Prof. Svensmark remarks in the paper, "The biosphere seems to contain a reflection of the sky, in that the evolution of life mirrors the evolution of the Galaxy."
In the new work, the diversity of life over the last 500 million years seems remarkably well explained by tectonics affecting the sea-level together with variations in the supernova rate, and virtually nothing else. To obtain this result on the variety of life, or biodiversity, he followed the changing fortunes of the best-recorded fossils. These are from invertebrate animals in the sea, such as shrimps and octopuses, or the extinct trilobites and ammonites.
They tended to be richest in their variety when continents were drifting apart and sea levels were high, and less varied when the land masses gathered 250 million years ago into the supercontinent Pangaea and sea levels were lower. But this geophysical effect was not the whole story. When it is removed from the record of biodiversity, what remains corresponds closely to the changing rate of nearby stellar explosions, with the variety of life being greatest when supernovae are plentiful. A likely reason, according to Prof. Svensmark, is that the cold climate associated with high supernova rates brings a greater variety of habitats between polar and equatorial regions, while the associated stresses of life prevent ecosystems from becoming too set in their ways.
He also notices that most geological periods seem to begin and end with either an upturn or a downturn in the supernova rate. The changes in typical species that define a period, in the transition from one to the next, could then be the result of a major change in the astrophysical environment.
Life's prosperity, or global bioproductivity, can be tracked by the amount of carbon dioxide in the air at various times in the past as set out in the geological record. When supernova rates were high, carbon dioxide was scarce, suggesting that flourishing microbial and plant life in the oceans consumed it greedily to grow. Support for this idea comes from the fact that microbes and plants dislike carbon dioxide molecules that contain a heavy form of carbon atom, carbon-13. As a result, the ocean water is left enriched by carbon-13. The geological evidence shows high carbon-13 when supernovae were commonest -- again pointing to high productivity. As to why this should be, Prof. Svensmark notes that growth is limited by available nutrients, especially phosphorus and nitrogen, and that cold conditions favour the recycling of the nutrients by vigorously mixing the oceans.
Although the new analysis suggests, perhaps surprisingly, that supernovae are on the whole good for life, high supernova rates can bring the cold and changeable climate of prolonged glacial episodes. And they can have nasty shocks in store. Geoscientists have long been puzzled by many relatively brief falls in sea-level by 25 metres or more that show up in seismic soundings as eroded beaches. Prof. Svensmark finds that they are what can be expected when chilling due to very close supernovae causes short-lived glacial episodes. With frozen water temporarily bottled up on land, the sea-level drops.
Read more at Science Daily
Why We're Drawn to Fire
As America's $2 billion candle industry attests, there is something mesmerizing about a flickering flame. Most people love to feel fire's warmth, to test its limits, and to watch the way it consumes fuel. When there's a candle or bonfire around, why can't we help but stare?
A dancing fire is pretty, as well as tantalizingly dangerous, but there may be a much deeper reason for our attraction to it. Daniel Fessler, an evolutionary anthropologist at the University of California, Los Angeles, has conducted research that indicates an adult's fascination with fire is a direct consequence of not having mastered it as a child. Fire has been crucial to human survival for around one million years, and in that time, Fessler argues, humans have evolved psychological mechanisms specifically dedicated to controlling it. But because most Westerners no longer learn how to start, maintain and use fire during childhood, we instead wind up with a curious attraction to it -- a burning desire left to languish.
"My preliminary findings indicate that humans are not universally fascinated by fire," Fessler told Life's Little Mysteries. "On the contrary, this fascination is a consequence of inadequate experience with fire during development."
In societies where fire is traditionally used daily as a tool, Fessler has found that children are interested in fire only until they attain mastery of it. After that point -- usually around age 7 -- people display little interest in fire and merely use it as one would use any ordinary tool. "Hence, the modern Western fascination with fire may reflect the unnatural prolongation into adulthood of a motivational system that normally serves to spur children to master an important skill during maturation," Fessler wrote in an email.
Unlike a spider that inherently knows how to weave a web, humans don't instinctively know how to produce and control fire. The ability must be learned during childhood. This may be because there was no universal method of fire building and control among our ancestors, who lived in diverse environments, and so there was no single method for evolution to ingrain in us. Instead, "fire learning" became the instinct. As Fessler put it in an article in the Journal of Cognition and Culture, "The only avenue open to selection processes operating on a species as wide-ranging as ourselves was to rely on learning for the acquisition of the requisite behaviors."
Children are universally fascinated by predatory animals in much the same way they are fascinated by fire. Because both could seriously harm or kill them, evolution has primed children to be interested in these subjects, Fessler argues, as a way of ensuring that they pay special attention to information about them. For example, children are naturally curious about which animals are dangerous and which aren't, which materials are flammable and which aren't, and what the consequences are of adding, removing and rearranging objects in a fire. Our brains soak up this predator and fire knowledge.
In the United States, children's natural inclination to learn about fire is evidenced by the hundreds of deaths that occur each year due to "fire play," or the deliberate setting of a fire for no purpose beyond the fire itself. A study by the psychiatrist David Kolko of the University of Pittsburgh found that about three-quarters of children set a play fire during the three-year window of the study (1999-2001). Prior studies found that curiosity was the primary motive for the behavior, which, fire department records show, peaks at age 12.
Read more at Discovery News
Are Pet Psychics Real?
We can all name at least a handful of fictional characters who can communicate with animals: Tarzan, Aquaman, the Horse Whisperer (a character appearing in a Nichols Evans novel and Robert Redford film), Dr. Doolittle and others.
Of course, most people communicate with animals all the time; pet lovers are famous for cooing baby talk to their animals and repeatedly asking banal, rhetorical questions like, "Do you want some food?" or "Who's a good boy? Who's a good boy?"
But pet psychics claim to do something more remarkable: They speak to animals and get information back. This is done, they say, by some sort of interspecies psychic power or telepathy.
Of course, psychic communication between humans has never been scientifically proven, so claims of psychic communication between animals and humans begin on very shaky ground. At least humans can share a common language; how a psychic could possibly translate the thoughts and intentions of a parakeet, fish, hamster, horse, spider or any other animal into human language is a mystery.
Yet thousands of people in real life claim to have exactly such a remarkable ability. For example, a Canadian woman named Lauren Bode claims she's a real-life horse whisperer.
Bode says that several horses at Toronto's Far Enough Farm telepathically told her that they are upset about plans to move them from their current location to another farm nearby. They are anxious about the June 30 move and worried about whether they will like their new home.
Bode did not explain how exactly the horses told her this, nor how they got wind of the news about the planned relocation; perhaps they learned enough English to eavesdrop on their trainers' conversations. If so, it would not be the first time that a horse was able to fool humans into believing it could understand languages.
Human-Animal Communication
A 2008 poll found that 67 percent of pet owners say they understand their animals' purrs, barks and other noises, and 62 percent said that when they speak, their pet understands them (or at least their intent).
One in five owners claims that they and their pets understand each other completely. One-quarter of cat owners said they completely understood their pets' sounds, while only 16 percent of dog owners said they were fluent in barks. This is not necessarily psychic power, but intuition, guesswork and common sense.
Even something as simple as tone of voice carries a huge amount of information about intent; sharp, loud phrases like "Stop!" convey a clearly different meaning than a smooth, sing-song phrase like, "Hey buddy!"
Most animals (including humans) pick up plenty of accurate nonverbal cues about each other's intentions. And, of course, just because pet owners (or psychics) think they communicate well with their pets doesn't mean the other party feels the same way; just ask any couple.
Testing Pet Psychics
One major problem with trying to prove pet psychic abilities is that usually there's no way to know if the information they give is accurate or not. For example, if a concerned pet owner consults a psychic about her kitty's unusual behavior and is told that her cat says he's acting out because he doesn't like the color of the new drapes in the living room, or a neighbor dog looked at him menacingly, or he senses tension in the marriage, who's to say the psychic is wrong? The psychic could be misunderstanding the cat's messages -- or even making it all up -- and there's no way to prove otherwise.
Read more at Discovery News
Minivan-sized Asteroid Exploded Over California
The source of loud "booms" accompanied by a bright object traveling through the skies of Nevada and California on Sunday morning has been confirmed: It was a meteor. A big one.
It is thought to have been a small asteroid that slammed into the atmosphere at a speed of 15 kilometers per second (33,500 mph), turning into a fireball, and delivering an energy of 3.8 kilotons of TNT as it broke up over California's Sierra Nevada mountains. Bill Cooke, head of NASA's Meteoroid Environment Office, classified it as a "big event."
"I am not saying there was a 3.8 kiloton explosion on the ground in California," Cooke told Spaceweather.com. "I am saying that the meteor possessed this amount of energy before it broke apart in the atmosphere. (The map) shows the location of the atmospheric breakup, not impact with the ground."
Cooke went on to say that the meteor likely penetrated very deep into the atmosphere, producing the powerful sonic booms that rattled homes across the region. According to Reuters, car alarms in Carson City, Nev., were even triggered.
After some rough calculations, Cooke has been able to estimate the mass of the incoming object -- around 70 metric tons. This was a fairly hefty piece of space rock. From this estimate he was also able to arrive at an approximate size of the meteor: "Hazarding a further guess at the density of 3 grams per cubic centimeter (solid rock), I calculate a size of about 3-4 meters, or about the size of a minivan."
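Cooke's back-of-the-envelope size estimate is easy to reproduce: treating the object as a solid-rock sphere of the stated density and mass, the diameter falls right in his 3-4 meter range. The short sketch below uses only the figures quoted in the article; the spherical shape is a simplifying assumption.

```python
import math

# Figures from the article: ~70 metric tons of solid rock at 3 g/cm^3.
mass_kg = 70_000            # estimated mass of the meteor
density = 3000              # 3 g/cm^3 expressed as kg/m^3

# Solve m = rho * (4/3) * pi * r^3 for the radius of an equivalent sphere.
volume = mass_kg / density                       # ~23.3 m^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
diameter = 2 * radius

print(f"Estimated diameter: {diameter:.1f} m")   # ~3.5 m -- "about the size of a minivan"
```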
Although there were numerous reports of sightings in Nevada and California, there were few immediate clues as to where the fireball ended its journey. But with the help of two infrasound stations, the source of the explosion could be resolved.
"Elizabeth Silber at Western University has searched for infrasound signals from the explosion," said Cooke. "Infrasound is very low frequency sound which can travel great distances. There were strong signals at 2 stations, enabling a triangulation of the energy source at 37.6N, 120.5W. This is marked by a yellow flag in the map (above)."
Interestingly, the estimated size of the California fireball is bigger than the 3-meter-wide asteroid that exploded over Sudan in 2008, which delivered an energy of 1.1–2.1 kilotons of TNT. That object, asteroid 2008 TC3, was the first space rock ever detected before it hit Earth's atmosphere. With astonishing accuracy, astronomers at the Mount Lemmon telescope in Arizona spotted the tiny asteroid, and infrasound stations in Kenya triangulated the location where it hit. Using this information, meteorite hunters were able to recover fragments of the asteroid.
Read more at Discovery News
Apr 23, 2012
The Search Is On For Elusive White Whale
A team of Russian scientists say they will embark on a quest next week to observe the only all-white, adult killer whale ever spotted -- a majestic and elusive bull they have named Iceberg.
The researchers from the universities of Moscow and Saint Petersburg first spotted the orca's towering, two-metre (about six feet) dorsal fin break the surface near the Commander Islands in the North Pacific in August 2010.
Living in a pod with 12 other family members, Iceberg was deemed to be at least 16 years old, given the size of his dorsal fin, said Erich Hoyt, co-director of the Far East Russia Orca Project (FEROP).
"This is the first time we have ever seen an all-white, mature male orca," Hoyt told AFP. "It is a breathtakingly beautiful animal."
The scientists decided to hold back on releasing photographs of Iceberg until they were able to study him further, "but we have been looking for him ever since," said Hoyt.
Orcas can travel thousands of miles.
The scientists would like to establish whether Iceberg is albino -- a genetic condition that leaves animals unable to produce melanin, a dark pigment of skin, hair and the eye's retina and iris.
Many albino animals never survive into adulthood: their conspicuousness is a disadvantage both in hunting for food and in hiding from predators.
Two other white orcas are known to live in the waters where Iceberg was spotted, east of the Kamchatka Peninsula in Russia's Far East, but they are juveniles.
Read more at Discovery News
Costs of 'Dirty Bomb' Attack in Los Angeles
A dirty bomb attack centered on downtown Los Angeles' financial district could cost the region's economy nearly $16 billion, driven primarily by psychological effects that could persist for a decade.
The study, published by a team of internationally recognized economists and decision scientists in the current issue of Risk Analysis, monetized the effects of fear and risk perception and incorporated them into a state-of-the-art macroeconomic model.
"We decided to study a terrorist attack on Los Angeles not to scare people, but to alert policymakers just how large the impact of the public's reaction might be," said study co-author William Burns, a research scientist at Decision Research in Eugene, Ore. "This underscores the importance of risk communication before and after a major disaster to reduce economic losses."
Economists most often focus on the immediate economic costs of a terrorist event, such as injuries, cleanup and business closures. In this scenario, those initial costs would total just over $1 billion.
"Terrorism can have a much larger impact than first believed," said study co-author Adam Rose, a research professor with the USC Price School of Public Policy and USC's Center for Risk and Economic Analysis of Terrorism Events (CREATE). "The economic effects of the public's change in behavior are 15 times more costly than the immediate damage in the wake of a disaster."
"These findings illustrate that because the costs of modern disasters are so large, even small changes in public perception and behaviors may significantly affect the economic impact," said Rose, who has published economic estimates of the 9/11 attacks, the Northridge Earthquake and other major disasters.
To estimate how fear and risk perception ripple through the economy after a major terrorist event, the researchers surveyed 625 people nationwide after showing them a mock newspaper article and newscasts about the hypothetical dirty bomb attack to gauge the public's reticence to return to normal life in the financial district.
The study translated these survey results into estimates of what economic premiums would be put on wages and what discounts shoppers would likely require in the aftermath of a terrorist attack.
After six months, 41 percent of those surveyed said they would still not consider shopping or dining in the financial district. And, on average, employees would demand a 25 percent increase in wages to return to their jobs.
"The stigma generated by dirty bomb radiation could generate large changes in the perceived risk of doing business in the region," said co-author James Giesecke of the Centre of Policy Studies at Monash University. "However, with regional economies in competition with one another for customers, businesses, and employees, it takes only small changes in perceived risk to generate big losses in economic activity."
The paper relied on one of 15 planning scenarios -- the detonation of a dirty bomb in a city center -- identified by the Department of Homeland Security in an effort to focus anti-terrorism spending nationwide.
Read more at Science Daily
Marco Polo Really Did Go To China, Maybe
A meticulous description of currency and salt production could restore Marco Polo's honor by proving that he really did go to China, a new study into the explorer's accounts of the Far East suggests.
"Polo was not a swindler," said Hans Ulrich Vogel, Professor of Chinese Studies at the German University of Tübingen.
Vogel reexamines the great Venetian traveler in a new book entitled "Marco Polo Was in China."
"The strongest evidence is that he provided complex and detailed information about monetary conditions, salt production, public revenues and administrative geography which have been overlooked so far, but are fully corroborated in Chinese sources," Vogel told Discovery News.
The historian noted that these Chinese sources were collated or translated long after Marco Polo’s time.
"So he could not have drawn on them. He could not even read Chinese," he said.
Although there is no doubt Polo existed -- traces of his family home are still around in Venice -- many have questioned whether the 13th-century traveler actually made it to China.
Skeptics argue that his famous travelogue Description of the World, commonly called The Travels of Marco Polo, does not mention several distinctive features of Chinese society.
Born around 1254 in the Venetian Republic, Polo claimed that he reached China with his merchant father and uncle in 1275, at age 17. He would spend the next 24 years venturing through the Far East, serving for some years at the court of the Mongol emperor Kublai Khan.
The story of the explorer's return to Venice in 1295, and of the subsequent coup de théâtre when he threw back his Tartar clothes and let precious stones cascade over the floor, has become part of the Polo legend.
Since the mid-eighteenth century, doubts have been raised about Marco Polo’s presence in China.
Historian Frances Wood argued in her 1995 book "Did Marco Polo Go to China?" that the famous explorer didn't even get beyond Constantinople. She contends he made up the rest of the trip with the help of an imaginative ghostwriter, Rustichello da Pisa, who cobbled together details provided by fellow traders who had actually been there.
Although Polo's tales of exotic lands inspired Columbus and became the model for generations of explorers, scholars wondered why he did not mention tea, chopsticks, the Chinese writing system, and the practice of binding women's feet.
Furthermore, there is no mention of the Great Wall, while Marco, his father and his uncle are not recorded in any Chinese document.
According to Vogel, skeptics have often overestimated the frequency of documentation and the intentions of Chinese historiographers.
"Even Giovanni de Marignolli (1290-1357), an important papal envoy at the court of the Yuan rulers, is not mentioned in any Chinese sources – nor his 32-man retinue, nor the name of the pope," he said.
As for the Great Wall, new research has established that it did not exist at the time.
"The original wall had long since disintegrated, while the present structure -- a product of the Ming Dynasty (1368-1644) -- was yet to be erected," said Vogel.
Other historians have used these arguments to rehabilitate the Venetian traveler, but Vogel used new data, focusing on a largely neglected part of Polo's travelogue.
"He is the only one to describe precisely how paper for money was made from the bark of the mulberry tree. Not only did he detail the shape and size of the paper, he also described the use of seals and the various denominations of paper money," said Vogel.
According to the historian, Polo's Travels is not a 13th-century guidebook to China; on the contrary, it is a rather dry manual to the commerce of the Silk Road.
"Marco Polo reported on the monopolizing of gold, silver, pearls and gems by the state – which enforced a compulsory exchange for paper money," said Vogel.
"He described the punishment for counterfeiters, as well as the 3% exchange fee for worn-out notes and the widespread use of paper money in official and private transactions," he added.
Vogel noted that Polo is also the only one among his contemporaries to explain that paper money was not in circulation in all parts of China, but was used primarily in the north and in the regions along the Yangtze.
According to Polo, cowries, salt, gold and silver were the main currencies in other parts of China such as Fujian and Yunnan.
"This information is confirmed by archaeological evidence and Chinese sources compiled long after Polo had written his Travels," Vogel said.
The Venetian traveler's description of salt production was also accurate and unique. Not only did he list the most important salt-production centers, he also described the methods used to make salt and detailed the value of salt production.
Read more at Discovery News
New Purple Crab Species Found in Philippines
Four new species of freshwater crab, bright purple in color, have been discovered in the biologically diverse but ecologically threatened Philippines, the man who found them said Saturday.
The tiny crustaceans burrow under boulders and roots in streams, feeding on dead plants, fruits, carrion and small animals in the water at night, said Hendrik Freitag of Germany's Senckenberg Museum of Zoology.
Found only in small, lowland-forest ecosystems in the Palawan island group, most have purple shells, with claws and legs tipped red.
"It is known that crabs can discriminate colors. Therefore, it seems likely that the coloration has a signal function for the social behavior, e.g. mating," Freitag told AFP by email on Saturday.
"This could explain why large males of various Insulamon species are more reddish compared to the generally violet females and immature males."
Scientists began extensive investigations of similar freshwater crabs in the area in the late 1980s, when one new species was found -- the Insulamon unicorn, Freitag said.
More field work led Freitag to conclude there were four other unique species.
"Based on available new material, a total of five species are recognized... four of which are new to science," Freitag wrote in the latest edition of the National University of Singapore's Raffles Bulletin of Zoology.
The carapace of the biggest, Insulamon magnum, is just 53 millimeters by 41.8 millimeters, while the smallest, Insulamon porculum, measures 33.1 by 25.1 millimeters.
The two other new species were called Insulamon palawense and Insulamon johannchristiani.
The four differ slightly from the first find, and from each other, in the shapes of their body shells, legs, and sex organs.
US-based Conservation International lists the Philippines as one of 17 countries that together harbor most of Earth's plant and animal life.
Read more at Discovery News
Apr 22, 2012
Eternal Life? Nobel Laureate Rita Levi Montalcini Turns 103
Has Dr. Rita Levi Montalcini unlocked the secret of eternal life? The oldest living and the longest-lived Nobel laureate in history, Montalcini celebrates her 103rd birthday today.
"I can say my mental capacity is greater today than when I was 20, since it has been enriched by so many experiences," she says.
Her longevity might be the result of an unusual potion she takes every day in the form of eye drops -- a dose of nerve growth factor (NGF), which she discovered jointly with American co-worker Stanley Cohen in June 1951 in the labs of Washington University in St. Louis.
A protein essential for the growth, maintenance and survival of sensory and sympathetic neurons (nerve cells) in the peripheral nervous system, NGF was not widely recognized until 1986, when it won Levi-Montalcini and Cohen the Nobel Prize in Physiology or Medicine.
Levi-Montalcini still follows the developments of her findings at the European Brain Research Institute, which she founded in Rome.
Indeed, her work has had a significant influence on research exploring several diseases, including cancer, Parkinson's disease and Alzheimer's disease.
Rita, as she prefers to be called, celebrated her birthday privately, raising a toast with some of her closest collaborators.
As always, she was exquisitely dressed and wore some ancient jewels, the Italian daily Affari Italiani writes.
In line with her motto "I look forward," she decided to wait for the cake until this fall.
"She will celebrate at the Brain Forum in Rome, which is dedicated to her amazing career," the daily wrote.
"Grazie! Thank you," Levi-Montalcini wrote on her Facebook page, in response to innumerable birthday wishes.
For now, she is pleased to find that her Facebook likes grew "from 2,000 to 200,000."
She continues to work as a senator for life, and last month harshly criticized Mario Monti's government of technocrats for abolishing the peer-review mechanism in research funding policies.
"Italy -- and quite possibly the world -- has never seen a scientist quite like her," the journal Nature wrote on the occasion of her widely celebrated 100th birthday.
Born with her twin sister Paola (who died in 2000, aged 91) to a Jewish family in Turin in 1909, Levi-Montalcini went to medical school despite the objections of her father. He worried that her work as a doctor would interfere with her duties as a future wife and mother.
"At twenty, I realized that I could not possibly adjust to a feminine role as conceived by my father, and asked his permission to engage in a professional career," Levi-Montalcini wrote in her biography.
"In eight months I filled my gaps in Latin, Greek and mathematics, graduated from high school, and entered medical school in Turin," she added.
She graduated in 1936, but two years later her career was halted by Mussolini's laws banning "inferior races" from academic and professional careers.
Undaunted, Levi-Montalcini set up an improvised laboratory in her bedroom during World War II, and studied the growth of nerve fibers in chicken embryos.
Read more at Discovery News
History Is Key Factor in Plant Disease Virulence
The virulence of plant-borne diseases depends on not just the particular strain of a pathogen, but on where the pathogen has been before landing in its host, according to new research results.
Scientists from the University of California System and the U.S. Department of Agriculture's Agricultural Research Service (USDA ARS) recently published the results in the journal PLoS ONE.
The study demonstrates that the pattern of gene regulation -- that is, how a cell determines which of its genes to express and how strongly -- rather than genetic makeup alone affects how aggressively a microbe will behave in a plant host.
The pattern of gene regulation is formed by past environments, or by an original host plant from which the pathogen is transmitted.
"If confirmed, this finding could add a key new dimension to how we look at microbes because their history is going to matter--and their history may be hard to reconstruct," said Matteo Garbelotto, an environmental scientist at the University of California, Berkeley and co-author of the paper.
Epigenetic factors--for example, gene regulation mechanisms controlled by diet or exposure to extreme environments--are well-known to affect the susceptibility of humans to some diseases.
The new study is the first to show a similar process for plant pathogens.
"Sudden oak death, for example, is one of many pathogens that seemingly came out of nowhere to ravage the forests of California," said Sam Scheiner, a director of the National Science Foundation's (NSF) Ecology and Evolution of Infectious Diseases (EEID) program, which funded the research.
"This study shows that such sudden emergence can happen through rapid evolution, and may provide clues for predicting future epidemics."
The EEID program is a joint effort of NSF and the National Institutes of Health. At NSF, it is supported by the Directorates for Biological Sciences and Geosciences.
Garbelotto said that other scientists had hypothesized that gene regulation affects plant pathogens, based on the evolutionary rates of portions of the genome known to influence gene regulation.
"Our work provides the concrete evidence those hypotheses were correct," he said.
Researchers showed that genetically identical strains of the sudden oak death pathogen isolated from different plant hosts were strikingly different in their virulence and their ability to proliferate.
They also demonstrated that these traits were maintained long after they had been isolated from their hosts.
"We found that an identical strain placed in two different plant hosts will undergo distinct changes that will persistently affect the strain's virulence and fitness," said Takao Kasuga, a molecular geneticist with the USDA ARS and the lead author of the paper.
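The central finding -- that behavior is set by expression state, shaped by the previous host, rather than by genotype alone -- can be pictured with a minimal sketch. The strain identifier, gene names, expression values, and scoring rule below are all hypothetical illustrations, not the study's data or model:

```python
# Minimal sketch of the paper's idea: two strains with IDENTICAL
# genotypes but different expression histories (from different host
# plants) behave differently. All names and numbers are hypothetical.

from dataclasses import dataclass, field

@dataclass
class PathogenStrain:
    genotype: str                      # genetic makeup (identical below)
    expression: dict = field(default_factory=dict)  # gene -> level, set by past host

    def virulence_score(self) -> float:
        # Toy scoring rule: virulence tracks expression of effector
        # genes, not the genotype string.
        return sum(self.expression.values())

# Genetically identical isolates recovered from different host plants...
from_bay_laurel = PathogenStrain("ramorum-A", {"effector1": 0.9, "effector2": 0.8})
from_oak        = PathogenStrain("ramorum-A", {"effector1": 0.2, "effector2": 0.1})

# ...share a genotype yet differ in aggressiveness, because the
# expression pattern persists after isolation from the host.
assert from_bay_laurel.genotype == from_oak.genotype
print(from_bay_laurel.virulence_score() > from_oak.virulence_score())
```

The design point the sketch captures is that knowing `genotype` alone cannot predict `virulence_score`; the expression state, a record of where the strain has been, must be read as well.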
The implications for disease control are significant.
Scientists say that it may not be enough to know which strain of a pathogen they are dealing with in order to make treatment decisions; it may also be necessary to know how the pathogen's genes are being regulated.
This study shows that gene regulation may be the result of the environments the strain inhabited before being identified.
Garbelotto uses a parallel example of a well-known human pathogen: particular strains of the H1N1 flu virus have been identified as highly virulent, so a diagnosis of one of these strains indicates to doctors that they should treat that flu aggressively.
"But, hypothetically, if you caught one of these aggressive strains of H1N1 from a guy that went to, for example, Paris, it could be 10 times more dangerous. You may never know from whom you got it, and it's even less likely that you'll be able to learn where your infector visited before passing the germ on to you."
In plants, Garbelotto said, tracking a pathogen's history may prove even more difficult.
Correct information could give scientists a new weapon to use against virulent strains of diseases like sudden oak death, which can devastate forests and the ecosystems that depend on them.
The researchers also identified two groups of genes that are capable of affecting virulence and whose expression patterns are indicative of the previous host species they inhabited.
Understanding the regulation of these genes may provide scientists with future approaches to control a disease, such as manipulating gene expression to artificially reduce the aggressiveness of plant pathogens.
While Garbelotto stresses that more study is needed, he says if the paper's findings are confirmed, it could influence not just treatment but policy as well.
"Most countries impose regulations on microbes based on their genetic makeup -- which ones can and can't cross state and international lines and how they must be transported," he said.
"Our findings suggest that when making regulatory policy, we may also need to identify gene expression levels and take into account the history of a microbe."
Read more at Science Daily