During a trance-like session of psychography, experienced mediums in Brazil allow themselves to become receptive to spirits or dead souls. Then they write automatically, channeling the voices of those they believe to be speaking to them.
As these mediums communicate with the dead, a new study has found, the parts of their brains involved in language and purposeful activity shut down, alongside other patterns of increased and decreased activity.
The findings add to our limited understanding of how the spiritual brain works, though for now, science cannot speak to the existence of the spirit world.
"I don't think this does anything to make (the experience) less real or less profound or to make it less important in the moment," said Andrew Newberg, a neuroscientist at Thomas Jefferson University in Philadelphia.
"At some point, maybe we will design the perfect study that can prove there were not spirits there and this is just a fascinating way that the brain works," he added. "At the moment, all we're really doing is saying that this is what happens in the brain when you do this particular practice."
In an attempt to understand how the human brain experiences spirituality, Newberg and colleagues have studied a range of practices, including yoga, meditation, prayer and speaking in tongues.
This time, he turned to psychography, one of a variety of practices associated with mediums, who lose their own sense of self as they connect with external souls.
Of the ten Brazilian psychographers considered in the study, five were experts who had been practicing for an average of 37 years and conducted an average of 15 sessions per month. The other five were novices who had been practicing for far less time and practiced with much less frequency. All were well adjusted and mentally healthy.
Each medium entered a trance state and began writing. After 10 minutes, the scientists injected them with a radioactive tracer that traveled to the brain, where it essentially got locked in place, reflecting how blood was flowing to various parts of the brain at the moment of injection. When the session was over 15 minutes later, a scanner illuminated that moment for the researchers.
Compared to times when they were simply writing about their thoughts, sessions of psychography induced a number of brain changes in experienced mediums, the researchers report today in the journal PLoS ONE. Specifically, activity decreased in six areas, including the left hippocampus, the left anterior cingulate and the right superior temporal gyrus.
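The study's core contrast, each medium serving as his or her own control, is the kind of paired condition comparison that can be sketched in a few lines. The numbers, region labels and analysis below are purely illustrative assumptions, not the study's actual data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 5
regions = ["left hippocampus", "left ant. cingulate", "right sup. temporal gyrus"]

# Hypothetical tracer-uptake values (arbitrary units) for each subject
# and region, in the two writing conditions.
control = rng.normal(100.0, 5.0, size=(n_subjects, len(regions)))        # ordinary writing
trance = control - rng.normal(6.0, 1.0, size=(n_subjects, len(regions))) # psychography

# Paired t-statistic per region: negative values mean lower flow in trance.
diff = trance - control
t = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n_subjects))

for name, t_stat in zip(regions, t):
    print(f"{name}: t = {t_stat:.2f}")
```

Because each subject is compared against themselves, individual differences in baseline blood flow cancel out, which is why such small samples can still yield a usable signal.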
The parts that shut down while the spirits moved their hands are areas normally involved in actively writing, concentrating and processing language, Newberg said. Similar trends showed up in a previous study of people who spoke in tongues. Both groups shared the common belief that spirits moved through them to be heard.
Novices in the new study showed the opposite pattern, with increased activity in the same parts of the brain that shut down in the advanced practitioners, suggesting that training improves the ability of the brain to enter a spirit-channeling state.
To Newberg's surprise, experienced psychographers also consistently produced more complex language on the page when they entered a trance state.
"You would expect this to mean that language areas were more active because they were making more detailed writings," he said. "In fact, it was just the opposite. The less active the brain was and the more expert the person was, the more complex the writing was."
With so few studies done on the brains of people involved in spiritual activities, the new research is a helpful contribution to the field of neurotheology, said Patrick McNamara, a neuropsychologist at Northcentral University in Prescott Valley, Arizona.
More research might eventually reveal reliable patterns of brain activation that occur across spiritual disciplines, eventually offering insight into the roots of religion and why some people are more devout than others.
"Then we can ask the big questions," McNamara said. "Is that activation of the brain state necessary to enter into the spiritual experience? Or is the spiritual experience key to activating those brain areas?"
Read more at Discovery News
Nov 16, 2012
Danish Astronomer Not Poisoned
The 16th-century Danish astronomer Tycho Brahe did not die of mercury poisoning, according to a team of researchers who have analysed the scientist's remains.
The study started in 2010, when Brahe's remains were exhumed from his grave in Prague in a bid to investigate long-standing rumors about the astronomer's untimely death.
Brahe died on Oct. 24, 1601, only 11 days after the onset of a sudden illness. The first astronomer to describe a supernova, Brahe was apparently in good health and at the height of his career.
Not only had he catalogued more than 1,000 stars, he had also discovered a new star in the constellation Cassiopeia -- a shocking finding given the state of knowledge of the time, as the heavens were thought to be unchanging.
Amazingly, Brahe made all these discoveries without a telescope, which was invented seven years after his death.
Rumors of death by mercury poisoning, whether deliberate or accidental, arose shortly after the astronomer's demise.
One theory speculated that Brahe was murdered by a distant cousin, the Swedish nobleman Erik Brahe, on the orders of the Danish king Christian IV, enraged over rumors that the astronomer, a father of eight, was having an affair with the king's mother.
Another theory identified Brahe's assistant, German astronomer Johannes Kepler, as a possible murder suspect.
It wasn't until after Brahe's sudden death that Kepler gained full access to a treasure trove of precise stellar and planetary observations that finally enabled him to come up with the laws of planetary motion.
In the past century, the mercury poisoning theory received apparent corroboration from repeated tests on samples of hair taken from Brahe's long moustache, which were removed from the astronomer's grave in another exhumation in 1901.
"To definitively prove or disprove these much debated theories, we took samples from Tycho Brahe's beard, bones and teeth when we exhumed his remains in 2010. While our analyses of his teeth are not yet complete, the scientific analyses of Tycho Brahe's bones and beard are," Jens Vellev, an archaeologist at Aarhus University in Denmark who is heading the research project, said in a statement.
The Danish-Czech team measured the concentration of mercury using three different quantitative chemical methods.
"All tests revealed the same result: that mercury concentrations were not sufficiently high to have caused his death," Kaare Lund Rasmussen, associate professor of chemistry at the University of Southern Denmark, said.
In particular, chemical analysis of the bones indicated that "Tycho Brahe was not exposed to an abnormally high mercury load in the last five to ten years of his life," Rasmussen said.
According to the researchers, the description given by Kepler of Brahe's death at the age of 54 is compatible with the progression of a severe bladder infection.
Another widely told -- although not very credible -- story reported that 11 days before his death, Brahe attended a royal banquet. There, his bladder burst since he was too polite to leave the table and go to the toilet.
During their investigation, the researchers also shed light on Brahe's famous prosthetic nose, reputedly made of gold and silver.
The astronomer lost part of the nose in a duel he fought in his youth in 1566 -- the matter of dispute wasn't a woman, but an obscure mathematical point.
Read more at Discovery News
Lonesome George: Not Last of His Kind
Lonesome George, a giant tortoise who died this past summer, was thought to be the last of his species. DNA evidence now, however, suggests more of his kind might still exist.
The species, Chelonoidis abingdoni, native to Pinta Island in the Galapagos Islands, could have arisen from tortoises thrown overboard there by 19th century sailors.
For the study, published in the journal Biological Conservation, Yale researchers collected DNA from more than 1,600 giant tortoises. They discovered that 17 shared recent ancestry with Lonesome George. The 17 tortoises are hybrids, but evidence suggests a few might be the offspring of a purebred C. abingdoni parent.
Since five of the tortoises are juveniles, their parents, and hopefully others, may still live on the rocky cliffs of Isabela Island in an area called Volcano Wolf.
“Our goal is to go back this spring to look for surviving individuals of this species and to collect hybrids,” Adalgisa “Gisella” Caccone, senior research scientist in Yale University's Department of Ecology and Evolutionary Biology and senior author on the study, was quoted as saying in a press release. “We hope that with a selective breeding program, we can reintroduce this tortoise species to its native home.”
Volcano Wolf is 37 miles from Pinta Island, where locals probably hunted Lonesome George's kin out of existence. This often happened on islands in the past, unfortunately. With certain species limited to just those locations, and arriving humans facing scant food choices, the combination proved to be a perfect storm for extinction. Even other hominids, like Homo erectus, might have bumped off species in such a manner.
The distance from Pinta, however, provides a clue regarding the latest findings, not to mention Lonesome George.
Volcano Wolf is next to Banks Bay, where in the 19th century sailors of naval and whaling vessels discarded giant tortoises collected from other islands when they were no longer needed for food.
A previous genetic analysis of the same population had identified tortoises with genetic ancestry from C. elephantopus, a species from Floreana Island that had been hunted to extinction in its home range. These marooned tortoises then mated with indigenous tortoises, the researchers suggest.
We've been following these expeditions for a while, so hopefully we'll soon have other good news to report if the researchers can collect hybrids and possibly even find a purebred Lonesome George relative.
Read more at Discovery News
'White Widows' May Spawn Supernovae
A new thought experiment by J. Craig Wheeler -- an astronomer at the University of Texas, Austin -- offers a possible alternative scenario for the origins of Type Ia supernovae. The paper appeared last month in The Astrophysical Journal.
Most supernovae occur when a single, sufficiently massive star dies: one whose collapsing core exceeds about 1.4 times the mass of the sun, a threshold known as the Chandrasekhar limit. Stars whose cores fall below that mass usually end their lives as white dwarf stars.
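The Chandrasekhar limit has a standard textbook expression (not given in the article), where $\mu_e$ is the mean molecular weight per electron:

```latex
M_{\mathrm{Ch}} \approx 1.46 \,\left(\frac{2}{\mu_e}\right)^{2} M_{\odot}
```

For a carbon-oxygen white dwarf, $\mu_e \approx 2$, which recovers the roughly 1.4 solar masses quoted above.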
Other supernovae occur in binary star systems, usually with one white dwarf and one normal star (a "single-degenerate" model). The latter sloughs off matter onto its white dwarf companion, which explodes when its mass hits the Chandrasekhar limit, leaving behind the normal stellar companion, which must live out the rest of its life alone, pining for its lost partner.
Wheeler has studied these exploding stars for 40 years and pioneered this idea back in 1971. Ever since, astronomers have bandied about various theories of what kind of star the companion in such a system might be.
Last month, astronomers announced that one of the most famous supernovae, SN 1006, may have resulted from a collision or merger of two white dwarf stars (a "double-degenerate" model). Such an event would also produce a supernova explosion -- only one that leaves no trace, other than the glowing remnant we see today.
Wheeler has proposed a third option, asserting that observational data from actual Type Ia supernovae don't support either of those two models satisfactorily. Specifically, current models of supernova spectra -- the light signatures from these very bright stars -- as they change over time don't match the data from actual supernovae.
So Wheeler suggests a modification of the first model is needed, in which a white dwarf pairs with a so-called M dwarf star in a binary system.
M dwarfs are quite common, but they are also very dim, and might not be detected by the large telescopes astronomers use to observe the remnants that remain after a supernova explosion. "One thing blows up as a supernova, the other thing's got to be left behind," Wheeler said via press release. "Where is it? We don't see it."
In this case, seeing nothing might be something: a small red M dwarf star too faint to be detected, or perhaps devoured entirely by its companion before the latter went supernova. That's why Wheeler has dubbed his model the "white widow system" -- similar to black widow systems, in which a neutron star devours its companion star.
Read more at Discovery News
A Galaxy Far, Far Away is Furthest in Universe
A new celestial wonder has stolen the title of most distant object ever seen in the universe, astronomers report.
The new record holder is the galaxy MACS0647-JD, which is about 13.3 billion light-years away. The universe itself is only 13.7 billion years old, so this galaxy's light has been traveling toward us for almost the whole history of space and time.
Astronomers spotted the object using NASA's Hubble and Spitzer space telescopes, with the aid of a naturally occurring cosmic zoom lens as well. This lens is a huge cluster of galaxies whose collective gravity warps space-time, producing what's called a gravitational lens. As the distant galaxy's light traveled through this lens on its way to Earth, it was magnified.
"This cluster does what no manmade telescope can do," Marc Postman of the Space Telescope Science Institute in Baltimore, Md., said in a statement unveiling the discovery today (Nov. 15). "Without the magnification, it would require a Herculean effort to observe this galaxy." Postman leads the Cluster Lensing And Supernova Survey with Hubble (CLASH), which performed the study.
The distant galaxy is just a tiny blob, and is much smaller than our own Milky Way, researchers said. The object is very young, and it also dates from an epoch when the universe itself was still a baby, just 420 million years old, or 3 percent of its present age.
The mini galaxy is less than 600 light-years wide; for comparison, the Milky Way is 150,000 light-years across. Astronomers think MACS0647-JD may eventually combine with other small galaxies to create a larger whole.
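The proportions quoted above check out with simple arithmetic, using the article's rounded figures:

```python
# Quick sanity check of the numbers in the article (all values approximate).
age_of_universe_gyr = 13.7
epoch_gyr = 0.42              # universe's age when this galaxy emitted its light
galaxy_width_ly = 600
milky_way_width_ly = 150_000

age_fraction = epoch_gyr / age_of_universe_gyr
size_ratio = milky_way_width_ly / galaxy_width_ly

print(f"Universe was {age_fraction:.0%} of its present age")
print(f"The Milky Way is {size_ratio:.0f} times wider")
```

This yields about 3 percent of the universe's present age and a Milky Way some 250 times wider than MACS0647-JD.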
"This object may be one of many building blocks of a galaxy," said the Space Telescope Science Institute's Dan Coe, who led the study of this particular galaxy. "Over the next 13 billion years, it may have dozens, hundreds, or even thousands of merging events with other galaxies and galaxy fragments."
Astronomers are continually spotting ever more distant galaxies as their observation techniques and tools improve. The previous record holder was the galaxy SXDF-NB1006-2, which lies 12.91 billion light-years from Earth. That object was sighted by the Subaru and Keck telescopes in Hawaii.
Read more at Discovery News
Nov 15, 2012
Neanderthals May Have Sailed to Crete
Neanderthals, or even older Homo erectus ("Upright Man"), might have sailed around the Mediterranean, stopping at islands such as Crete and Cyprus, new evidence suggests.
The evidence suggests that these hominid species had considerable seafaring and cognitive skills.
"They had to have had boats of some sort; unlikely they swam," said Alan Simmons, lead author of a study about the find in this week's Science. "Many of the islands had no land-bridges, thus they must have had the cognitive ability to both build boats and know how to navigate them."
Simmons, a professor of anthropology at the University of Nevada, added that there is no direct evidence for boats dating back to over 100,000 years ago. If they were built then, the wood or other natural materials likely eroded. Instead, other clues hint that modern humans may not have been the first to set foot on Mediterranean islands.
On Crete, for example, tools such as quartz hand-axes, picks and cleavers are associated with deposits that may date to 170,000 years ago. Previously, this island, as well as Cyprus, was thought to have first been colonized about 9,000 years ago by late Neolithic agriculturalists with domesticated resources.
Excavations at an Akrotiri site on Cyprus have turned up ancient thumbnail scrapers and other tools dating to beyond 9,000 years ago. There is also a huge assemblage of fossils of a dwarf pygmy hippopotamus, which might have been good eats for the earlier islanders. It's possible they hunted the small, plump animal to extinction.
"Conventional wisdom used to be that none of these islands had too much settlement prior to the Neolithic because the islands were too impoverished to have supported permanent occupation," Simmons said. "This likely is untrue. Hunters and gatherers can be pretty creative."
Permanent settlements, however, appear to have happened after these suspected first forays into the islands.
Other evidence outside of the Mediterranean supports that pre-Neolithic humans could sail. Simmons, for instance, points out that these individuals "must have been able to cross substantial expanses of sea to reach Australia by at least 50,000 years ago."
"Additionally," he continued, "findings from the Indonesian Wallacea islands suggest the presence of hominins as early as 1.1 million years ago on Flores Island."
Modern humans today quibble about which culture was the first to discover this or that country, but the truth is that many lands were probably first discovered and/or settled by hominid species that were not Homo sapiens.
As for what happened when modern humans arrived, it is possible that the different populations were not entirely put off by each other.
"If the Crete and likely Homo erectus or the other Ionian (Neanderthal) evidence is ultimately verified, it is possible that some mating could have occurred with later modern humans emerging from Africa, but this likely occurred around 100,000 years ago," Simmons said, adding that evidence for island occupation at that particular time is scant.
Bernard Knapp, a professor at the Cyprus American Archaeological Research Institute, told Discovery News, "The very earliest documented presence of people on these (Mediterranean) islands should be termed 'exploitation.' Once people came to stay, we should speak of 'permanent settlement.' 'Colonization' is a loaded term."
Read more at Discovery News
Great White Shark Origins Found
Great white sharks are among the world's largest living predatory animals, and now we have a better idea of their ancestors and how these toothy media superstars evolved.
Great whites turn out not to be closely related to the extinct Carcharocles megalodon, the largest carnivorous shark that ever lived. Instead, they likely descended from broad-toothed mako sharks.
As you can see from the above photo, however, these sharks back in the day had impressive mouths and teeth too. The well-preserved fossil from Peru is the only intact partial skull ever found of a white shark that lived about 4.5 million years ago.
The species was named Carcharodon hubbelli for Gordon Hubbell, who donated the fossil to the Florida Museum of Natural History on the UF campus. The fossil jaw contains 222 teeth, some in rows up to six teeth deep.
"The impetus of this project was really the fact that Gordon Hubbell donated a majority of his fossil shark collection to the Florida Museum," author Dana Ehret, a lecturer at Monmouth University in New Jersey who conducted research for the study as a University of Florida graduate student, said in a press release. "Naming the shark in his honor is a small tip of the hat to all the great things he has done to advance paleontology."
(Photo: Ehret studying the fossil)
He continued, "We can look at white sharks today a little bit differently ecologically if we know that they come from a mako shark ancestor."
That ancestor is 2 million years older than previously suspected, based on recalibrated dating.
Ehret said, "That 2-million-year pushback is pretty significant because in the evolutionary history of white sharks, that puts this species in a more appropriate time category to be ancestral or kind of an intermediate form of white shark."
He made the connection between modern great whites and C. hubbelli by comparing the physical shapes of shark teeth to one another. While modern white sharks have serrations on their teeth for consuming marine mammals, mako sharks do not have serrations because they primarily feed on fish. Hubbell's white shark has coarse serrations indicative of a transition from broad-toothed mako sharks to modern white sharks.
Read more at Discovery News
Animals Get Bored, Too
It is easy to look at caged or cooped up animals and think that, like people, they must get bored with such a confined existence.
While it’s impossible to know what other creatures are thinking, a new study is the first to experimentally demonstrate signs of boredom in animals that aren't given much to do.
For the study, researchers from the University of Guelph, Canada, worked with 29 captive mink. Some animals were housed in plain wire-mesh cages, where they lived for seven months before the experiments began.
Another group lived in identical cages but they could access a tunnel that took them to an even bigger space that included opportunities for stimulating activities, including shelf-like structures for climbing, rubber dog toys and other objects for play, as well as water for wading and dipping their heads in. Every month, these animals got new stuff.
When it finally came time to start the experiments, the researchers presented each animal with a series of new experiences, including puffs of air, scented candles and moving toothbrushes.
It didn't matter whether the stimulus was rewarding, stressful or neutral for the animals, the researchers report today in the journal PLOS ONE. Animals raised in boring cages showed more interest in new things.
They also snacked more on food made available during the experiments, even though they weren't hungry, and spent more time lying around awake than the animals given a more enriching home life.
All of those behaviors, the researchers concluded, are potential signs of feeling bored.
Boredom is a hard emotion to define. Among people, responses to it vary from apathy to depression to immersion in extreme adventures. Still, the researchers say, their study is a valuable first step in figuring out if and how various animals get bored and what can be done to make their lives more satisfying.
Read more at Discovery News
Higgs Boson Likely a 'Boring' Boson
The thing with physicists is that they love discovering something unexpected, strange or exotic. This mindset is what makes physics, and indeed all science disciplines, awesome. But in light of the grand announcement of the probable discovery of the elusive Higgs boson in July, it looks like the particle that was discovered is likely a "standard" Higgs boson. As in, it's a little bit boring.
Of course, "boring" is a relative term. The story of the hunt for the Higgs -- the 'exchange particle' that endows matter with mass -- reads like a Dan Brown novel, culminating in the construction of the biggest, boldest and most complex machine mankind has ever conceived: the Large Hadron Collider (LHC). There's more twists, turns and subplots than you can shake a lepton at.
But the famous Higgs boson, which until recently was a purely theoretical particle appearing only in equations, looks like it comes from the Standard Model of physics and not something more exotic.
The Standard Model is a set of equations and particles that underpin our known Universe. It's a recipe book of sorts and works like this: If you collide particle A with particle B you get particle C plus some energy -- we know what will come out of a particle interaction even before the interaction takes place. There's nothing unexpected in this recipe book; so it doesn't work like this: If you collide particle A with particle B you get particle Q and -- what the #$%@?! -- a small elephant playing with a black hole!
The latter scenario would violate our known laws of physics, suggesting something more exotic is going on. In that case, there would be some kind of new physics, something beyond the Standard Model at play -- perhaps an exotic result from the LHC would provide evidence of "supersymmetry." One interpretation of supersymmetry suggests there may be an entire family of Higgs bosons that cannot be explained by the Standard Model. However, supersymmetry has recently been dealt a "hospitalizing" blow.
So particle physicists have been wondering: is the signal of the Higgs representative of a Standard Model Higgs (i.e. a particle that is completely predicted by the equations described by the Standard Model), or is some exotic physics at play?
In new results presented at a particle physics conference in Kyoto, Japan, on Wednesday, physicists of the ATLAS and CMS experiments at the LHC revealed that there's little strange or unexpected with the behavior of the Higgs boson they have detected in their data. This is basically confirmation that the equations formulated by Peter Higgs and his colleagues over 50 years ago correctly describe the Higgs boson. Few signs of exotic physics have, so far, been detected.
"The (Higgs boson) is still there, and it's certainly staying consistent with the Standard Model," said Joe Incandela, lead physicist of the CMS detector team.
In July, when LHC physicists initially made their announcement, there were signs that the particle they'd detected could have some exotic properties.
The Higgs boson is naturally very unstable -- one of the reasons why it has been so elusive. When it decays, the Standard Model predicts that it should produce a certain number of tau particles (heavy cousins of the electron). But the data suggest an excess of gamma particles was being generated. At the time, this discrepancy was too small to draw any conclusions, so more experiments were carried out. Wednesday's announcement didn't modify this slight discrepancy, but, again, no conclusions can be drawn.
Read more at Discovery News
Nov 14, 2012
Mayan Bones Reveal Painful End
Evidence of the miserable life lived by the Maya during the Spanish conquest of the 16th century has emerged in an ancient settlement of Mexico's east coast, as archaeologists unearthed dozens of infant skeletons with signs of malnutrition and acute anemia.
Found in the recently opened archaeological site of San Miguelito, in the middle of the hotel zone of Quintana Roo, near Cancun, the human burials were excavated within 11 housing buildings dating to the Late Postclassic Mayan Period (1200-1550).
Archaeologists of the National Institute of Anthropology and History (INAH) estimate that at least 30 burials belong to infants between the ages of three and six. The majority suffered from hunger and most likely died of related diseases.
The 16th-century skeletons point to "a high infant mortality rate, probably derived from poor health and malnutrition," archaeologist Sandra Elizalde said in a statement.
"Some infants were accompanied by very humble offerings, typical of an impoverished society. One of the burials contained a hummingbird-shaped figurine and another that of an old woman with perfectly detailed wrinkles on her face," Elizalde said.
Strategically located at the entrance of the Nichupte Lagoon, San Miguelito was an important trading center in pre-Hispanic times (1200-1350 AD).
"The population exploited marine resources and the place thrived," archaeologist Adriana Velazquez Morlet, director of the INAH Center in Quintana Roo, said.
Pre-Hispanic structures built at that time in the settlement included the 26-foot-high by 39-foot-wide Great Pyramid, and four other architectural complexes called South, Dragons, Chaac and North, where most of the burials were unearthed.
Things radically changed when the Spanish arrived in the Yucatan Peninsula.
"The conquest was different from the rest of Mesoamerica because there were many scattered cities," Velazquez Morlet said.
"It took the Spanish 20 years to conquer them all and when they did, they settled in the west (Yucatan and Campeche). All the eastern part of Mesoamerica suffered the consequences of severed Mayan trade routes," she added.
Inevitably, San Miguelito was abandoned.
In addition to the infant burials, the archaeologists unearthed 17 other burials -- some belonging to adult individuals, while others are so fragmented they cannot be identified.
Read more at Discovery News
Supernova Explosions Make Nuclear Pasta
When dying stars explode into supernovae, the outward shock wave is preceded by a kind of "bounce" -- matter and elementary particles collapse toward the star's core so tightly that they reach a critical threshold of density, such that nuclear forces kick in and push back against that collapse. In the end, all that's left is a glowing remnant of dust and gas.
Yet much of the physics that occurs during this process is not yet well understood, although scientists believe that at some point during the collapsing stage, the ultra-dense matter organizes itself into unusual shapes dubbed "nuclear pasta."
A new computer simulation by physicists at the University of Tennessee in Knoxville, published last month in Physical Review Letters, has identified yet another new pasta shape, or phase, which they hope may one day shed some light on the very complicated physics behind supernova explosions -- such as the role of neutrinos during such events.
The pasta shapes formed by nuclear particles when a supernova is developing can be rods, slabs, or bubbles (round holes, or holes shaped like cylinders). It's a category known among physicists as "frustrated matter," which also includes ferromagnets, glasses, soft solids, and the like.
This phenomenon occurs when different forces clash within a material system, unable to find a balance, making the material unstable, i.e., "frustrated."
In the case of a supernova, those competing forces are the nuclear attraction and Coulomb repulsion -- the physical law stating that the force of attraction or repulsion between two electrically charged bodies is directly proportional to the strength of the electrical charges (and inversely proportional to the square of the distance between them.)
Basically, you have positively charged nucleons that repel each other, at least until they get too close, at which point the strong nuclear force kicks in and the nucleons start to attract each other. The result is the strange pasta shapes predicted by prior computer models. Frustrated matter may account for as much as 15-20 percent of the total matter during the "bounce" stage of a supernova.
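Written out, the repulsive side of that competition is just the textbook Coulomb's law the earlier paragraph paraphrases (the symbols and constant here are the standard ones, not figures from the study itself):

```latex
F \;=\; k_e \,\frac{q_1 q_2}{r^2},
\qquad k_e \approx 8.99 \times 10^{9}\ \mathrm{N\,m^2\,C^{-2}}
```

The force falls off with the square of the separation r, while the strong nuclear attraction only operates at very short range -- which is exactly the mismatch that leaves the matter "frustrated."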
Read more at Discovery News
Free-Floating Orphan Planet Spotted
A Jupiter-class planet that orbits no star, but floats freely in space, has been directly observed. The phenomenon is believed to be common, but is very difficult to detect.
Astronomers were on the hunt for so-called brown dwarf stars, sometimes referred to as "failed stars" since they grow like stars from collapsing balls of gas and dust, but aren't massive enough to get thermonuclear reactions started in their cores.
But when CFBDSIR2149, located about 130 light-years from Earth, came into view, scientists suspected they had nabbed something else.
Because the object is too dim to observe in optical light, astronomers analyzed 2149's infrared emissions to learn more about its chemical composition, from which they could determine its mass and temperature and then approximate its age.
They found the object to be between 50 million and 120 million years old, with a temperature of about 400 degrees Celsius (752 degrees Fahrenheit), and a mass four to seven times that of Jupiter.
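The article's two temperature figures are just a unit conversion of each other; a quick sketch confirms they agree:

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

# The article's numbers check out: 400 C is 752 F.
print(c_to_f(400.0))  # 752.0
```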
"It's a portrait of Jupiter in the first million years of its life," astrophysicist Étienne Artigau, with the University of Montreal, told Discovery News.
"It was not something that was unexpected, but to find one is very challenging," he added.
Scientists' first clue that something unusual might be going on was the company that 2149 travels with -- a very young group of stars. Presuming the object formed along with the stars, it cooled rapidly, which meant it must be small.
It also is possible that the object is a brown dwarf star, albeit a very small one, that happens to find itself near the AB Doradus Moving Group star cluster.
But a separate analysis projected an 87 percent chance 2149 is moving with the star cluster. That would mean that it either formed away from a parent star or was booted out by gravitational forces from its original star system.
The discovery follows indirect observations of 10 free-flying, Jupiter-sized planets at the center of the Milky Way found by a technique called gravitational microlensing, which occurs when a star or planet passes in front of another, more distant object.
Gravity from the mass of the closer body warps the light coming from the background star, causing it to brighten for some period of time. Small bodies, like planets, cause less distortion than bigger objects like stars.
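The brightening described above has a standard closed form for the simplest case of a single point-like lens; a minimal sketch (the formula is the textbook point-lens result, not something taken from this survey):

```python
import math

def magnification(u: float) -> float:
    """Point-source, point-lens microlensing magnification.

    u is the angular separation between lens and background source,
    in units of the lens's Einstein radius. Smaller u (closer
    alignment) means stronger brightening; u = 1 gives the classic
    ~1.34x detection threshold.
    """
    return (u**2 + 2.0) / (u * math.sqrt(u**2 + 4.0))

# Closer alignment brightens the background star far more:
print(magnification(1.0))  # ~1.34
print(magnification(0.1))  # ~10
```

This is also why small bodies cause less distortion: a planet's Einstein radius is much smaller than a star's, so the event is briefer and harder to catch.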
Read more at Discovery News
Are We Dumber Than Ancient Mayans?
There has been a bit of news about a new 2,000-year climate record from the Yok Balum Cave in Belize that, according to a press release, "shows how Maya political systems developed and disintegrated in response to climate change." I was pleased to see an article about it even made it into my local newspaper, the Albuquerque Journal.
But after reading it I got to thinking about a recent writing assignment I got from the Santa Fe Institute (SFI) on the demise of the Classic Maya civilization. One of the take-home messages I got from talking with the SFI folks is that nothing is ever as simple as some of the press on this new climate record suggests. These researchers are experts at modeling complex adaptive systems of all kinds -- from cells to civilizations. They love the stuff that gives the rest of us headaches: big, messy, complicated systems that change a lot and have way too many variables. It's like candy to them.
Their take on the demise of the Classic Maya civilization in the Central Maya Lowlands in the ninth century A.D. -- reported in the August 20, 2012, Proceedings of the National Academy of Sciences -- is that while climate was important, it was not the only factor.
“There is no monolithic period of collapse, but a lot of variability,” co-author and President of the Santa Fe Institute Jerry Sabloff told me. “What we see are many variable patterns. The only way to explain the variability is to take a complex systems view.”
Sabloff and Arizona State University geographer B. L. Turner wove together a complex, data-rich history of Classic Maya agricultural practices and the demands on ecosystem services that stressed the environment and made it vulnerable to trouble when one particular drought hit.
In other words, maybe the drought wouldn't have done it if the Maya had managed some other things differently. Of course, the Classic Maya probably had limited ability to assess the long-term effects of their farming practices and likewise perhaps had no reason to believe the climate could change so dramatically. It was just bad luck, you might say.
Read more at Discovery News
Nov 13, 2012
Road to Language Learning Is Iconic
Languages are highly complex systems and yet most children seem to acquire language easily, even in the absence of formal instruction. New research on young children's use of British Sign Language (BSL) sheds light on one mechanism -- iconicity -- that may play an important role in children's ability to learn language.
For spoken and written language, the arbitrary relationship between a word's form -- how it sounds or how it looks on paper -- and its meaning is a particularly challenging feature of language acquisition. But one of the first things people notice about sign languages is that signs often represent aspects of meaning in their form. For example, in BSL the sign EAT involves bringing the hand to the mouth just as you would if you were bringing food to the mouth to eat it.
In fact, a high proportion of signs across the world's sign languages are similarly iconic, connecting human experience to linguistic form.
Robin Thompson and colleagues David Vinson, Bencie Woll, and Gabriella Vigliocco at the Deafness, Cognition and Language Research Centre (DCAL) at University College London in the United Kingdom wanted to examine whether this kind of iconicity might provide a key to understanding how children come to link words to their meaning.
Their findings are published in Psychological Science, a journal of the Association for Psychological Science.
The researchers looked at data from 31 deaf children who were being raised in deaf BSL signing families in the United Kingdom. Parents indicated the number of words understood and produced by their children between the ages of 8 and 30 months. The researchers decided to focus on 89 specific signs, examining children's familiarity with the signs as well as the iconicity and complexity of the signs.
The findings reveal that younger (11-20 months) and older (21-30 months) children comprehended and produced more BSL signs that were iconic than those that were less iconic. And the benefit of iconicity seemed to be greater for the older children. Importantly, this relationship did not seem to depend on how familiar, complex or concrete the words were.
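The kind of analysis described here -- asking whether iconicity still predicts sign knowledge once familiarity and complexity are accounted for -- can be sketched as a multiple regression. The data below are synthetic stand-ins, not the DCAL dataset, and the effect sizes are invented for illustration:

```python
import numpy as np

# Synthetic stand-in data: 89 signs with ratings for iconicity,
# familiarity, and complexity, plus a simulated outcome ("how many
# children produce the sign") driven mostly by iconicity.
rng = np.random.default_rng(0)
n_signs = 89
iconicity = rng.normal(size=n_signs)
familiarity = rng.normal(size=n_signs)
complexity = rng.normal(size=n_signs)
produced = 0.8 * iconicity + 0.3 * familiarity + rng.normal(scale=0.1, size=n_signs)

# Ordinary least squares: does iconicity predict production once
# familiarity and complexity are also in the model?
X = np.column_stack([np.ones(n_signs), iconicity, familiarity, complexity])
coef, *_ = np.linalg.lstsq(X, produced, rcond=None)
print(f"iconicity effect: {coef[1]:.2f}, familiarity effect: {coef[2]:.2f}")
```

Because the simulated outcome depends on iconicity even with familiarity held constant, the fitted iconicity coefficient stays large -- the same logic by which the researchers ruled out familiarity, complexity, and concreteness as explanations.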
Together, these findings suggest that iconicity could play an important role in language acquisition.
Thompson and colleagues hypothesize that iconic links between our perceptual-motor experience of the world and the form of a sign may provide an imitation-based mechanism that supports early sign acquisition. These iconic links highlight motor and perceptual similarity between actions and signs such as DRINK, which is produced by tipping a curved hand to the mouth and represents the action of holding a cup and drinking from it.
The researchers emphasize that these results can also be applied to spoken languages, in which gestures, tone of voice, inflection, and face-to-face communication can help make the link between words and their meanings less arbitrary.
Read more at Science Daily
Computer Memory Could Increase Fivefold from Advances in Self-Assembling Polymers
The storage capacity of hard disk drives could increase by a factor of five thanks to processes developed by chemists and engineers at The University of Texas at Austin.
The researchers' technique, which relies on self-organizing substances known as block copolymers, was described this week in an article in Science. It's also being given a real-world test run in collaboration with HGST, one of the world's leading innovators in disk drives.
"In the last few decades there's been a steady, exponential increase in the amount of information that can be stored on memory devices, but things have now reached a point where we're running up against physical limits," said C. Grant Willson, professor of chemistry and biochemistry in the College of Natural Sciences and the Rashid Engineering Regents Chair in the Cockrell School of Engineering.
With current production methods, zeroes and ones are written as magnetic dots on a continuous metal surface. The closer together the dots are, the more information can be stored in the same area. But that tactic has been pretty much maxed out. The dots have now gotten so close together that any further increase in proximity would cause them to be affected by the magnetic fields of their neighboring dots and become unstable.
"The industry is now at about a terabit of information per square inch," said Willson, who co-authored the paper with chemical engineering professor Christopher Ellison and a team of graduate and undergraduate students. "If we moved the dots much closer together with the current method, they would begin to flip spontaneously now and then, and the archival properties of hard disk drives would be lost. Then you're in a world of trouble. Can you imagine if one day your bank account info just changed spontaneously?"
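The density figures Willson quotes translate directly into dot spacing. A back-of-the-envelope sketch (the pitches below are arithmetic consequences of the quoted areal densities, assuming a square grid, not numbers from the paper):

```python
import math

INCH_NM = 2.54e7  # one inch expressed in nanometers

def dot_pitch_nm(bits_per_sq_inch: float) -> float:
    """Center-to-center dot spacing on a square grid at a given areal density."""
    area_per_bit_nm2 = INCH_NM ** 2 / bits_per_sq_inch
    return math.sqrt(area_per_bit_nm2)

current = dot_pitch_nm(1e12)   # ~1 terabit per square inch, today's figure
fivefold = dot_pitch_nm(5e12)  # the fivefold increase discussed here
print(f"dot pitch: {current:.1f} nm now, {fivefold:.1f} nm at 5x density")
```

At one terabit per square inch the dots sit roughly 25 nm apart; a fivefold density gain pushes the pitch down near 11 nm, which is why the dots must be magnetically isolated from their neighbors.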
There's a quirk in the physics, however. If the dots are isolated from one another, with no magnetic material between them, they can be pushed closer together without destabilization.
This is where block copolymers come in. At room temperature, coated on a disk surface, they don't look like much. But if they're designed in the right way, and given the right prod, they'll self-assemble into highly regular patterns of dots or lines. If the surface onto which they're coated already has some guideposts etched into it, the dots or lines will form into precisely the patterns needed for a hard disk drive.
This process, which is called directed self-assembly (DSA), was pioneered by engineers at the University of Wisconsin and the Massachusetts Institute of Technology.
When Willson, Ellison and their students began working with directed self-assembly, the best anyone in the field had done was to get the dots small enough to double the storage density of disk drives. The challenge has been to shrink the dots further and to find processing methods that are compatible with high-throughput production.
The team has made great progress on a number of fronts. They've synthesized block copolymers that self-assemble into the smallest dots in the world. In some cases they form into the right, tight patterns in less than a minute, which is also a record.
"I am kind of amazed that our students have been able to do what they've done," said Willson. "When we started, for instance, I was hoping that we could get the processing time under 48 hours. We're now down to about 30 seconds. I'm not even sure how it is possible to do it that fast. It doesn't seem reasonable, but once in a while you get lucky."
Most significantly, the team has designed a special top coat that goes over the block copolymers while they are self-assembling.
"I've been fortunate enough to be involved in the experimental work of the top coat project from its inception all the way to our final results," said Leon Dean, a senior chemical engineering major and one of the authors on the Science paper. "We've had to develop an innovative spin-on top coat for neutralizing the surface energy at the top interface of a block copolymer film."
This top coat allows the polymers to achieve the right orientation relative to the plane of the surface simply by heating.
"The patterns of super small dots can now self-assemble in vertical or perpendicular patterns at smaller dimensions than ever before," said Thomas Albrecht, manager of patterned media technology at HGST. "That makes them easier to etch into the surface of a master plate for nanoimprinting, which is exactly what we need to make patterned media for higher capacity disk drives."
Willson, Ellison and their students are currently working with HGST to see whether these advances can be adapted to their products and integrated into a mainstream manufacturing process.
Read more at Science Daily
Beer Foam Gene Found
For beer drinkers, a frothy head is one of the key features of an ideal brew. Now, scientists say they have found a gene in yeast that makes the protein responsible for producing that beloved foam.
"This report represents the first time that a brewing yeast foaming gene has been cloned and its action fully characterized," wrote the Spanish and Australian researchers in the Journal of Agricultural and Food Chemistry.
The gene, called CFG1, directs the manufacture of a protein in the cell walls of the common beer-making yeast Saccharomyces cerevisiae. The proteins, which are released during fermentation, are averse to water, so they orient themselves on the insides of gas bubbles. This adds surface tension to the bubbles, helps them resist liquid drainage and, overall, stabilizes the foam.
The formation of foam in beer depends not just on proteins but also on all sorts of other factors, including metallic ions and long carbohydrate molecules.
But with the discovery of the first gene in brewing yeast that is involved in producing and stabilizing beer foam, science has taken another step toward creating a precise molecular recipe for the perfect beer.
Read more at Discovery News
Quasars Help Shed Light on Dark Energy Mystery
There's exciting news today for those following the quest to comprehend dark energy -- the mysterious repulsive force that fills the universe, causing its expansion to accelerate. New results from the Baryon Oscillation Spectroscopic Survey (BOSS) -- part of the Sloan Digital Sky Survey (SDSS-III) -- relying on data from quasars have enabled physicists to produce a detailed 3D "map" of the early universe a whopping 11.5 billion years ago.
That's the period where physicists think dark energy was not yet dominant over the mutual gravitational pull of all the matter in the universe. The new BOSS measurements indicate that before that critical point 11 billion years ago, the expansion of the universe was actually slowing down. These findings should shed light on that critical transition point where dark energy started to dominate.
"If we think of the universe as a roller coaster, then today we are rushing downhill, gaining speed as we go," said Nicolas Busca of the French Centre National de la Recherche Scientifique (CNRS), one of the lead authors on the study, in a press release. "Our new measurement tells us about the time when the universe was climbing the hill -- still being slowed by gravity."
Some 80 years after Edwin Hubble and Georges Lemaitre made the first measurements of how fast our nearby universe was expanding, the BOSS collaboration has done the same for our universe as it was 11 billion years ago.
The revolutionary 1998 discovery that led to the theory of dark energy relied on studying the red shifts of bright light from supernovae. BOSS, in contrast, looks at something called baryonic acoustic oscillation (BAO). This phenomenon is the result of pressure waves (sound, or acoustic waves) propagating through the early universe in its earliest hot phase, when everything was just one big primordial soup.
Those sound waves created pockets where the density differed in regular intervals or periods, a "wiggle" pattern indicative of oscillation, or vibration. Then the universe cooled sufficiently for ordinary matter and light to go their separate ways, the former condensing into hydrogen atoms.
We can still see signs of those variations in temperature in the cosmic microwave background (CMB), thereby giving scientists a basic scale for BAO -- a "standard ruler," if you will, to compare the size of the universe at various points as it evolved through time.
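The "standard ruler" logic can be sketched numerically: given a cosmological model, the comoving distance out to a redshift fixes the angle a roughly 150 Mpc BAO feature should subtend on the sky. The parameter values below (flat universe, matter density 0.3, Hubble constant 70) are illustrative assumptions, not the BOSS fit:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (illustrative)
OMEGA_M = 0.3        # matter density in a flat universe (illustrative)
R_BAO = 150.0        # BAO scale in comoving Mpc (approximate)

def comoving_distance(z: float, steps: int = 10000) -> float:
    """Comoving distance in Mpc for a flat Lambda-CDM universe,
    integrating dz / E(z) with the trapezoid rule."""
    zs = np.linspace(0.0, z, steps)
    e = np.sqrt(OMEGA_M * (1 + zs) ** 3 + (1 - OMEGA_M))
    integrand = 1.0 / e
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))
    return (C_KM_S / H0) * integral

# At z ~ 2.3, roughly where the BOSS Lyman-alpha measurement sits:
d = comoving_distance(2.3)
theta_deg = np.degrees(R_BAO / d)
print(f"comoving distance ~ {d:.0f} Mpc, BAO angle ~ {theta_deg:.2f} degrees")
```

Comparing the measured angular size of the BAO feature against this prediction at each redshift is what lets the survey trace the expansion history over time.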
Earlier this year, BOSS announced the first results from their collaboration, revealing the most precise measurements ever made of the large-scale structure of the universe between five to seven billion years ago.
The results were significant because that time frame is the era when dark energy "turned on."
That earlier survey looked at galaxies, but as useful as the results were, the galaxy-measuring approach was not sufficient to map structures as far back as 11.5 billion years, because those galaxies are just too faint.
So BOSS scientists turned to a cunning new technique that relied on the light from quasars to measure the clumping of hydrogen gas between galaxies in the distant universe.
"Quasars" are short for "quasi-stellar radio sources," regions around the black holes at the center of massive galaxies that give off a great deal of radiation. Black holes provide the power source. As matter -- gas and dust, primarily -- wanders near a black hole, it doesn't cross the event horizon and fall into the hole directly. Instead, it forms an accretion disk, and the light is the result of the energy produced as the black hole gobbles up that gas and dust.
Quasars have long been a boon to astronomers: images of a double quasar helped confirm Einstein's prediction of gravitational lensing in 1979. And in September, quasar data from the Massive Compact Halo Objects (MACHO) project were used as cosmic mileposts to map the structure and expansion history of the universe.
How do quasars help us view the distant universe? They let us "see" all that intergalactic hydrogen gas clustering between galaxies, because that gas absorbs some of the light from the quasars just behind it.
Physicists can look at the spectra of quasar light to figure out how it changes as that light moves through space and time. In particular, the spectrum changes as intervening gas absorbs some of the quasar's light -- a phenomenon known as the "Lyman-alpha forest."
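The geometry behind the Lyman-alpha forest is simple: hydrogen at redshift z absorbs the quasar's light at the Lyman-alpha rest wavelength, and we observe that absorption stretched by a factor of (1 + z). A minimal sketch:

```python
LYMAN_ALPHA_A = 1215.67  # rest wavelength of the Lyman-alpha line, in angstroms

def observed_wavelength(z_absorber: float) -> float:
    """Observed wavelength of Lyman-alpha absorption by gas at redshift z."""
    return LYMAN_ALPHA_A * (1 + z_absorber)

# Gas clouds at different redshifts along one sightline each carve an
# absorption feature at a different observed wavelength -- the "forest".
for z in (2.0, 2.5, 3.0):
    print(f"z = {z}: absorption near {observed_wavelength(z):.0f} angstroms")
```

Reading off which wavelengths are absorbed therefore maps where along the sightline the hydrogen clumps, which is the one-dimensional "shadow" Slosar describes below.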
Brookhaven's Anze Slosar, another contributor to this research, described the technique as "measuring the shadows cast by gas along a single line billions of light years long." But it's difficult to see the Lyman-alpha forest for the trees. The tricky part, Slosar added, was "combining all those one-dimensional maps into a three-dimensional map. It's like trying to see a picture that's been painted on the quills of porcupines."
Physicists weren't sure at first if this unusual approach would work, but over the last year, as the data were analyzed, it became clear that the measurements matched perfectly with theoretical predictions of where the BAO "peak" should be.
Read more at Discovery News
Nov 12, 2012
Researchers Unlock Ancient Maya Secrets With Modern Soil Science
After emerging sometime before 1000 BC, the Maya rose to become the most advanced Pre-Columbian society in the Americas, thriving in jungle cities of tens of thousands of people, such as the one in Guatemala's Tikal National Park. But after reaching its peak between 250 and 900 AD, the Maya civilization began to wane, and exactly why has been an enduring mystery to scientists.
Writing in the Nov.-Dec. issue of the Soil Science of America Journal (SSSA-J), an interdisciplinary team led by Richard Terry, a Brigham Young University soil scientist, now describes its analysis of maize agriculture in the soils of Tikal. Not surprisingly, the study uncovered evidence for major maize production in lowland areas, where erosion is less likely and agriculture was presumably more sustainable for this community of an estimated 60,000 people.
But the team also discovered evidence of erosion in upslope soils, suggesting that farming did spread to steeper, less suitable soils over time. And if Maya agriculture did cause substantial erosion, the soil loss could eventually have undercut the Maya's ability to grow food, say the researchers.
The findings are just the latest example of how invisible artifacts in soil -- something archeologists literally used to brush aside -- can inform studies of past civilizations. That's because artwork and buildings can crumble over time and jungles will eventually conceal ancient farm fields, but "the soil chemistry is still there," Terry says.
He explains, for example, that most forest vegetation native to Tikal uses a photosynthetic pathway called C3, while maize uses a pathway called C4. The soil organic matter derived from these two pathways also differs, allowing researchers to draw conclusions about the types of plants that were growing in the soils they test.
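One standard way soil scientists quantify that C3/C4 signal is a two-endmember carbon-isotope mixing model. The delta-13C endmember values below are typical literature figures for C3 and C4 plants, not Terry's measurements:

```python
DELTA_C3 = -27.0  # typical delta-13C of C3 forest vegetation, per mil
DELTA_C4 = -12.0  # typical delta-13C of C4 plants such as maize, per mil

def maize_carbon_fraction(delta_soil: float) -> float:
    """Fraction of soil organic carbon derived from C4 (maize) inputs,
    from linear mixing between the two isotopic endmembers."""
    return (delta_soil - DELTA_C3) / (DELTA_C4 - DELTA_C3)

# A soil sample measuring -19.5 per mil sits halfway between the
# endmembers, implying about half its carbon came from maize:
print(f"{maize_carbon_fraction(-19.5):.0%} maize-derived carbon")
```

Mapping this fraction across sampling locations and soil layers is, in essence, how the team reconstructed where and when maize was grown.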
Thus, by analyzing soils in different areas of Tikal as well as looking at the layers that had formed in the soils, Terry and his collaborators were able to map the areas where ancient maize production occurred, including lowland "bajo" areas and possibly steeper slopes, as more food was needed.
Questions like this about past farming practices have always interested archeologists, Terry notes. But the tools of modern soil science are now enabling these scientists to ask increasingly sophisticated questions about how ancient peoples tried to sustain themselves -- and whether their treatment of the land was a factor in cases where they failed.
Read more at Science Daily
Species Persistence or Extinction: Through a Mathematical Lens
Scientists have estimated that there are 1.7 million species of animals, plants and algae on Earth, and new species continue to be discovered. Unfortunately, as new species are found, many are also disappearing, contributing to a net decrease in biodiversity. The more diversity there is in a population, the longer the ecosystem can sustain itself. Hence, biodiversity is key to ecosystem resilience.
Disease, destruction of habitats, pollution, chemical and pesticide use, increased UV-B radiation, and even the presence of new species are some of the causes of disappearing species. The "Allee effect," the phenomenon by which a population's per-capita growth rate declines at low densities, is another key reason for perishing populations, and is an overriding feature of a paper published last month in the SIAM Journal on Applied Mathematics.
Authors Avner Friedman and Abdul-Aziz Yakubu use mathematical modeling to analyze the impact of disease, animal migrations and Allee effects in maintaining biodiversity. In smaller, less dense populations, causes of the Allee effect include difficulty in finding mating partners, genetic inbreeding, and the breakdown of cooperative behaviors such as group feeding and defense. The Allee threshold of such a population is the population size below which it is likely to go extinct, and above which persistence is possible. Declining populations known to exhibit Allee effects currently include the African wild dog and the Florida panther.
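The threshold behavior described here can be sketched with a classic single-population model in which per-capita growth turns negative below the Allee threshold A. The model form and parameter values are illustrative, not taken from the Friedman-Yakubu paper:

```python
def simulate(n0: float, r: float = 0.1, K: float = 100.0, A: float = 20.0,
             dt: float = 0.1, steps: int = 5000) -> float:
    """Euler integration of dN/dt = r*N*(1 - N/K)*(N/A - 1),
    a logistic model with a strong Allee effect at threshold A.
    Returns the final population size."""
    n = n0
    for _ in range(steps):
        n += dt * r * n * (1 - n / K) * (n / A - 1)
        n = max(n, 0.0)  # population cannot go negative
    return n

# Below the Allee threshold (A = 20) the population collapses;
# above it, the population recovers toward the carrying capacity K.
print(f"start at 10 -> {simulate(10.0):.2f}")
print(f"start at 30 -> {simulate(30.0):.2f}")
```

This is exactly the knife-edge the paper exploits: any perturbation (such as disease mortality) that pushes the population below A sends it to extinction even though, above A, the same population would thrive.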
Author Abdul-Aziz Yakubu explains how disease can alter the behavior of populations that exhibit Allee effects. In infectious disease studies, the basic reproduction number, R0, is defined as the expected number of secondary infections arising from an initial infected individual during that individual's infectious period. For regular populations, the disease dies out if (and only if) R0 is less than 1. "In the present paper, we deal with a population whose survival is precarious even when R0 is less than 1," says Yakubu. "That is, independent of R0, if the population size decreases below a certain level (the Allee index), then the individuals die faster than they reproduce."
A previous study by the authors showed that even a healthy stable population that is subject to Allee effects would succumb to a small number of infected individuals within a single location or "patch," causing the entire population to become extinct, since small perturbations can reduce population size or density to a level below or close to the Allee threshold.
Transmission of infectious diseases through a population is affected by local population dynamics as well as by migration. Thus, understanding the resilience of an ecosystem requires taking the global survival of the species into account: how does the movement of animals between locations affect survival when a disease strikes one or more of them? Outbreaks of various infectious diseases, such as West Nile virus and phocine distemper virus, have been seen to spread rapidly via migrations.
In this study, the authors extend their previous research by using a multi-patch model to analyze Allee effects in the context of migration between patches. "We investigate the combined effect of a fatal disease, Allee effect and migration on different groups of the same species," Yakubu says. They conclude that the host population becomes extinct whenever the initial host population density on each patch is lower than the smallest Allee threshold. When the initial host population has a high Allee threshold, the population persists on each patch if the disease transmission rates are small and the growth rate is large. Even with high Allee thresholds, the host population goes extinct if the disease transmission rate is high and the growth rate and disease threshold are small. A strong Allee effect thus adds the possibility of population extinction even as the disease itself disappears.
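The patch-coupling idea can be sketched by joining two copies of a generic Allee-effect model with symmetric migration; disease is omitted and both the model and its parameters are illustrative assumptions, not the authors' equations:

```python
# Two patches of the same species, each with strong-Allee growth,
# coupled by symmetric migration at rate m.  Disease is omitted; the
# model and all numbers are invented for illustration.

def growth(n, r=0.5, A=20.0, K=100.0):
    """Per-patch Allee-effect growth: negative below threshold A."""
    return r * n * (n / A - 1.0) * (1.0 - n / K)

def run(n1, n2, m, dt=0.01, steps=50000):
    """Euler-integrate both patches, with migration m * (other - self)."""
    for _ in range(steps):
        d1 = growth(n1) + m * (n2 - n1)
        d2 = growth(n2) + m * (n1 - n2)
        n1 = max(n1 + dt * d1, 0.0)
        n2 = max(n2 + dt * d2, 0.0)
    return n1, n2

isolated = run(10.0, 90.0, m=0.0)   # patch 1 starts below its threshold
coupled = run(10.0, 90.0, m=0.05)   # immigration from patch 2 rescues it

print("no migration:", isolated)   # patch 1 goes extinct
print("with migration:", coupled)  # both patches persist
```

In this toy regime, migration from the dense patch rescues the sparse one; in other regimes migration can instead drain a patch toward its threshold, which is why the interaction between movement and Allee effects matters.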
The research can be applied to various kinds of populations for conservation studies. "Our models and results are very general and may be applied to several declining populations," says Yakubu. "For example, the African wild dog, an endangered species, is vulnerable to fatal diseases like rabies, distemper and anthrax. Our models can be used to investigate how the Allee threshold of one subpopulation of an African wild dog pack at a geographical location is influenced by the collective migrations of several wild dog populations from different packs with different Allee thresholds."
Read more at Science Daily
Self-Healing Synthetic ‘Skin’ Points Way to New Prosthetics
Human skin is a special material: It needs to be flexible, so that it doesn’t crack every time a user clenches his fist. It needs to be sensitive to stimuli like touch and pressure — which are measured as electrical signals, so it needs to conduct electricity. Crucially, if it’s to survive the wear and tear it’s put through every day, it needs to be able to repair itself. Now, researchers in California may have designed a synthetic version — a flexible, electrically conductive, self-healing polymer.
The result is part of a decadelong miniboom in “epidermal electronics” — the production of circuits thin and flexible enough to be attached to skin (for use as wearable heart rate monitors, for example) or to provide skinlike touch sensitivity to prosthetic limbs. The problem is that silicon, the base material of the electronics industry, is brittle. So various research groups have investigated different ways to produce flexible electronic sensors.
Chemists, meanwhile, have become increasingly interested in “self-healing” polymers. This sounds like science fiction, but several research groups have produced plastics that can join their cut edges together when scientists heat them, shine a light on them, or even just hold the cut edges together. In 2008, researchers at ESPCI ParisTech showed that a specially designed rubber compound could recover its mechanical properties after being broken and healed repeatedly.
Chemical engineer Zhenan Bao of Stanford University in Palo Alto, California, and her team combined these two concepts, exploring the potential of self-healing polymers in epidermal electronics. However, all the self-healing polymers demonstrated to date had had very low bulk electrical conductivities and would have been of little use in electrical sensors. Writing in Nature Nanotechnology, the researchers detail how they increased the conductivity of a self-healing polymer by incorporating nickel atoms, allowing electrons to "jump" between the metal atoms. The polymer is sensitive to applied forces such as pressure and torsion (twisting) because those forces alter the distance between the nickel atoms, changing how easily electrons jump between them and hence the electrical resistance of the polymer.
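To see why such a composite senses force, consider a toy junction model (an assumption for illustration, with invented numbers, not the paper's data): if conduction between neighboring nickel atoms falls off exponentially with the gap between them, then a small change in spacing produces a large change in resistance:

```python
import math

# Toy junction model: assume conduction between neighboring nickel
# atoms falls off exponentially with the gap d between them, so one
# junction behaves like R = R0 * exp(d / d0).  All numbers invented.

def junction_resistance(gap_nm, r0_ohm=100.0, decay_nm=0.25):
    """Resistance of one inter-particle junction under the assumed scaling."""
    return r0_ohm * math.exp(gap_nm / decay_nm)

rest = junction_resistance(1.00)        # unstrained gap
compressed = junction_resistance(0.90)  # pressure closes the gap by 10%
stretched = junction_resistance(1.10)   # torsion/tension opens it by 10%

print(f"pressure: resistance drops {100 * (1 - compressed / rest):.0f}%")
print(f"tension:  resistance rises {100 * (stretched / rest - 1):.0f}%")
```

Under this assumed exponential scaling, a 10 percent change in gap shifts the junction resistance by tens of percent, which is the kind of lever that turns mechanical deformation into an easily measured electrical signal.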
To demonstrate that both the mechanical and the electrical properties of the material could be repeatedly restored to their original values after the material had been damaged and healed, the researchers cut the polymer completely through with a scalpel. After pressing the cut edges together gently for 15 seconds, the researchers found the sample went on to regain 98 percent of its original conductivity. And crucially, just like the ESPCI group’s rubber compound, the Stanford team’s polymer could be cut and healed over and over again.
“I think it’s kind of a breakthrough,” says John J. Boland, a chemist at the CRANN nanoscience institute at Trinity College Dublin. “It’s the first time that we’ve seen this combination of both mechanical and electrical self-healing.” He is, however, skeptical about one point: “With a scalpel you can very precisely cut the material without inducing significant local mechanical deformation around the wound.” Failure due to mechanical tension, however, could stretch the material, producing significant scarring and preventing complete self-healing, he suspects.
Read more at Wired Science
Peanut Allergies Higher Among Wealthier Kids
Children from wealthy families may be more likely to have peanut allergies than those from less well-off families, a new study finds.
In the study, children ages 1 to 9 from high-income families had higher rates of peanut allergies than children of the same ages from lower-income families.
The researchers analyzed information from 8,306 children and adults whose blood samples were taken as part of a national health survey in 2005-2006. About 9 percent of participants had elevated levels of antibodies to peanuts, indicating the potential for a peanut allergy.
The results add support to the hygiene hypothesis, said study researcher Dr. Sandy Yip, of the U.S. Air Force. The hygiene hypothesis is the idea that living in a cleaner environment may make people's immune systems more sensitive, and increase the prevalence of allergies.
The findings are also in line with those of a study published earlier this year, which found that children living in cities were more likely to have food allergies than those living in rural areas, where living costs tend to be lower.
Read more at Discovery News
Early Human Ancestors Ate Grass
Early human ancestors in central Africa 3.5 million years ago ate a diet of mostly tropical grasses and sedges, finds new research.
The study suggests our relatives were mostly plant-eaters before they evolved a taste for meat. Consider that tidbit while passing around the creamed spinach at Thanksgiving dinner.
The study focused on Australopithecus bahrelghazali, a human relative with quite a set of teeth.
"We found evidence suggesting that early hominins, in central Africa at least, ate a diet mainly comprised of tropical grasses and sedges," co-author Julia Lee-Thorp, a University of Oxford archaeologist, said in a press release.
She continued, "No African great apes, including chimpanzees, eat this type of food despite the fact it grows in abundance in tropical and subtropical regions. The only notable exception is the savannah baboon which still forages for these types of plants today. We were surprised to discover that early hominins appear to have consumed more than even the baboons."
She and her colleagues made the determination after studying the fossilised teeth of three A. bahrelghazali individuals -- the first early human relatives excavated at two sites in Chad. The researchers analyzed the carbon isotope ratios in the teeth and found the signature of a diet rich in foods derived from C4 plants.
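The isotope logic can be sketched in a few lines. Results are reported in delta-13C notation relative to the VPDB standard, and C4 grasses and sedges leave a distinctly less negative signature than C3 trees and shrubs; the cutoffs below are rough textbook ranges, not the study's measurements:

```python
# delta-13C: the per-mil deviation of a sample's 13C/12C ratio from
# the VPDB standard.  C4 grasses/sedges run roughly -14 to -10 per mil,
# C3 trees/shrubs roughly -34 to -22, so the signature in tooth mineral
# separates grass-based from browse-based diets.  Cutoffs are textbook
# ranges, not the study's values.

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample):
    """Per-mil deviation of a 13C/12C ratio from the VPDB standard."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def diet_signal(d13c):
    """Crude diet classification from a carbon signature (per mil)."""
    if d13c > -16.0:
        return "mostly C4 (tropical grasses and sedges)"
    if d13c < -20.0:
        return "mostly C3 (trees, shrubs, herbs)"
    return "mixed C3/C4"

print(round(delta13C(0.011100), 1))  # a ratio slightly below the standard
print(diet_signal(-12.0))  # a grass-dominated, C4-type signature
print(diet_signal(-26.0))  # a browse-dominated, C3-type signature
```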
This indicates our long-gone relatives experienced a shift in their diet relatively early, at least in central Africa. These individuals survived in open landscapes with few trees, so apparently they could exploit not only dense woodland areas but also other environments.
Although the area where A. bahrelghazali roamed is now dry and hyper-arid, back in the day it featured a network of shallow lakes with nearby floodplains and wooded grasslands.
While this ancestor of ours clearly had big, impressive teeth, it would not have been able to tackle leaves day after day. It also lacked a cow-like gut to break down and digest such food, so the researchers suspect these early hominins relied more on the roots, corms and bulbs at the base of the plants.
Read more at Discovery News
Nov 11, 2012
Painful Truths About Genital Injuries
A comprehensive survey of the genital injuries that brought U.S. adults to emergency rooms over the last decade, typically mishaps with consumer products such as clothing, furniture, tools and toys, reveals that such injuries are common and may be preventable, according to doctors at the University of California, San Francisco (UCSF).
The study, described this week in The Journal of Urology, was the largest ever to look at major and minor "genitourinary" injuries, which involve the genitals, urinary tract and kidneys. It showed that 142,144 U.S. adults went to emergency rooms between 2002 and 2010 for such injuries -- about 16,000 a year.
The work suggests educational and product safety approaches for preventing these injuries may be possible because the injuries themselves tended to cluster into particular age groups and involve specific consumer products.
"It shows which groups are at risk and with which products," said UCSF urologist Benjamin Breyer MD, MAS, who led the research.
Most of the patients in the study -- about 70 percent -- were men, and more than a third were young men (18-28), who tended to hurt themselves most often in sporting accidents -- crashing onto the crossbar of a mountain bike, for instance.
Older men were more likely to sustain genital injuries during routine activities, such as slipping into a split and hitting their groin on the edge of the bathtub. They were also more likely to be hospitalized for their injuries.
While women were overall less likely to endure genital injuries than their male counterparts, there was at least one exception: cuts and infections related to shaving or grooming pubic hair.
The last few years have seen a dramatic increase in these injuries among women; a second study recently published by the same UCSF group found that they increased five-fold between 2002 and 2010.
Breyer said insight into the common ways injuries occur may also suggest the most fruitful ways to prevent them through consumer education and product safety measures, such as padding on bike rails, slip-free bath mats and safer techniques for grooming pubic hair.
In their paper, the UCSF team noted that there are also standard procedures that emergency department doctors would do well to learn, such as "zipper detachment strategies for penile skin entrapment."
How the Injuries were Counted
The data was collected through the National Electronic Injury Surveillance System (NEISS), a service of the U.S. Consumer Product Safety Commission. The system collects extensive patient information from 100 hospitals nationwide and uses this data to extrapolate nationwide estimates of injuries that occur each year. The system also collects short narrative descriptions of the injuries, and the UCSF team reviewed more than 10,000 of these narratives to produce studies looking at pediatric and adult injuries.
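The extrapolation step can be sketched as a weighted sum: each sampled case carries a statistical weight (roughly, how many U.S. cases that one record stands for), and summing the weights over any subgroup yields its national estimate. The records and weights below are invented for illustration:

```python
# NEISS-style extrapolation with invented records: each sampled ER
# case carries a statistical weight, and the national estimate for any
# subgroup is the sum of the weights of its records.

cases = [
    {"age": 24, "diagnosis": "laceration", "weight": 84.2},
    {"age": 67, "diagnosis": "contusion", "weight": 120.7},
    {"age": 31, "diagnosis": "laceration", "weight": 95.0},
]

national_estimate = sum(c["weight"] for c in cases)
young_adult = sum(c["weight"] for c in cases if 18 <= c["age"] <= 28)
share = young_adult / national_estimate

print(f"estimated national cases: {national_estimate:.0f}")
print(f"share from 18-28 year olds: {share:.0%}")
```

This is also why the counts come with the caveat below: only cases that reached a sentinel emergency department ever receive a weight, so injuries treated elsewhere are invisible to the estimate.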
While the data was extensive, it may underestimate the number of injuries, Breyer said, because only those injuries that brought patients to the emergency room were in the database. Many more may have occurred that were not serious enough to warrant a hospital visit.
Even those injuries collected by the NEISS tended to be minor, the study found. About 90 percent of the patients tracked were seen by hospital staff and released, rather than being admitted into the hospital -- though some, especially those involving internal injuries to the kidney, were much more serious.
The article, "Product Related Adult Genitourinary Injuries Treated in United States Emergency Departments from 2002 -- 2010" by Herman S. Bagga, Gregory E. Tasian, Patrick B. Fisher, Charles E. McCulloch, Jack W. McAninch and Benjamin N. Breyer was published on November 2, 2012 in The Journal of Urology.
Read more at Science Daily
Scientists Discover Possible Building Blocks of Ancient Genetic Systems in Earth's Most Primitive Organisms
Scientists believe that prior to the advent of DNA as Earth's primary genetic material, early forms of life used RNA to encode genetic instructions. What sort of genetic molecules did life rely on before RNA?
The answer may be AEG, a small molecule that when linked into chains forms a hypothetical backbone for peptide nucleic acids, which have been hypothesized as the first genetic molecules. Synthetic AEG has been studied by the pharmaceutical industry as a possible gene silencer to stop or slow certain genetic diseases. The only problem with the theory is that up to now, AEG has been unknown in nature.
A team of scientists from the United States and Sweden announced that they have discovered AEG within cyanobacteria, which are believed to be among the most primitive organisms on Earth. Cyanobacteria sometimes appear as mats or scums on the surface of reservoirs and lakes during hot summer months. Their tolerance for extreme habitats is remarkable, ranging from the hot springs of Yellowstone to the tundra of the Arctic.
"Our discovery of AEG in cyanobacteria was unexpected," explains Dr. Paul Alan Cox, co-author of the paper that appeared in the journal PLOS ONE. The American team members are based at the Institute for Ethnomedicine in Jackson Hole, and serve as adjunct faculty at Weber State University in Ogden, Utah.
"While we were writing our manuscript," Cox says, "we learned that our colleagues at the Stockholm University Department of Analytical Chemistry had made a similar discovery, so we asked them to join us on the paper."
To determine how widespread AEG production is among cyanobacteria, the scientists analyzed pristine cyanobacterial cultures from the Pasteur Culture Collection in Paris, France. They also collected cyanobacteria samples from Guam, Japan and Qatar, as well as from the Gobi Desert of Mongolia, the latter sample collected by famed Wyoming naturalist Derek Craighead. All were found to produce AEG.
Professor Leopold Ilag and his student Liying Jiang at Stockholm University's Department of Analytical Chemistry analyzed the same samples and came up with identical results: cyanobacteria produce AEG. While the analysis is certain, its significance for studies of the earliest forms of life on Earth remains unclear. Does the production of AEG by cyanobacteria represent an echo of the earliest life on Earth?
Read more at Science Daily