Experts agree that rising Chinese labor costs and improving U.S. technology will gradually cause significant manufacturing activity to return to the United States.
When it does, a new interdisciplinary manufacturing venture called the Advanced Manufacturing Technology (AMTecH) group at the University of Iowa College of Engineering's Center for Computer Aided Design (CCAD) may well help lead the charge.
AMTecH was formed to design, create, and test -- both virtually and physically -- a wide variety of electromechanical and biomedical components, systems and processes. Currently, the group is working on projects ranging from printed circuit boards for automobiles and aircraft to replacement parts for damaged and failing human organs and tissue, says Tim Marler, AMTecH co-director.
"Electromechanical systems are one of two current branches of the AMTecHgroup," he says. "We want to simulate, analyze and test printed circuit boards and assemblies, because they are used in a wide range of products from missiles to power plants to cell phones. "The second branch of the group involves biomanufacturing and is led by my colleague and AMTecH co-director Ibrahim Ozbolat, assistant professor of mechanical and industrial engineering," says Marler. "The long-term goal of this branch is to create functioning human organs some five or 10 years from now. This is not far-fetched."
Using its facilities for engineering living tissue systems, the Biomanufacturing Laboratory at CCAD is working to develop and refine various 3D printing processes required for organ and tissue fabrication, Ozbolat says.
"One of the most promising research activities is bioprinting a glucose-sensitive pancreatic organ that can be grown in a lab and transplanted anywhere inside the body to regulate the glucose level of blood," says Ozbolat. He adds that the 3D printing, as well as virtual electronic manufacturing, being conducted at AMTecH are done nowhere else in Iowa.
In fact, the multi-arm bioprinter being used in the lab is unique. Ozbolat and Howard Chen, a UI doctoral student in industrial engineering, designed it, and Chen built it. Coordinating multiple arms so they do not collide with one another is difficult enough that printers used elsewhere in the world avoid the problem with simpler, single-arm designs. As Chen continues to refine the design, the printer gives the UI researchers a distinct advantage.
While bioprinters at other institutions use one arm with multiple heads to print multiple materials one after the other, the UI device with multiple arms can print several materials concurrently. This capability offers a time-saving advantage when attempting to print a human organ because one arm can be used to create blood vessels while the other arm is creating tissue-specific cells in between the blood vessels.
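The time saving from concurrent printing can be illustrated with a toy scheduling calculation; the layer counts and per-layer times below are hypothetical assumptions, not figures from the UI device:

```python
# Toy comparison of single-arm sequential printing vs. two-arm concurrent printing.
# All numbers are illustrative assumptions, not measurements from the UI printer.
vessel_layers = 10      # layers of blood-vessel scaffold (hypothetical)
tissue_layers = 10      # layers of tissue-specific cells (hypothetical)
minutes_per_layer = 2.0

# A single arm must print the two materials one after the other.
sequential_time = (vessel_layers + tissue_layers) * minutes_per_layer

# Two arms print vessels and cells at the same time; total time is set
# by whichever material takes longer.
concurrent_time = max(vessel_layers, tissue_layers) * minutes_per_layer

print(sequential_time)  # 40.0 minutes
print(concurrent_time)  # 20.0 minutes
```

With equal workloads per arm, the concurrent schedule halves the print time, which matters when printing structures that must be kept viable throughout fabrication.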
The biomanufacturing group, which consists of researchers from various disciplines including industrial, mechanical, electrical, polymer and biomedical engineers as well as medical researchers, is working on this and other projects, and collaborates with Dr. Nicholas Zavazava, professor of internal medicine, in the UI Roy J. and Lucille A. Carver College of Medicine. The group also works with researchers from the college's Ignacio V. Ponsetti Biochemistry and Cell Biology Laboratory.
In addition to receiving support from the National Institutes of Health for the artificial pancreas research, AMTecH is looking forward to continued support from the Electric Power Research Institute (EPRI) as well as seed funding from the UI for fostering commercialization of a new software product.
"When you look at the U.S. manufacturing environment and relevant technology, this is a perfect time to launch AMTecH," says Marler, who also serves as associate research scientist at CCAD and senior research scientist at CCAD's Virtual Soldier Research program.
Read more at Science Daily
Mar 9, 2013
Genetic Study of House Dust Mites Demonstrates Reversible Evolution
In evolutionary biology, there is a deeply rooted supposition that you can't go home again: Once an organism has evolved specialized traits, it can't return to the lifestyle of its ancestors.
There's even a name for this pervasive idea. Dollo's law states that evolution is unidirectional and irreversible. But this "law" is not universally accepted and is the topic of heated debate among biologists.
Now a research team led by two University of Michigan biologists has used a large-scale genetic study of the lowly house dust mite to uncover an example of reversible evolution that appears to violate Dollo's law.
The study shows that tiny free-living house dust mites, which thrive in the mattresses, sofas and carpets of even the cleanest homes, evolved from parasites, which in turn evolved from free-living organisms millions of years ago.
"All our analyses conclusively demonstrated that house dust mites have abandoned a parasitic lifestyle, secondarily becoming free-living, and then speciated in several habitats, including human habitations," according to Pavel Klimov and Barry OConnor of the U-M Department of Ecology and Evolutionary Biology.
Their paper, "Is permanent parasitism reversible? -- Critical evidence from early evolution of house dust mites," is scheduled to be published online March 8 in the journal Systematic Biology.
Mites are arachnids related to spiders (both have eight legs) and are among the most diverse animals on Earth. House dust mites, members of the family Pyroglyphidae, are the most common cause of allergic symptoms in humans, affecting up to 1.2 billion people worldwide.
Despite their huge impact on human health, the evolutionary relationships between these speck-sized creatures are poorly understood. According to Klimov and OConnor, there are 62 different published hypotheses arguing about whether today's free-living dust mites originated from a free-living ancestor or from a parasite -- an organism that lives on or in a host species and damages its host.
In their study, Klimov and OConnor evaluated all 62 hypotheses. Their project used large-scale DNA sequencing, the construction of detailed evolutionary trees called phylogenies, and sophisticated statistical analyses to test the hypotheses about the ancestral ecology of house dust mites.
On the phylogenetic tree they produced, house dust mites appear within a large lineage of parasitic mites, the Psoroptidia. These mites are full-time parasites of birds and mammals that never leave the bodies of their hosts. The U-M analysis shows that the immediate parasitic ancestors of house dust mites include skin mites, such as the psoroptic mange mites of livestock and the dog and cat ear mite.
"This result was so surprising that we decided to contact our colleagues to obtain their feedback prior to sending these data for publication," said Klimov, the first author of the paper and an assistant research scientist in the Department of Ecology and Evolutionary Biology.
The result was so surprising largely because it runs counter to the entrenched idea that highly specialized parasites cannot return to the free-living lifestyle of their ancestors.
"Parasites can quickly evolve highly sophisticated mechanisms for host exploitation and can lose their ability to function away from the host body," Klimov said. "They often experience degradation or loss of many genes because their functions are no longer required in a rich environment where hosts provide both living space and nutrients. Many researchers in the field perceive such specialization as evolutionarily irreversible."
The U-M findings also have human-health implications, said OConnor, a professor in the Department of Ecology and Evolutionary Biology and a curator of insects and arachnids at the U-M Museum of Zoology.
"Our study is an example of how asking a purely academic question may result in broad practical applications," he said. "Knowing phylogenetic relationships of house dust mites may provide insights into allergenic properties of their immune-response-triggering proteins and the evolution of genes encoding allergens."
The project started in 2006 with a grant from the National Science Foundation. The first step was to obtain specimens of many free-living and parasitic mites -- no simple task given that some mite species are associated with rare mammal or bird species around the world.
The research team relied on a network of 64 biologists in 19 countries to obtain specimens. In addition, Klimov and OConnor conducted field trips to North and South America, Europe, Asia and Africa. On one occasion, it took two years to obtain samples of an important species parasitizing African birds.
A total of around 700 mite species were collected for the study. For the genetic analysis, the same five nuclear genes were sequenced in each species.
How might the ecological shift from parasite to free-living state have occurred?
There is little doubt that early free-living dust mites were nest inhabitants -- the nests of birds and mammals are the principal habitat of all modern free-living species in the family Pyroglyphidae. Klimov and OConnor propose that a combination of several characteristics of their parasitic ancestors played an important role in allowing them to abandon permanent parasitism: tolerance of low humidity, development of powerful digestive enzymes that allowed them to feed on skin and keratinous materials (those containing the protein keratin, found in human hair and fingernails), and low host specificity with frequent shifts to unrelated hosts.
These features, which occur in almost all parasitic mites, were likely important precursors that enabled mite populations to thrive in host nests despite low humidity and scarce, low-quality food resources, according to Klimov and OConnor. For example, powerful enzymes allowed these mites to consume hard-to-digest feather and skin flakes composed of keratin.
Read more at Science Daily
Mar 8, 2013
Animals Back From the Brink
Bald eagles
America's national symbol flew off the Endangered Species List in 2007 and won't be landing on it again anytime soon. Hunting and pesticide contamination once decimated bald eagle populations, and by 1950 only 416 mating pairs soared over the lower 48 United States. A major threat to the eagles was the pesticide DDT, which Rachel Carson made infamous in the book Silent Spring. The pesticide built up in the food chain and caused the eagles' eggs to have fatally thin shells.
Aggressive protection programs and the banning of the pesticide DDT saved America's avian symbol. Now, the eagles are so numerous that the U.S. Fish and Wildlife Service doesn't bother keeping yearly tallies. The last count of mating pairs in the lower 48 was 9,789 in 2006. Bald eagle watching is now a tourist attraction in many states. The eagles are out of immediate danger, but still fiercely protected by national laws which make it illegal even to collect feathers from naturally deceased birds without permission.
However, the protected status of bald eagles and their kin, the golden eagle, has caused problems for some Native American groups that need feathers from the birds for their religious ceremonies. U.S. government officials allot the feathers to religious leaders, but only after years of waiting on a long list.
Saltwater crocodile
The world's largest crocodile went the way of the dinosaurs in much of its range. The saltwater crocodile (Crocodylus porosus) once ambushed prey along coastal waterways from Vietnam to southern India and northern Australia. A global fashion for crocodile leather killed off the animals in most of continental Asia. Many were also killed out of fear or as food.
However, the crocodiles down under are staging a comeback. “Salties,” as Aussies call the crocs, rebounded in much of northern Australia, after their populations in the 1970s dropped to approximately five percent of their former numbers. Legal protections and bans on crocodile products have allowed the giant reptiles to bounce back. They are now considered a species of Least Concern by the International Union for Conservation of Nature.
One of the largest crocodiles ever captured, a reptile more than 20 feet long known as Lolong, died in captivity in the Philippines on Feb. 10, 2013.
Wild Turkeys
The wild turkey, Ben Franklin's choice for America's national bird, suffered the same fate as the bald eagle. Wild turkeys (Meleagris gallopavo) were abundant from the Rocky Mountains to the Atlantic when European colonists first arrived. Then, hungry colonists and habitat loss nearly gobbled up the turkey. By the mid-1800s wild turkeys had been wiped out in much of the Midwest and Eastern U.S. The birds survived in remote parts of Missouri and Arkansas.
Attempts to restore populations using domesticated turkeys failed, possibly because the original domesticated turkeys hailed from Mexico and couldn't effectively go feral in the frigid north. Only birds transplanted from their holdouts in Missouri and Arkansas managed to survive reintroduction programs. The reintroduction of these birds was a huge success, and turkeys have returned to much of their former range, and even to areas where they never lived, like Hawaii and southern California.
Read more at Discovery News
Seismologists Appeal Manslaughter Verdict
The six scientists and one government official convicted of manslaughter over statements they made before a 2009 earthquake that killed 309 people in the town of L'Aquila, Italy, have filed appeals against the verdict.
All seven met the March 6 deadline for filing, according to a Nature News blog.
Judge Marco Billi sentenced the seismologists and official to six years in prison on Oct. 22, 2012, after a yearlong trial. Three judges are expected to oversee the appeals trials, and in the meantime the prison sentences will remain on hold, Nature News reports.
The prosecutors contended that at a March 31 meeting in L'Aquila the defendants had downplayed the risks of a large earthquake after a series of tremors shook the Italian city in early 2009. On April 6, 2009, a magnitude-6.3 quake hit, and 29 people who would have fled their homes stayed put, only to be killed when the buildings collapsed.
At the controversial meeting, one of the defendants, earth scientist Enzo Boschi, noted the uncertainty, saying a large earthquake was "unlikely" but that the possibility could not be excluded. At a press conference that followed, however, another official told citizens there was "no danger."
The verdict drew ire and condemnation from seismologists and other earth scientists around the globe.
"The idea is ridiculous, to hold scientists responsible for public policy," said Chris Goldfinger, a professor of geology and geophysics at Oregon State University, on the day of the verdict. "First, scientists have almost zero ability to predict earthquakes, and second, have no direct responsibility for public policy. Something has gone seriously wrong in the Italian legal system."
The defendants' attorneys, in their appeals, are asking for the verdict to be overturned and all charges dropped, Nature News reports. They are arguing that all of the statements made during the March 31 meeting were scientifically accurate, and that political authorities, not this panel, should have the responsibility of informing the public of the risk.
Read more at Discovery News
Britain's 'Atheist Church' Set to Go Global
Echoing with joyful song and with a congregation bent on leading better lives, this London church is like any other -- except there's no mention of God.
Britain's atheist church is barely three months old but it already has more "worshippers" than can fit into its services, while more than 200 non-believers worldwide have contacted organisers to ask how they can set up their own branch.
Officially named The Sunday Assembly, the church was the brainchild of Pippa Evans and Sanderson Jones, two comedians who suspected there might be an appetite for atheist gatherings that borrowed a few aspects of religious worship.
Held in an airy, ramshackle former church in north London, their quirky monthly meetings combine music, speeches and moral pondering with large doses of humour.
"There's so much about Church that has nothing to do with God -- it's about meeting people, it's about thinking about improving your life," said Jones, a gregarious 32-year-old with a bushy beard and a laugh like a thunderclap.
The Sunday Assembly's central tenets are to "help often, live better and wonder more" -- themes that would not be out of keeping with the teachings of any major world religion.
At last Sunday's service, which had a volunteering theme, songs included "Help" by the Beatles and "Holding Out For A Hero" by Bonnie Tyler.
The "sermon" was given by the founder of an education charity, while in a section called Pippa Is Trying Her Best, Evans had the congregation in stitches as she reported on her attempts at voluntary work.
The service ended with big cheers and -- this is Britain, after all -- shouts of "Who would like a cup of tea?"
Like many Western countries, Britain is becoming an increasingly faithless nation.
While a majority still consider themselves Christians, census data revealed in December that their numbers plummeted from 72 percent in 2001 to 59 percent in 2011.
The proportion of Britons with no religion, meanwhile, shot up from 15 percent to 25 percent over the same period.
But the Sunday Assembly's success -- 400 Londoners packed into last week's two services, while 60 had to be turned away at the door -- suggests many urban atheists crave the sense of community that comes with joining a church.
"You can spend all day in London not talking to anyone," said Evans. "I think people really want somewhere they can go and meet other people, which doesn't involve drinking and which you don't have to pay to get into."
It's an idea that is catching the attention of atheists further afield.
Jones reels off the locations of would-be atheist "vicars" who have asked to set up new branches.
"Colombia, Bali, Mexico, Houston, Silicon Valley, Philadelphia, Ohio, Calgary, all across Britain, The Hague, Vienna... It's so ludicrously exciting that my head occasionally -- literally -- spins round."
The pair cheerfully admit that they have "ripped off" many elements of their services from the Christian Church. "You're asking people to do new things, so it makes sense for it to be familiar," said Jones.
Religious people have been broadly supportive of the aims of the atheist church. "The only thing is, they've said they'll have to think about what to do if it gets bigger," Evans laughed.
"Actually, the biggest aggression towards us has probably been from atheists saying that we're ruining atheism and not not believing in God properly. So that's quite funny."
The assembly met the approval of local vicar Dave Tomlinson, who came from his church two miles away to see what his new rivals were up to.
Read more at Discovery News
Britain's atheist church is barely three months old but it already has more "worshippers" than can fit into its services, while more than 200 non-believers worldwide have contacted organisers to ask how they can set up their own branch.
Officially named The Sunday Assembly, the church was the brainchild of Pippa Evans and Sanderson Jones, two comedians who suspected there might be an appetite for atheist gatherings that borrowed a few aspects of religious worship.
Held in an airy, ramshackle former church in north London, their quirky monthly meetings combine music, speeches and moral pondering with large doses of humour.
"There's so much about Church that has nothing to do with God -- it's about meeting people, it's about thinking about improving your life," said Jones, a gregarious 32-year-old with a bushy beard and a laugh like a thunderclap.
The Sunday Assembly's central tenets are to "help often, live better and wonder more" -- themes that would not be out of keeping with the teachings of any major world religion.
At last Sunday's service, which had a volunteering theme, songs included "Help" by the Beatles and "Holding Out For A Hero" by Bonnie Tyler.
The "sermon" was given by the founder of an education charity, while in a section called Pippa Is Trying Her Best, Evans had the congregation in stitches as she reported on her attempts at voluntary work.
The service ended with big cheers and -- this is Britain, after all -- shouts of "Who would like a cup of tea?"
Like many Western countries, Britain is becoming an increasingly faithless nation.
While a majority still consider themselves Christians, census data revealed in December that their numbers plummeted from 72 percent in 2001 to 59 percent in 2011.
The proportion of Britons with no religion, meanwhile, shot up from 15 percent to 25 percent over the same period.
But the Sunday Assembly's success -- 400 Londoners packed into last week's two services, while 60 had to be turned away at the door -- suggests many urban atheists crave the sense of community that comes with joining a church.
"You can spend all day in London not talking to anyone," said Evans. "I think people really want somewhere they can go and meet other people, which doesn't involve drinking and which you don't have to pay to get into."
It's an idea that is catching the attention of atheists further afield.
Jones reels off the locations of would-be atheist "vicars" who have asked to set up new branches.
"Colombia, Bali, Mexico, Houston, Silicon Valley, Philadelphia, Ohio, Calgary, all across Britain, The Hague, Vienna... It's so ludicrously exciting that my head occasionally -- literally -- spins round."
The pair cheerfully admit that they have "ripped off" many elements of their services from the Christian Church. "You're asking people to do new things, so it makes sense for it to be familiar," said Jones.
Religious people have been broadly supportive of the aims of the atheist church. "The only thing is, they've said they'll have to think about what to do if it gets bigger," Evans laughed.
"Actually, the biggest aggression towards us has probably been from atheists saying that we're ruining atheism and not believing in God properly. So that's quite funny."
The assembly met the approval of local vicar Dave Tomlinson, who came from his church two miles away to see what his new rivals were up to.
Read more at Discovery News
'Methuselah' Star Looks Older Than the Universe
The oldest known star appears to be older than the universe itself, but a new study is helping to clear up this seeming paradox.
Previous research had estimated that the Milky Way galaxy's so-called "Methuselah star" is up to 16 billion years old. That's a problem, since most researchers agree that the Big Bang that created the universe occurred about 13.8 billion years ago.
Now a team of astronomers has derived a new, less nonsensical age for the Methuselah star, incorporating information about its distance, brightness, composition and structure.
"Put all of those ingredients together, and you get an age of 14.5 billion years, with a residual uncertainty that makes the star's age compatible with the age of the universe," study lead author Howard Bond, of Pennsylvania State University and the Space Telescope Science Institute in Baltimore, said in a statement.
The uncertainty Bond refers to is plus or minus 800 million years, which means the star could actually be 13.7 billion years old — younger than the universe as it's currently understood, though just barely.
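The compatibility Bond describes comes down to simple arithmetic on the quoted figures, sketched here in Python:

```python
# Worked check of the quoted numbers: a best-fit age of 14.5 billion years
# with an uncertainty of 0.8 billion years overlaps the universe's
# 13.8-billion-year age.
age_gyr = 14.5          # best-fit stellar age, billions of years
err_gyr = 0.8           # residual uncertainty, billions of years
universe_gyr = 13.8     # accepted age of the universe

lower_bound = age_gyr - err_gyr          # 13.7 billion years
compatible = lower_bound <= universe_gyr # True: the error bar reaches below 13.8
```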
A mysterious, fast-moving star
Bond and his team used NASA's Hubble Space Telescope to study the Methuselah star, which is more formally known as HD 140283.
Scientists have known about HD 140283 for more than 100 years, since it cruises across the sky at a relatively rapid clip. The star moves at about 800,000 mph (1.3 million km/h) and covers the width of the full moon in the sky every 1,500 years or so, researchers said.
The star is just passing through the Earth's neck of the galactic woods and will eventually rocket back out to the Milky Way's halo, a population of ancient stars that surrounds the galaxy's familiar spiral disk.
The Methuselah star, which is just now bloating into a red giant, was probably born in a dwarf galaxy that the nascent Milky Way gobbled up more than 12 billion years ago, researchers said. The star's long, looping orbit is likely a residue of that dramatic act of cannibalism.
Distance makes the difference
Hubble's measurements allowed the astronomers to refine the distance to HD 140283 using the principle of parallax, in which a change in an observer's position — in this case, Hubble's varying position in Earth orbit — translates into a shift in the apparent position of an object.
They found that Methuselah lies 190.1 light-years away. With the star's distance known more precisely, the team was able to work out Methuselah's intrinsic brightness, a necessity for determining its age.
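As a rough sanity check, the figures quoted in this article hang together. The sketch below (the 3.26 light-years-per-parsec and 4.74 km/s conversion constants are standard; everything else comes from the numbers above) recovers the implied parallax angle, the distance modulus that links apparent to intrinsic brightness, and the star's transverse speed from its moon-width-per-1,500-years motion:

```python
import math

LY_PER_PC = 3.26156           # light-years per parsec (standard constant)
d_ly = 190.1                  # Hubble-refined distance, from the article
d_pc = d_ly / LY_PER_PC       # ~58.3 parsecs

# Parallax angle implied by that distance: p(arcsec) = 1 / d(pc)
parallax_mas = 1000.0 / d_pc  # ~17.2 milliarcseconds

# Distance modulus m - M = 5 * log10(d / 10 pc) converts the star's
# apparent brightness into its intrinsic brightness.
dist_modulus = 5 * math.log10(d_pc / 10)   # ~3.8 magnitudes

# The star crosses the full moon's width (~0.5 degree) every 1,500 years.
mu_arcsec_yr = 0.5 * 3600 / 1500           # ~1.2 arcsec/year proper motion
v_tan_kms = 4.74 * mu_arcsec_yr * d_pc     # ~330 km/s transverse speed
v_tan_mph = v_tan_kms * 3600 / 1.60934     # ~740,000 mph
```

The ~740,000 mph derived here is only the transverse component, so it sits plausibly below the quoted 800,000 mph total space velocity, which also includes motion toward or away from us.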
The scientists also applied current theory to learn more about the Methuselah star's burn rate, composition and internal structure, which also shed light on its likely age. For example, HD 140283 has a relatively high oxygen-to-iron ratio, which brings the star's age down from some of the earlier predictions, researchers said.
Read more at Discovery News
Mar 7, 2013
Stone-Age Skeletons Unearthed in Sahara Desert
Archaeologists have uncovered 20 Stone-Age skeletons in and around a rock shelter in Libya's Sahara desert, according to a new study.
The skeletons date between 8,000 and 4,200 years ago, meaning the burial place was used for millennia.
"It must have been a place of memory," said study co-author Mary Anne Tafuri, an archaeologist at the University of Cambridge. "People throughout time have kept it, and they have buried their people, over and over, generation after generation."
About 15 women and children were buried in the rock shelter, while five men and juveniles were buried under giant stone heaps called tumuli outside the shelter during a later period, when the region turned to desert.
The findings, which are detailed in the March issue of the Journal of Anthropological Archaeology, suggest the culture changed with the climate.
Millennia of burials
From about 8,000 to 6,000 years ago, the Sahara desert region, called Wadi Takarkori, was filled with scrubby vegetation and seasonal green patches. Stunning rock art depicts ancient herding animals, such as cows, which require much more water to graze than the current environment could support, Tafuri said.
Tafuri and her colleague Savino di Lernia began excavating the archaeological site between 2003 and 2006. At the same site, archaeologists also uncovered huts, animal bones and pots with traces of the earliest fermented dairy products in Africa.
To date the skeletons, Tafuri measured the remains for concentrations of isotopes, or atoms of the same element with different weights.
The team concluded that the skeletons were buried over four millennia, with most of the remains in the rock shelter buried between 7,300 and 5,600 years ago.
The males and juveniles under the stone heaps were buried starting 4,500 years ago, when the region became more arid. Rock art confirms the drying trend, as the cave paintings began to depict goats, which need much less water to graze than cows, Tafuri said.
The ancient people also grew up not far from the area where they were buried, based on a comparison of isotopes in tooth enamel, which forms early in childhood, with elements in the nearby environment.
Shift in culture?
The findings suggest the burial place was used for millennia by the same group of people. They also reveal a divided society.
"The exclusive use of the rock shelter for female and sub-adult burials points to a persistent division based on gender," wrote Marina Gallinaro, a researcher in African studies at Sapienza University of Rome, who was not involved in the study, in an email to LiveScience.
One possibility is that during the earlier period, women had a more critical role in the society, and families may have even traced their descent through the female line. But once the Sahara began its inexorable expansion into the region about 5,000 years ago, the culture shifted and men's prominence may have risen as a result, Gallinaro wrote.
The region as a whole is full of hundreds of sites yet to be excavated, said Luigi Boitani, a biologist at Sapienza University of Rome, who has worked on archaeological sites in the region but was not involved in the study.
"The area is an untapped treasure," Boitani said.
Read more at Discovery News
Russia Finds 'New Bacteria' in Antarctic Lake
Russian scientists believe they have found a wholly new type of bacteria in the subglacial Lake Vostok in Antarctica, the RIA Novosti news agency reported on March 7.
The samples obtained from the subglacial lake in May 2012 contained bacteria that bore no resemblance to existing types, said Sergei Bulat of the genetics laboratory at the Saint Petersburg Institute of Nuclear Physics.
"After putting aside all possible elements of contamination, DNA was found that did not coincide with any of the well-known types in the global database," he said.
"We are calling this life form unclassified and unidentified," he added.
The discovery comes from samples collected in an expedition in 2012 where a Russian team drilled down to the surface of Lake Vostok, which is believed to have been covered by ice for more than a million years but has kept its liquid state.
Lake Vostok is the largest subglacial lake in Antarctica and scientists have long wanted to study its ecosystem. The Russian team last year drilled almost four kilometres (about 2.5 miles) to reach the lake and take the samples.
Bulat said that the interest surrounded one particular form of bacteria whose DNA was less than 86 percent similar to previously existing forms.
"In terms of work with DNA this is basically zero. A level of 90 percent usually means that the organism is unknown."
He said it was not even possible to find the genetic descendants of the bacteria.
"If this had been found on Mars everyone would have undoubtedly said there is life on Mars. But this is bacteria from Earth."
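To make the "less than 86 percent similar" figure concrete, percent identity between two aligned DNA sequences is just the fraction of positions that match. The toy sequences below are invented for illustration, not actual Lake Vostok data:

```python
def percent_identity(a: str, b: str) -> float:
    """Percentage of matching positions between two aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

# Hypothetical 20-base aligned fragments, differing at 3 positions.
reference = "ATGCGTACGTTAGCATCGGA"
sample    = "ATGAGTACGCTAGCATCGGT"
identity = percent_identity(reference, sample)  # 85.0
```

By Bulat's rule of thumb, anything under roughly 90 percent identity — like this 85 percent toy case — would flag the organism as unknown.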
Bulat said that new samples of water would be taken from Lake Vostok during a new expedition in May.
"If we manage to find the same group of organisms in this water we can say for sure that we have found new life on Earth that exists in no database," Bulat said.
Read more at Discovery News
1st African American Man Dates Back 338,000 Years
A minuscule bit of DNA from an African American man now living in South Carolina has been traced back 338,000 years, according to a new study.
The man’s Y chromosome — a hereditary factor determining male sex — has a history that’s so old, it even predates the age of the oldest known Homo sapiens fossils, according to the report, published in the American Journal of Human Genetics.
The fellow’s chromosome turned out to carry a rare mutation, which researchers matched to a similar chromosome in the Mbo, a population living in a tiny area of western Cameroon in sub-Saharan Africa.
“Our analysis indicates this lineage diverged from previously known Y chromosomes about 338,000 years ago, a time when anatomically modern humans had not yet evolved,” Michael Hammer, who worked on the study, said in a press release.
“This pushes back the time the last common Y chromosome ancestor lived by almost 70 percent.”
Hammer is an associate professor in the University of Arizona’s department of ecology and evolutionary biology and a research scientist at the UA’s Arizona Research Labs.
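Back-solving Hammer's "almost 70 percent" quote against the new 338,000-year figure recovers the rough size of the previous estimate:

```python
# If 338,000 years pushes the last common Y-chromosome ancestor back by
# almost 70 percent, the earlier estimate was roughly 200,000 years.
new_age_yr = 338_000
increase = 0.70                             # "almost 70 percent", per Hammer
prior_age_yr = new_age_yr / (1 + increase)  # ~199,000 years
```

That implied prior figure of about 200,000 years is consistent with widely cited pre-2013 estimates for the Y-chromosome common ancestor.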
The DNA detective work began after the South Carolinian submitted a small tissue sample to the National Geographic Genographic Project. The researchers were shocked to find that the sample carried none of the genetic markers used to assign lineages to known Y chromosome groupings.
They sent the man’s DNA sample to Family Tree DNA for sequencing. Fernando Mendez, a postdoctoral researcher in Hammer’s lab, led the effort to analyze the DNA sequence. It included more than 240,000 base pairs of the Y chromosome.
Searches through a huge database led to the Mbo connection.
The scientists could then estimate the emergence of the chromosome mutation based on rates of change, creating a sort of “family tree” for the chromosome.
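The "rates of change" estimate works like a molecular clock: divergence time is roughly the number of observed differences divided by twice the per-year mutation rate, since both lineages accumulate mutations independently after splitting. The sketch below uses the article's 240,000-base-pair figure, but the mutation rate and difference count are illustrative assumptions, not the study's actual values:

```python
def divergence_time_yr(differences: int, bases: int, rate_per_base_yr: float) -> float:
    """Molecular-clock estimate: time since two lineages split.

    Both lineages mutate independently, hence the factor of 2.
    """
    return differences / (2 * rate_per_base_yr * bases)

# 240,000 base pairs matches the article; 500 differences and a rate of
# 3.1e-9 mutations per base per year are hypothetical, chosen to show how
# an age in the ballpark of 338,000 years could fall out of the arithmetic.
t_yr = divergence_time_yr(500, 240_000, 3.1e-9)  # ~336,000 years
```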
The discovery doesn’t necessarily mean that we all descended from an ancestor living in western Cameroon.
“It is a misconception that the genealogy of a single genetic region reflects population divergence,” Hammer explained. “Instead, our results suggest that there are pockets of genetically isolated communities that together preserve a great deal of human diversity.”
Read more at Discovery News
Clues of Life's Origins Found in Galactic Cloud
You and I are both made up of an eclectic collection of organic molecules.
A lot of interesting molecules go into making up all life on Earth, from the amino acids which make up proteins to the nucleobases that encode our very DNA, but where they exactly come from (on a cosmic scale) is still one of science’s great mysteries. And as with any good mystery, the only solution will be to solve each of the separate pieces of the puzzle — and the latest piece of this puzzle has just been spotted in a huge gas cloud in the center of our galaxy.
Finding things like amino acids in space directly is a difficult business. So, instead of finding them directly, a team using West Virginia’s Green Bank Telescope, led by Anthony Remijan, discovered two other molecules — cyanomethanimine and ethanamine — both of which are precursor molecules. In other words, these molecules are the early steps in the chain of chemical reactions that go on to make the stuff of life.
Astrochemists are steadily discovering larger and more complex molecules in interstellar space. Recent years have seen the discoveries of glycolaldehyde, which is arguably the simplest type of sugar, and ethyl formate, one of the molecules responsible for the flavor and aroma of rum and raspberries. This latest discovery might not sound quite as appetizing, but it’s no less important.
When hunting for new molecules, astrochemists frequently turn their telescopes towards the galactic center. Drifting in the Milky Way’s core, is a hulking interstellar cloud known as Sagittarius B2 (or Sgr B2 for short). Spanning 150 light-years in size, Sgr B2 is one of the galaxy’s largest clouds, up to 40 times as dense as any other the Milky Way has to offer.
Sgr B2 is also something of a benchmark for molecule hunters. Roughly 25,000 light-years from Earth, and only about 390 light-years from the supermassive black hole lurking in the galactic center, if any molecule can be found in the interstellar medium, it can be found here.
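The cloud's quoted size and distance imply a surprisingly large footprint on the sky, which a small-angle calculation makes concrete (the 0.52-degree full-moon width is a standard average value, not from the article):

```python
import math

size_ly = 150.0      # Sgr B2's extent, from the article
dist_ly = 25_000.0   # its distance from Earth, from the article

# Small-angle approximation: angular size = physical size / distance (radians)
angle_deg = math.degrees(size_ly / dist_ly)  # ~0.34 degrees

# The full moon spans about 0.52 degrees on average, so Sgr B2 appears
# roughly two-thirds as wide as the moon despite being 25,000 light-years away.
ratio_to_moon = angle_deg / 0.52             # ~0.66
```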
The two molecules that Remijan and his team found, cyanomethanimine and ethanamine, are expected to be precursors to the nucleobase adenine and the amino acid alanine, respectively. This makes them potentially very important discoveries. It’s expected that molecules like this form, and continue to react, on the surfaces of interstellar ice grains. These ice grains condense like hailstones in the freezing conditions of interstellar clouds. Once formed, some molecules escape into the vacuum of space, while others react further — forming increasingly complex molecules.
Remijan noted that, ”Finding these molecules in an interstellar gas cloud means that important building blocks for DNA and amino acids can ‘seed’ newly-formed planets with the chemical precursors for life.” Logical, because the vast interstellar clouds where these molecules are formed are the very same clouds that go on to collapse into stars and planets.
Extraterrestrial amino acids and DNA nucleobases like adenine have been found in meteorites, suggesting that there must be mechanisms occurring in space that create them. If they can be created in space, it’s likely that they can also be seen using telescopes. All we need to do is make sure our telescopes are sensitive enough, look in the right places, and make sure we know what it is we’re looking for.
Read more at Discovery News
Mar 6, 2013
Help in Reading Foreign Languages
Recent research into how we learn is set to help people in their efforts to read a second or foreign language (SFL) more effectively. This will be good news for those struggling to develop linguistic skills in preparation for a move abroad, or to help in understanding foreign language forms, reports, contracts and instructions.
The ability to read a second or foreign language can be of great benefit to academics, business people, politicians, professionals and migrants trying to master an unfamiliar language. Surprisingly, little is known about how an ability to read in another language develops.
To address this, the research funded by the Economic and Social Research Council (ESRC), led by Professor Charles Alderson at Lancaster University, set out to identify the factors that determine how this skill develops in a number of different languages and what factors in people's first language influence progress in reading.
"Research into the diagnosis of the strengths and weaknesses of learners in SFL reading was virtually non-existent, making this research highly innovative. We've followed SFL reading ability development over several years and studied which cognitive and linguistic tasks are most promising for predicting literacy skills in several languages," says Alderson.
"Progress has already been made in understanding first language reading problems -- so we co-operated with psychologists who studied a group of dyslexic readers over many years. We used their instruments to see if the things that predict first-language dyslexia can help in understanding SFL reading and its problems," Alderson continues.
Contrary to common belief, the researchers found that the ability to read in a first language is less important in SFL reading than proficiency in the new language being learned. It was concluded that the size of SFL vocabulary and understanding of text influence one another: vocabulary growth promotes comprehension and comprehension helps a reader to learn new words.
Findings will help language teachers devise more effective strategies for assisting learners and influence the accuracy of tests that predict SFL reading abilities. From data gathered from many countries and languages, the research suggests that reading well in one language has little bearing on the development of reading abilities in another.
Read more at Science Daily
Rare Kissing Octopus Unveiled For the First Time
Scientists are unveiling a rare octopus that has never been on public display before.
And unlike other octopuses, where females have a nasty habit of eating their partners during sex, Larger Pacific Striped Octopuses mate by pressing their beaks and suckers against each other in an intimate embrace.
The beautiful creature can also morph from dark red to black-and-white stripes and spots and can shape-shift from flat to expanded.
The sea dweller will be on display starting today (Mar. 6) at the California Academy of Sciences in San Francisco.
"I'm thrilled that Academy visitors will have the opportunity to view this fascinating animal up close in the aquarium, where they'll see just why its beauty, unique mating technique and social habits are intriguing the cephalopod community," said Richard Ross, a biologist at the California Academy of Sciences, in a statement.
Octopuses are known for their clever antics, including their various means of disguise. For instance, the Atlantic longarm octopus (Macrotritopus defilippi) has been observed mimicking a flounder by swimming forward with its arms trailing behind like flounder fins. That octopus even contorted its soft body so both eyes moved to the left like a flounder's would.
And the mimic octopus (Thaumoctopus mimicus) can shift its color and shape in mind-boggling ways, impersonating everything from sea snakes and giant crabs to stingrays.
The Larger Pacific Striped Octopus was discovered in 1991, but it was largely forgotten for more than a decade. The species is so little studied that it still lacks a formal scientific name.
Unlike females of other octopus species, which die after reproducing once, these females survive for many years and lay several clutches of eggs.
Read more at Discovery News
Mysterious Beasts Discovered Near Loch Ness
A creature with a long, snake-like body, many legs and a voracious appetite was recently discovered near Loch Ness in northern Scotland. No, it wasn’t Nessie, the infamous Loch Ness monster. This newly discovered creature would only terrorize a leaf. It was a sawfly larva, which resembles a caterpillar.
The new species of sawfly was one of eight previously unknown species found by a biological survey of Dundreggan Estate in Scotland, reported Sky News. The others were an aphid, two types of aphid parasites, three fungus gnats and a type of mite.
The new species join a list of more than 2,800 plants and animals cataloged at the 10,000-acre Dundreggan Estate in the County of Inverness, Scotland. The rugged highland landscape of the region is home to 20 mammals, 269 plants, 341 lichens, 92 birds, 354 beetles, 207 moths and 125 sawflies.
Studies related to a reforestation effort by the conservation group Trees for Life cataloged the species, 67 of which are now considered priorities for conservation.
“The surprisingly rich variety of life at Dundreggan highlights the vital importance of conservation work, and of protecting and enhancing habitats across the Highlands,” Trees for Life’s executive director Alan Watson Featherstone told Sky News. “The discoveries are not only demonstrating that the estate is a special site for biological diversity; they are also revealing that there is still much to learn about Scotland’s biodiversity.”
Trees for Life has been working to restore 1,000 square miles (2,590 square kilometers) of the Caledonian Forest on Dundreggan Estate. The Caledonian Forest once covered much of northern Scotland, but has been reduced to less than one percent of its original area. That remnant represents the westernmost extension of the vast boreal forests that came to dominate northern Eurasia after the last Ice Age.
From Discovery News
Europa's Epsom Salt May Indicate Ocean Life
If you’ve ever found yourself wanting to know if there’s extraterrestrial life on Europa, this latest study into the Jovian moon’s icy crust should whet your appetite.
Using data from the powerful Keck II Telescope atop Mauna Kea in Hawai’i, astronomers Mike Brown, of the California Institute of Technology (Caltech), and Kevin Hand, of NASA’s Jet Propulsion Laboratory (JPL), have found strong evidence that suggests chemicals from Europa’s sub-surface ocean are leaking to the surface. In turn, chemicals from the surface are likely cycling into the ocean too. The research has been published in the Astrophysical Journal.
This discovery is profound when considering the life-giving potential of the Jovian moon: it is further evidence that the sub-surface ocean isn’t cut off from the surface; chemicals are cycling into the ocean, potentially supporting a hypothetical Europan biosphere.
“We now have evidence that Europa’s ocean is not isolated — that the ocean and the surface talk to each other and exchange chemicals,” said Brown, in a Caltech press release. “That means that energy might be going into the ocean, which is important in terms of the possibilities for life there. It also means that if you’d like to know what’s in the ocean, you can just go to the surface and scrape some off.”
Using a high-resolution spectroscope attached to Keck’s infrared eye, Brown and Hand detected the spectroscopic fingerprint of magnesium sulfate salt — also known on Earth as epsomite, or, more commonly, Epsom salt — a mineral that could have only been formed through the oxidation of a mystery mineral that originated below the ice in a liquid environment.
“Magnesium should not be on the surface of Europa unless it’s coming from the ocean,” said Brown. “So that means ocean water gets onto the surface, and stuff on the surface presumably gets into the ocean water.”
It has long been known that sulfur ejected from sibling Jovian moon Io rains down on one of Europa’s hemispheres. Being tidally locked — as in, one side of Europa is always facing Jupiter — one hemisphere is always leading the moon’s orbit, while the other hemisphere is always trailing. The sulfur ejected from Io is directed down to Europa’s trailing hemisphere by Jupiter’s magnetic field. This process was first inferred during the Galileo mission to Jupiter from 1989 to 2003.
When analyzing the spectrum of the water ice and other minerals embedded in Europa’s surface, Brown and Hand detected a tiny, previously unnoticed “dip” in the spectrum at low latitudes in the trailing hemisphere.
At Hand’s JPL lab, the pair tested a variety of chemicals, including household products like Drano, to see if they could replicate this dip in the spectrum. The most likely candidate was identified as magnesium sulfate. The chemical is likely formed when ionized sulfur (from Io) makes contact with another chemical on Europa’s surface: magnesium chloride.
Although it may sound like you need a degree in chemistry to understand these chemical reactions, the upshot is straightforward. Although magnesium chloride isn’t easy to detect, magnesium sulfate is. We know where the sulfur came from (Io) and we know that the magnesium could have only come from Europa’s oceans. The authors therefore deduce that Europa’s oceans are salty, just like Earth’s.
“If we’ve learned anything about life on Earth, it’s that where there’s liquid water, there’s generally life,” said Hand. “And of course our ocean is a nice salty ocean. Perhaps Europa’s salty ocean is also a wonderful place for life.”
This is another piece of evidence in favor of a ripe environment for life to thrive in Europa’s oceans. Below the cracked icy surface, a 100-kilometer-deep ocean exists in a liquid state, heated by the tidal stresses inflicted on Europa’s core. Previous observations have detected the presence of liquid water “lakes” in pockets near the surface — indicative that some of that water escapes to the surface, allowing nutrients through the cracks in the ice.
Read more at Discovery News
Mar 5, 2013
Enormous Prehistoric Camel Roamed Arctic
Remains of an extinct, giant camel have been unearthed not in a desert, but in the High Arctic, according to a Nature Communications report.
It’s the farthest north that camel remains have ever been found; this one was on Ellesmere Island. The camel was also likely at least 30 percent bigger than camels are today. In human terms, that would be roughly like an average-sized man standing about 8 feet tall, so these were some big camels.
“These bones represent the first evidence of camels living in the High Arctic region,” co-author Natalia Rybczynski of the Canadian Museum of Nature was quoted as saying in a press release. “It extends the previous range of camels in North America northward by about 1,200 km (746 miles), and suggests that the lineage that gave rise to modern camels may have been originally adapted to living in an Arctic forest environment.”
This particular camel, which was a close relative of a fossil genus called Paracamelus, lived 3.5 million years ago, but camels in general originated about 45 million years ago, during the mid-Eocene epoch, in North America. They dispersed to Eurasia by 7 million years ago using the Bering land bridge that joined what is now Alaska to Russia.
It’s interesting to consider their history, as today we associate camels with desert environments and places like Egypt.
The Arctic camel instead lived in a boreal-type forest. What is now the Arctic was significantly warmer 3.5 million years ago: temperatures were chilly, but far from today’s extreme cold. (Other researchers are studying such past climates to better understand present global warming, but the human effects on climate now are unprecedented.)
The discovery of the ancient camel could help to explain how camels evolved some of their distinctive physical characteristics.
Read more at Discovery News
Gnarly Mummy Head Reveals Medieval Science
In the second century, an ethnically Greek Roman named Galen became doctor to the gladiators. His glimpses into the human body via these warriors' wounds, combined with much more systematic dissections of animals, became the basis of Islamic and European medicine for centuries.
Galen's texts wouldn't be challenged for anatomical supremacy until the Renaissance, when human dissections, often performed in public, surged in popularity. But doctors in medieval Europe weren't as idle as it may seem, as a new analysis of the oldest-known preserved human dissection in Europe reveals.
The gruesome specimen, now in a private collection, consists of a human head and shoulders with the top of the skull and brain removed. Rodent nibbles and insect larvae trails mar the face. The arteries are filled with a red "metal wax" compound that helped preserve the body.
The preparation of the specimen was surprisingly advanced. Radiocarbon dating puts the age of the body between A.D. 1200 and A.D. 1280, an era once considered part of Europe's anti-scientific "Dark Ages." In fact, said study researcher Philippe Charlier, a physician and forensic scientist at University Hospital R. Poincare in France, the new specimen suggests surprising anatomical expertise during this time period.
"It's state-of-the-art," Charlier told LiveScience. "I suppose that the preparator did not do this just one time, but several times, to be so good at this."
Myths of the Middle Ages
Historians in the 1800s referred to the Dark Ages as a time of illiteracy and barbarianism, generally pinpointing the time period as between the fall of the Roman Empire and somewhere in the Middle Ages. To some, the Dark Ages didn't end until the 1400s, at the advent of the Renaissance.
But modern historians see the Middle Ages quite differently. That's because continued scholarship has found that the medieval period wasn't so ignorant after all.
"There was considerable scientific progress in the later Middle Ages, in particular from the 13th century onward," said James Hannam, a historian and author of "The Genesis of Science: How the Christian Middle Ages Launched the Scientific Revolution" (Regnery Publishing, 2011).
For centuries, the advancements of the Middle Ages were forgotten, Hannam told LiveScience. In the 16th and 17th centuries, it became an "intellectual fad," he said, for thinkers to cite ancient Greek and Roman sources rather than scientists of the Middle Ages. In some cases, this involved straight-up fudging. Renaissance mathematician Copernicus, for example, took some of his thinking on the motion of the Earth from Jean Buridan, a French priest who lived between about 1300 and 1358, Hannam said. But Copernicus credited the ancient Roman poet Virgil as his inspiration.
Much of this selective memory stemmed from anti-Catholic feelings by Protestants, who split from the church in the 1500s.
As a result, "there was lots of propaganda about how the Catholic Church had been holding back human progress, and it was great that we were all Protestants now," Hannam said.
Anatomical dark ages?
From this anti-Catholic sentiment arose a great many myths, such as the idea that everyone believed the world to be flat until Christopher Columbus sailed to the Americas. ("They thought nothing of the sort," Hannam said.)
Similarly, Renaissance propagandists spread the rumor that the Medieval Christian church banned autopsy and human dissection, holding back medical progress.
In fact, Hannam said, many societies have banned or limited the carving up of human corpses, from the ancient Greeks and Romans to early Europeans (that's why Galen was stuck dissecting animals and peering into gladiator wounds). But autopsies and dissection were not under a blanket church ban in the Middle Ages. In fact, the church sometimes ordered autopsies, often for the purpose of looking for signs of holiness in the body of a supposedly saintly person.
The first example of one of these "holy autopsies" came in 1308, when nuns conducted a dissection of the body of Chiara of Montefalco, an abbess who would be canonized as a saint in 1881. The nuns reported finding a tiny crucifix in the abbess' heart, as well as three gallstones in her gallbladder, which they saw as symbolic of the Holy Trinity.
Other autopsies were entirely secular. In 1286, an Italian physician conducted autopsies in order to pinpoint the origin of an epidemic, according to Charlier and his colleagues.
Some of the belief that the church frowned on autopsies may have come from a misinterpretation of a papal edict from 1299, in which the Pope forbade the boiling of the bones of dead Crusaders. That practice ensured Crusaders' bones could be shipped back home for burial, but the Pope declared the soldiers should be buried where they fell.
"That was interpreted in the 19th century as actually being a stricture against human dissection, which would have surprised the Pope," Hannam said.
Well-studied head
While more investigation of the body was going on in the Middle Ages than previously realized, the 1200s remain the "dark ages" in the sense that little is known about human anatomical dissections during this time period, Charlier said. When he and his colleagues began examining the head-and-shoulders specimen, they suspected it would be from the 1400s or 1500s.
"We did not think it was so antique," Charlier said.
But radiocarbon dating put the specimen firmly in the 1200s, making it the oldest European anatomical preparation known. Most surprisingly, Charlier said, the veins and arteries are filled with a mixture of beeswax, lime and cinnabar mercury. This would have helped preserve the body as well as give the circulatory system some color, as cinnabar mercury has a red tint.
Thus, the man's body was not simply dissected and tossed away; it was preserved, possibly for continued medical education, Charlier said. The man's identity, however, is forever lost. He could have been a prisoner, an institutionalized person, or perhaps a pauper whose body was never claimed, the researchers write this month in the journal Archives of Medical Science.
Read more at Discovery News
Why First 30 Hours Critical for Killing HIV
News that a baby seems to have been functionally cured of HIV after early intervention has piqued the interest of the medical community worldwide and raised hopes that the procedure could be used to more easily and effectively prevent mother-to-child transmission of the virus. More than 3 million children are currently living with the virus that causes AIDS.
Doctors made the announcement Sunday at the 20th annual Conference on Retroviruses and Opportunistic Infections (CROI) in Atlanta.
The timing of the intervention -- about 30 hours after the baby was born -- may have been the key to success, doctors involved with the case said.
“Early treatment most likely contributed to the outcome of this child, but whether it's the only intervention that allowed this outcome is unclear and requires further study,” said Deborah Persaud, associate professor at Johns Hopkins Children’s Center and author of the report on the baby.
The current procedure for high-risk infants is a regimen of smaller doses of antiretroviral drugs until a blood test confirms HIV when the baby is six weeks old. But knowing that this baby was at high risk, Dr. Hannah Gay decided not to wait for the results, and started the baby on high doses of three standard drugs used for HIV-positive babies as soon as the baby was transferred to her at the University of Mississippi Medical Center. The child is now 2 ½ and has no signs of a functioning virus.
While pediatric AIDS researchers were surprised, and while they reiterate the need for the procedure to be repeated, they said it makes sense that the treatment worked. Persaud offered one hypothesis that she and Dr. Katherine Luzuriaga, an immunologist at the University of Massachusetts, plan on testing in clinical trials in the next few months: The HIV virus usually establishes itself in reservoirs where it stays in a dormant state.
“One can imagine if you can halt virus replication very quickly in an infant through the use of very early antiviral therapy,” Persaud said, “that what we've accomplished with this is that the reservoirs are not given the opportunity to be established.”
Researchers suspect that those viral reservoirs might be different in children than in adults, Luzuriaga added.
“If we start early, we appear to be able to control viral replication extra tightly and we end up with lower amounts of the viral reservoir,” Luzuriaga said.
Early detection and treatment of diseases is something Dr. Josh Petrikin is familiar with as a neonatologist at Children’s Mercy Hospital in Kansas City, Missouri.
“Depending on the diagnosis, it can be critical -- even life-saving” to start treatment early for a variety of diseases, he said.
All babies in the U.S. get some form of newborn screen, testing for up to 70 diseases and conditions depending on the state -- nearly all of which can be affected by an early diagnosis. HIV isn’t included on the screen, but the American College of Obstetricians and Gynecologists recommends screening for pregnant women. In the future, genomic medicine could replace the current screening protocol, Petrikin said. Like many others, he’ll be watching for published work on the potential HIV cure.
Mystery SETI Signal Set Rules of Engagement
In the Fall of 1967, a small team of radio astronomers came face-to-face with a profound mystery that they didn’t want to be true.
At one point they half-seriously thought about destroying the data and staying stone silent. That’s because announcing it to the world could open a Pandora’s box for science, and derail at least one astronomer’s PhD thesis.
A small group of radio astronomers in the United Kingdom had stumbled upon clock-precision radio pulses coming from deep space. The signal was unlike anything ever seen before or even predicted in astronomy. In the absence of a natural explanation, the researchers pondered, for three long weeks, whether this was really a “hello” from an extraterrestrial civilization.
The team faced a cultural black hole with regard to the public response. How do you verify the signal? How do you announce it to the world? Do you send a reply to the stars, or is that too dangerous?
As is so often the case, nature proved cleverer than imagined. This story of mistaken identity, as recently researched by Alan Penny of the University of St. Andrews, set the stage for preparing a SETI protocol for how to announce the discovery of the real deal sometime in the future.
Little Green Men
In July 1967, a novel long-wavelength radio telescope, the largest of its time, began operations at the Mullard Radio Astronomy Observatory in Lord’s Bridge, Cambridgeshire, U.K.
Antony Hewish built the array to study radio emissions from quasars, a newly discovered class of objects that were very distant, energetic and utterly mysterious (today we know they are the black-hole-powered cores of active galaxies). Graduate student Jocelyn Bell took home a 98-foot-long paper strip chart of the telescope’s observations (this was a time before microcomputers and the wide use of digital data processing and storage by astronomers).
While marking up the chart in her attic, she noticed a peculiar source that was flickering like a police car's strobe light. As any good scientist would do, she checked for instrument noise, malfunction or man-made clutter. (In 1963 the all-sky glow of the cosmic microwave background was first suspected to be radio contamination from pigeon droppings in a radio communications antenna.)
Bell realized that the source moved with the Earth’s rotation. The source really was in the sky and somewhere out there in the galaxy!
The telescope’s builder, Hewish, was intrigued and installed a faster strip recorder. The ghostly source reappeared on Nov. 28, and the astronomers recorded pulses that lasted a fraction of a second but recurred precisely every 1.3 seconds. The eye-opener was that the pulses were so short in duration that they had to be coming from something smaller than a few thousand miles across. This might place the source on an Earth-sized planet rather than a star!
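The size limit follows from a light-travel-time argument: a source cannot flicker coherently faster than light can cross it, so the pulse width caps the emitter's diameter. A minimal sketch of the arithmetic, assuming an illustrative pulse width of 16 milliseconds (a stand-in value for "a fraction of a second," not a figure from the article):

```python
# Light-travel-time bound: a region cannot brighten and dim
# coherently faster than light can cross it, so its diameter
# is at most roughly c * pulse_duration.

C_KM_PER_S = 299_792.458  # speed of light in km/s

def max_source_diameter_km(pulse_duration_s: float) -> float:
    """Upper bound on the emitting region's diameter, in km."""
    return C_KM_PER_S * pulse_duration_s

# Assumed ~16 ms pulse width, for illustration only.
size_km = max_source_diameter_km(0.016)
print(f"{size_km:.0f} km")  # about 4,800 km -- smaller than Earth's ~12,700 km diameter
```

Even a generous tenth-of-a-second pulse would bound the source at about 30,000 km, far smaller than any ordinary star, which is why only something planet-sized, or a compact stellar remnant, could fit the data.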
What’s more, the unusually narrow frequency of the signal mimicked the signature of man-made radar transmissions.
In the absence of a physical explanation, the astronomers toyed with the possibility that the signal could be coming from an alien civilization.
The planet hypothesis was easily testable. If the source was orbiting a star, the signals should show a rhythmic frequency shift as the planet approached and then receded from us along a racetrack orbit. This is the high water mark in the team’s taking the E.T. hypothesis seriously. As far back as September, the phenomenon was nicknamed LGM for Little Green Men (just as spurious mechanical malfunctions are dismissed as “gremlins”).
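The test the team applied can be sketched numerically. To first order, a transmitter riding on an orbiting planet would show a pulse period stretched by a factor of (1 + v_r/c) while the planet recedes and compressed while it approaches; the 30 km/s orbital speed below is an assumed Earth-like figure, used only for illustration:

```python
# Non-relativistic Doppler shift of the 1.3-second pulse period
# that a planet-borne transmitter would exhibit over one orbit.

C_KM_PER_S = 299_792.458  # speed of light in km/s
REST_PERIOD_S = 1.3       # the observed pulse period

def observed_period_s(v_radial_km_s: float) -> float:
    """Pulse period seen on Earth; positive v_radial means receding."""
    return REST_PERIOD_S * (1.0 + v_radial_km_s / C_KM_PER_S)

# Assumed Earth-like orbital speed of 30 km/s: the period would
# swing between these two extremes over one orbit.
receding = observed_period_s(+30.0)
approaching = observed_period_s(-30.0)
print(f"{receding:.6f} s receding, {approaching:.6f} s approaching")
```

That is a periodic wobble of roughly one part in ten thousand -- tiny, but well within the timing precision the team already had, and it never appeared in the data.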
What if a planet’s motion was detected? Should the British government be alerted to hold a press conference announcing something as momentous as a “hello” from space aliens? Or should the data be trashed in favor of protecting Earth from being tempted to talk to E.T.? What if a hostile, invading civilization was “fishing” across interstellar space for a response to its beacon?
By the end of December, the LGM hypothesis quickly evaporated after the astronomers learned that the source was not orbiting anything.
The final nail in the coffin came when a second, and then a third pulsating source was found in December. It is very improbable that civilizations at three different stars widely separated in space and time should be simultaneously targeting us at exactly the same radio frequencies.
Finally, in February 1968, Hewish published a paper in the journal Nature that described the phenomenon as a signal from a neutron star — the crushed remnant of a massive star.
The details of the mechanism were not then known — only later did astrophysicists describe the signal as a rotating beacon of radiation from the whirling neutron star, or pulsar.
In late 1968, a different team identified the pulsar as being in the center of the Crab Nebula, a supernova remnant (shown at top).
Dress Rehearsal
This story offers parallels to what might happen when astronomers someday stumble across real evidence for E.T.
The SETI folks have set up and published a protocol for how the receipt of an artificial signal would be confirmed and then openly announced to the world. But as with the pulsars, a strange phenomenon may appear when a new window on the universe opens up.
This could, perhaps, be some kind of future neutrino telescope, or a gravitational-wave detector. A non-SETI team of researchers could pick up a signal that cannot be explained by any known phenomenon. It could take years to rule out every imaginable astrophysical process.
Meanwhile, in the age of the Internet and social media, I think rumor and speculation would spread vastly faster than it could in 1967. Such a detection could not remain as tightly under wraps as it did back then.
Mar 4, 2013
Nature's Blockbuster Identity Thieves
The Serpent and the Rainbow Coloration
Two movies at the top of the U.S. box office charts, "Snitch" and "Identity Thief," involve people taking on false identities. Faking out predators and prey by mimicking another species has been a plot line in nature for millions of years.
A dangerous example of look-alike animals is the copy-cat coloration of the harmless Mexican milk snake (Lampropeltis triangulum annulata, left) and highly venomous Texas coral snake (Micrurus tener, inset). The two species look similar, inhabit the same areas and even share a taste for dining on their fellow serpents. However, although the docile milk snakes are common pets, the coral snake is a relative of the cobras and injects a potent nerve venom, or neurotoxin, with its bite.
How can herpetologists avoid a deadly case of mistaken identity? Both snakes have a combination of red, black and yellow bands, but in a different order. The rhyme, “Red next to yellow, kill a fellow...red next to black, poison lack,” accurately describes the difference in coloration and danger of the two species.
Predators don't have humans' skill at remembering life-saving rhymes, so they tend to avoid both reptiles. This type of mimicry, called Batesian mimicry, allows a harmless species to bluff its way to survival. The odd thing in this case is how a predator could ever learn to avoid red-, black- and yellow-banded serpents if the coral snakes killed any predator that tried to make a meal of them.
Evolution may have embedded the fear of coral snakes in some predators. An experiment published in Science found that some birds will instinctively avoid red-and-yellow ring patterns, although they will readily attack red-and-yellow stripes or green-and-blue rings. Birds aren't the only animals leery of serpents. Primates, including humans, seem to have evolved an ability to rapidly develop a fear response to snakes, according to numerous studies. However, these instinctual fears can be overcome by conditioning or, in the case of humans, education.
All the King's Butterflies
For both monarch (right) and viceroy (inset) butterflies, it's good to be the king. Monarch butterflies (Danaus plexippus) dine on milkweeds and store up toxins from the plants, making the insects foul tasting to birds. The viceroy (Limenitis archippus) was once thought to be a non-toxic usurper of the monarch's unpalatable crown, but it turns out that the viceroy can be just as noxious, according to research published in Nature.
This kind of mimicry, called Müllerian mimicry, allows both butterflies to benefit from each other's nasty flavor.
Both kings of the insect world wear orange and black robes, although regional variants of the viceroy butterfly have adapted their garments to match the monarch relative holding court nearby. In Florida, viceroy coloration matches that of the queen butterfly (Danaus gilippus), whereas in Mexico, viceroys mimic the coloration of the soldier butterfly (Danaus eresimus).
Where Hawks Dare
The vultures wheeling overhead were of little concern to the rodents scurrying from rock to rock, until suddenly one of the birds broke from the slowly circling flock and dove from the sky.
The hapless mouse now in the talons of that bird learned too late that it was no vulture. It was a zone-tailed hawk (Buteo albonotatus).
From the ground, the silhouette of the hawk (right) resembles that of a turkey vulture (Cathartes aura, left). The hawk even holds its wings like a vulture and flies along with vulture flocks. But the two birds differ in their feeding preferences. Vultures rarely eat live prey, so small animals have little to fear from them. The hawk, on the other hand, eats any small animal it can get its claws on.
Dark Fantasies: When They Cross the Line
The cannibal cop case of Officer Gilberto Valle continues to dredge up gruesome details of what prosecutors allege was the 28-year-old’s plan to abduct and eat women. The defense’s response? It was pure fantasy.
While the details are disturbing, so too is the idea of “thought police.” Where, exactly, does U.S. law draw the line between fantasy (no matter how dark) and conspiracy?
“In this country, it is a crime to conspire to commit a crime,” said Joe DeMarco, an Internet lawyer and former head of the cyber crimes unit at the U.S. Attorney’s Office in New York. “So, if we agreed to buy a kilo of cocaine and break it up into little bags and sell it, even if we never did it, it’s a conspiracy that violates the narcotics laws of the U.S., and we could go to jail for years and years...We like to stop a crime before it happens.”
But criminal law doesn’t go after fantasies, said Marcus Asner, a partner at Arnold & Porter, a New York law firm, and former chief of the major crimes unit at the U.S. Attorney’s Office in Manhattan. Criminal law distinguishes between actus reus and mens rea -- Latin terms meaning guilty act and guilty mind -- allowing some leeway for guilty thoughts alone, which is why almost all defendants argue they were simply pretending.
“Suppose you’re at a target range and you don’t realize someone is behind the target, you pull the trigger and someone accidentally gets killed. In that case, you’re not guilty,” Asner said. “It was an accident. On the other hand, if you are shooting at something thinking it is a person, and you intend to kill a person, but in fact the person is not there and no one gets killed, there is no completed crime -- but we call that attempt and attempt is a crime.”
In reality, though, it’s blurry, he said. One key question that often helps define cases is whether the defendant took a substantial step toward the crime.
Take a case he argued in 2005, U.S. v. Joseph Zito. Zito often went online to look for young girls, and eventually he started chatting with an FBI agent he believed to be a young girl.
“The government figures out who he is but doesn’t arrest him until he gets in the car and shows up to meet the girl with sex paraphernalia in the trunk of his car,” Asner said. “We were very careful to make sure the guy did take a substantial step.”
One of the arguments against Valle is the specificity he used in his communication.
“This is someone who hasn’t just idly talked about cannibalism, but he’s named specific people,” DeMarco said.
Further complicating the “I was just pretending” defense is the Internet.
“The Internet allows people who have these niche interests to find each other in ways that are efficient, global, and, if you want, anonymous,” DeMarco said. “It enables people with disparate interests -- whether it be stamp collecting or cannibalism -- to find each other. And that can validate their interest, so they don’t feel alone or odd or unusual.”
In a First, Baby 'Cured' of HIV
Researchers say they have, for the first time, cured a baby born with HIV -- a development that could help improve treatment of babies infected at birth.
There is an important technical nuance: researchers insist on calling it a "functional cure" rather than a complete cure.
That is because the virus is not totally eradicated. Still, its presence is reduced to such a low level that the body can control it without the need for standard drug treatment.
The only fully cured AIDS patient recognized worldwide is the so-called "Berlin patient," American Timothy Brown. He is considered cured of HIV and leukemia five years after receiving bone marrow transplants from a rare donor naturally resistant to HIV. The marrow transplant was aimed at treating his leukemia.
But in this new case, the baby girl received nothing more invasive or complex than commonly available antiretroviral drugs. The difference, however, was the dosage and the timing: starting less than 30 hours after her birth.
It is that kind of aggressive treatment that likely yielded the "functional cure," researchers reported Sunday at the 20th annual Conference on Retroviruses and Opportunistic Infections (CROI) in Atlanta, Georgia.
What researchers call dormant HIV-infected cells often re-start infections in HIV-infected patients within a few weeks after antiretroviral treatment stops, forcing most people who have tested HIV-positive to stay on the drugs for life or risk the illness progressing.
"Prompt antiviral therapy in newborns that begins within days of exposure may help infants clear the virus and achieve long-term remission without lifelong treatment by preventing such viral hideouts from forming in the first place," said lead researcher Deborah Persaud, of Johns Hopkins Children's Center in Baltimore, Maryland.
It appears to be the first time this was achieved in a baby, she said.
The baby was infected by her HIV-positive mother, and her treatment with therapeutic doses of antiretroviral drugs began even before her own positive blood test came back.
The typical protocol for high-risk newborns is to give them smaller doses of the drugs until results from an HIV blood test are available at six weeks of age.
Tests showed the baby's viral count steadily declined until it could no longer be detected 29 days after her birth.
The child was given follow-up treatment with antiretrovirals until 18 months, at which point doctors lost contact with her for 10 months. During that period she was not taking antiretrovirals.
Researchers then were able to do a series of blood tests -- and none gave an HIV-positive result.
Natural viral suppression without treatment is an exceedingly rare occurrence, seen in fewer than half a percent of HIV-infected adults, known as "elite controllers," whose immune systems are able to rein in viral replication and keep the virus at clinically undetectable levels.
Experts on HIV have long wanted to help all HIV patients achieve elite-controller status. Researchers say this new case offers hope as a game-changer, because it suggests prompt antiretroviral therapy in newborns indeed can do that.
Still, they said, their first priority is learning how to stop transmission of the virus from mother to newborn. ARV treatments of mothers currently stop transmission to newborns in 98 percent of cases, they say.
Read more at Discovery News
There is an important technical nuance: researchers insist on calling it a "functional cure" rather than a complete cure.
That is because the virus is not totally eradicated. Still, its presence is reduced to such a low level that a body can control it without the need for standard drug treatment.
The only fully cured AIDS patient recognized worldwide is the so-called "Berlin patient," American Timothy Brown. He is considered cured of HIV and leukemia five years after receiving bone marrow transplants from a rare donor naturally resistant to HIV. The marrow transplant was aimed at treating his leukemia.
But in this new case, the baby girl received nothing more invasive or complex than commonly available antiretroviral drugs. The difference, however, was the dosage and the timing: starting less than 30 hours after her birth.
It is that kind of aggressive treatment that likely yielded the "functional cure," researchers reported Sunday at the 20th annual Conference on Retroviruses and Opportunistic Infections (CROI) in Atlanta, Georgia.
What researchers call dormant HIV-infected cells often re-start infections in HIV-infected patients within a few weeks after antiretroviral treatment stops, forcing most people who have tested HIV-positive to stay on the drugs for life or risk the illness progressing.
"Prompt antiviral therapy in newborns that begins within days of exposure may help infants clear the virus and achieve long-term remission without lifelong treatment by preventing such viral hideouts from forming in the first place," said lead researcher Deborah Persaud, of Johns Hopkins Children's Center in Baltimore, Maryland.
It appears to be the first time this was achieved in a baby, she said.
The baby was infected by her HIV-positive mother, and her treatment with therapeutic doses of antiretroviral drugs began even before her own positive blood test came back.
The typical protocol for high-risk newborns is to give them smaller doses of the drugs until results from an HIV blood test are available at six weeks of age.
Tests showed the baby's viral count steadily declined until it could no longer be detected 29 days after her birth.
The child was given follow-up treatment with antiretrovirals until 18 months, at which point doctors lost contact with her for 10 months. During that period she was not taking antiretrovirals.
Researchers then were able to do a series of blood tests -- and none gave an HIV-positive result.
Natural viral suppression without treatment is an exceedingly rare occurrence, seen in fewer than half a percent of HIV-infected adults, known as "elite controllers," whose immune systems are able to rein in viral replication and keep the virus at clinically undetectable levels.
Experts on HIV have long wanted to help all HIV patients achieve elite-controller status. Researchers say this new case offers hope as a game-changer, because it suggests prompt antiretroviral therapy in newborns indeed can do that.
Still, they said, their first priority is learning how to stop transmission of the virus from mother to newborn. ARV treatments of mothers currently stop transmission to newborns in 98 percent of cases, they say.
Read more at Discovery News
ADHD Doesn't Go Away
Picture someone with attention deficit hyperactivity disorder, or ADHD, and you probably conjure up an image of an elementary school-age boy. But an analysis of data from the first large, population-based study to follow kids through to adulthood shows that the neurobehavioral disorder rarely goes away with age.
Indeed, as ADHD patients make the transition to adulthood, the issues they face often multiply: they are more likely to have other psychiatric disorders and are at greater risk of suicide, reports a new study published online today in Pediatrics.
In fact, researchers found that only 37.5 percent of the adults who had been diagnosed with the disorder as children were free of other psychiatric disorders, including alcohol and drug dependence, in their late 20s.
Very few of the children with ADHD were still being treated as adults -- although neuropsychiatric interviews confirmed that 29 percent still had it.
“I think there has been a view that ADHD is a childhood disorder, and it’s only relatively recently that people have been trained to detect it in adults,” said Nathan Blum, a developmental-behavioral pediatrician at Children’s Hospital in Philadelphia, who was not involved in the study.
Among the adults who’d had ADHD as a child, 57 percent had at least one other psychiatric disorder, compared with 35 percent of the controls. Just under 2 percent had died; of those seven deaths, three were suicides. Of the controls, less than 1 percent had died; of those 37 deaths, five were suicides. And 2.7 percent of the ADHD group were incarcerated at the time of recruitment for the study.
“It’s deeply concerning and indicative of the nature and severity of the condition,” said study author Dr. William Barbaresi of Boston Children’s Hospital. “ADHD tends to be trivialized still. (These statistics) are a direct indicator of why we have to pay attention to it and its associated conditions.”
ADHD affects about 7 percent of all children. This study followed all children in Rochester, Minn., born between 1976 and 1982, and whose families allowed access to medical records. Of those 5,718 kids, 367 were diagnosed with ADHD.
The analysis of the data is ongoing; the authors have published previous reports on long-term school outcomes and health care costs of childhood cases of ADHD, among many others. And because the population of Rochester, home to the Mayo Clinic, is largely white and middle-class, researchers believe the outcomes may be more troubling if a more diverse group were studied.
Plus, the 37.5 percent who were still alive, not in jail, and free of other psychiatric disorders weren’t necessarily leading blissful lives.
The analysis published today did not look for a host of other issues associated with adult ADHD, such as job retention, performance, and long-term relationship status, Barbaresi said. That’s still to be analyzed. Other forthcoming studies might parse out the difference in adult outcomes in kids who had been treated for ADHD, vs. those who hadn’t.
Adult treatment of ADHD can be effective, Barbaresi said, but it’s more complicated than treating a child. Most children, he said, gradually stop taking stimulant medication.
“If an adult comes to a physician at age 30 having had childhood ADHD, wondering whether or not it’s still an issue because of problems they’re facing, it’s much more challenging because they likely have other issues by then as well, other mental health problems or substance abuse issues,” Barbaresi said. “A doctor would be reluctant to prescribe stimulant medication to someone who has another mental health problem.”
And while stimulant medications can be timed so that a child can function during a school day, there are limitations for adults, Blum said. The medication starts working shortly after it is taken, then wears off; 24-hour formulations are less effective.
“We need to start building a treatment model that treats it as a chronic illness like diabetes, and work from the beginning to keep kids in treatment,” Barbaresi said. “I hope this will be a wake-up call about the need to rethink this condition and to stop the sensationalist perspectives of focusing on the evils of stimulant medications.”
Read more at Discovery News
Mar 3, 2013
Short Algorithm, Long-Range Consequences
In the last decade, theoretical computer science has seen remarkable progress on the problem of solving graph Laplacians -- the esoteric name for a calculation with hordes of familiar applications in scheduling, image processing, online product recommendation, network analysis, and scientific computing, to name just a few. Only in 2004 did researchers first propose an algorithm that solved graph Laplacians in "nearly linear time," meaning that the algorithm's running time grows almost in direct proportion to the size of the problem.
At this year's ACM Symposium on the Theory of Computing, MIT researchers will present a new algorithm for solving graph Laplacians that is not only faster than its predecessors, but also drastically simpler. "The 2004 paper required fundamental innovations in multiple branches of mathematics and computer science, but it ended up being split into three papers that I think were 130 pages in aggregate," says Jonathan Kelner, an associate professor of applied mathematics at MIT who led the new research. "We were able to replace it with something that would fit on a blackboard."
The MIT researchers -- Kelner; Lorenzo Orecchia, an instructor in applied mathematics; and Kelner's students Aaron Sidford and Zeyuan Zhu -- believe that the simplicity of their algorithm should make it both faster and easier to implement in software than its predecessors. But just as important is the simplicity of their conceptual analysis, which, they argue, should make their result much easier to generalize to other contexts.
Overcoming resistance
A graph Laplacian is a matrix -- a big grid of numbers -- that describes a graph, a mathematical abstraction common in computer science. A graph is any collection of nodes, usually depicted as circles, and edges, depicted as lines that connect the nodes. In a logistics problem, the nodes might represent tasks to be performed, while in an online recommendation engine, they might represent titles of movies.
In many graphs, the edges are "weighted," meaning that they have different numbers associated with them. Those numbers could represent the cost -- in time, money or energy -- of moving from one step to another in a complex logistical operation, or they could represent the strength of the correlations between the movie preferences of customers of an online video service.
The Laplacian of a graph encodes the weights of all its edges, but it can also be interpreted as a system of linear equations. Solving that system is crucial to many techniques for analyzing graphs.
One intuitive way to think about graph Laplacians is to imagine the graph as a big electrical circuit and the edges as resistors. The weights of the edges describe the resistance of the resistors; solving the Laplacian tells you how much current would flow between any two points in the graph.
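The circuit picture can be made concrete in a few lines. The sketch below is illustrative only -- the toy graph, the weights, and the choice of where current enters and leaves are all invented for the example. It builds a Laplacian from a weighted edge list and solves it for node voltages; one node is "grounded" because the Laplacian itself is singular (adding a constant to every voltage changes nothing).

```python
import numpy as np

# Toy weighted graph: 4 nodes, edges given as (u, v, weight),
# where the weight plays the role of a resistor's conductance.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 0.5), (1, 3, 1.0)]
n = 4

# Build the graph Laplacian: sum of incident weights on the diagonal,
# minus the edge weight off the diagonal.
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w
    L[v, v] += w
    L[u, v] -= w
    L[v, u] -= w

# Inject one unit of current at node 0 and extract it at node 3.
b = np.zeros(n)
b[0], b[3] = 1.0, -1.0

# The Laplacian is singular (constant vectors are in its null space),
# so ground node 3 and solve the reduced system on the remaining nodes.
x = np.zeros(n)
x[:3] = np.linalg.solve(L[:3, :3], b[:3])

# x now holds node voltages; the current on an edge is
# weight * (voltage drop across the edge).
print(x)
```

Because the injected currents sum to zero, the grounded solution satisfies the full system L x = b exactly, not just the reduced one.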
Earlier approaches to solving graph Laplacians considered a series of ever-simpler approximations of the graph of interest. Solving the simplest provided a good approximation of the next simplest, which provided a good approximation of the next simplest, and so on. But the rules for constructing the sequence of graphs could get very complex, and proving that the solution of the simplest was a good approximation of the most complex required considerable mathematical ingenuity.
Looping back
The MIT researchers' approach is much more straightforward. The first thing they do is find a "spanning tree" for the graph. A tree is a particular kind of graph that has no closed loops. A family tree is a familiar example; there, a loop might mean that someone was both parent and sibling to the same person. A spanning tree of a graph is a tree that touches all of the graph's nodes but dispenses with the edges that create loops. Efficient algorithms for constructing spanning trees are well established.
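A breadth-first search gives one simple way to extract a spanning tree. The sketch below uses a made-up adjacency list; note that the MIT work reportedly chooses its trees far more carefully (so-called low-stretch trees), which this sketch does not attempt.

```python
from collections import deque

# Hypothetical adjacency list for a small graph.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}

def bfs_spanning_tree(adj, root=0):
    """Return the edges of a BFS spanning tree: every node is reached,
    and any edge that would close a loop is simply skipped."""
    seen = {root}
    tree = []
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

print(bfs_spanning_tree(adj))  # → [(0, 1), (0, 3), (1, 2)]
```

A spanning tree of an n-node connected graph always has exactly n - 1 edges, which the example's three edges confirm.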
The spanning tree in hand, the MIT algorithm then adds back just one of the missing edges, creating a loop. A loop means that two nodes are connected by two different paths; on the circuit analogy, the voltage would have to be the same across both paths. So the algorithm sticks in values for current flow that balance the loop. Then it adds back another missing edge and rebalances.
In even a simple graph, values that balance one loop could imbalance another one. But the MIT researchers showed that, remarkably, this simple, repetitive process of adding edges and rebalancing will converge on the solution of the graph Laplacian. Nor did the demonstration of that convergence require sophisticated mathematics: "Once you find the right way of thinking about the problem, everything just falls into place," Kelner explains.
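The add-an-edge-and-rebalance loop can be sketched on the same toy circuit. This is a cartoon of the idea, not the MIT algorithm itself (which picks the tree and the order of cycle updates much more carefully): route all the current along the spanning tree first, then repeatedly pick an off-tree edge, measure the net voltage drop around the loop it closes (Kirchhoff's voltage law says it should be zero), and cancel that drop by pushing a compensating flow around the loop.

```python
import numpy as np

# Toy circuit: 4 nodes, resistances r per edge, current demands b.
# Edges 0-2 form the spanning tree (a path 0-1-2-3); edges 3 and 4
# are the off-tree edges that each close one loop.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
r = np.array([1.0, 0.5, 1.0, 2.0, 1.0])   # resistances (illustrative)
b = np.array([1.0, 0.0, 0.0, -1.0])       # inject at node 0, extract at 3

# Initial flow: route all demand along the tree edges only.
f = np.array([1.0, 1.0, 1.0, 0.0, 0.0])

# Each off-tree edge's loop, listed as (edge index, sign), oriented so
# that a positive loop flow traverses the off-tree edge forward.
cycles = {
    3: [(3, +1), (2, -1), (1, -1), (0, -1)],  # 0-3 back via 3-2-1-0
    4: [(4, +1), (2, -1), (1, -1)],           # 1-3 back via 3-2-1
}

# Repeatedly pick a loop, compute its net voltage drop, and rebalance.
rng = np.random.default_rng(0)
for _ in range(1000):
    cyc = cycles[rng.choice([3, 4])]
    drop = sum(s * r[i] * f[i] for i, s in cyc)   # net drop around loop
    delta = drop / sum(r[i] for i, _ in cyc)      # flow that cancels it
    for i, s in cyc:
        f[i] -= s * delta

print(f)  # converges to the true electrical flow
```

Each rebalancing step zeroes one loop exactly but can disturb the other, which is precisely the back-and-forth the article describes; pushing flow around a closed loop never violates the current demands at the nodes, so conservation is preserved throughout.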
Paradigm shift
Daniel Spielman, a professor of applied mathematics and computer science at Yale University, was Kelner's thesis advisor and one of two co-authors of the 2004 paper. According to Spielman, his algorithm solved Laplacians in nearly linear time "on problems of astronomical size that you will never ever encounter unless it's a much bigger universe than we know. Jon and colleagues' algorithm is actually a practical one."
Spielman points out that in 2010, researchers at Carnegie Mellon University also presented a practical algorithm for solving Laplacians. Theoretical analysis shows that the MIT algorithm should be somewhat faster, but "the strange reality of all these things is, you do a lot of analysis to make sure that everything works, but you sometimes get unusually lucky, or unusually unlucky, when you implement them. So we'll have to wait to see which really is the case."
Read more at Science Daily
Misplaced Molecules: New Insights Into the Causes of Dementia
A shortage of a protein called TDP-43 caused muscle wasting and stunted nerve cells in zebrafish. This finding supports the idea that malfunction of this protein plays a decisive role in amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). The study is published in the Proceedings of the National Academy of Sciences (PNAS).
ALS is an incurable neurological disease that manifests as rapidly progressing muscle wasting. Both the limbs and the respiratory muscles are affected, leading to impaired mobility and breathing problems. Patients commonly die within a few years after symptoms emerge. In rare cases, of which the British physicist Stephen Hawking is the most notable, patients live with the disease for a long time. In Germany, an estimated 150,000 people suffer from ALS -- roughly 1 in 500.
Proteins gone astray
Over the last few years, there has been increasing evidence that ALS and FTD -- a form of dementia associated with changes in personality and social behaviour -- may have similar or even the same origins. The symptoms overlap and common factors have also been found at the microscopic level. In many cases, particles accumulate and form clumps in the patient's nerve cells: this applies particularly to the TDP-43 protein.
"Normally, this protein is located in the cell nucleus and is involved in processing genetic information," explains molecular biologist Dr. Bettina Schmid, who works at the DZNE Munich site and at LMU. "However, in cases of disease, TDP-43 accumulates outside the nucleus forming aggregates." Schmid explains that it is not yet clear whether these clumps are harmful. "However, the protein's normal function is clearly disrupted. It no longer reaches the nucleus to perform its actual task. There seems to be a relationship between this malfunction and the disease."
Studies on zebrafish
However, until now little was known about the normal function of TDP-43. What happens when the protein is lost? To answer this question, the team led by Bettina Schmid cooperated with the research group of Prof. Christian Haass to study the larvae of specially bred zebrafish whose genome had been modified so that they produced no TDP-43. The result: the young fish showed massive muscle wasting and died a few days after hatching. Moreover, the extensions of the nerve cells that control the muscles were abnormal.
"To some extent, these are symptoms typical of ALS and FTD. Therefore, a loss of function of TDP-43 does seem to play a critical role in the disease," says Haass, Site Speaker of the DZNE Munich Site and chair of Metabolic Biochemistry at LMU.
Read more at Science Daily