Millions of people with multiple sclerosis, Parkinson's disease, muscular dystrophy, spinal cord injuries or amputations could soon interact with their computers and surroundings using just their eyes, thanks to a new device that costs less than £40.
Built from off-the-shelf materials, the new device works out exactly where a person is looking by tracking their eye movements, allowing them to control a cursor on a screen just like a normal computer mouse.
The technology comprises an eye-tracking device and "smart" software, presented on July 13 in IOP Publishing's Journal of Neural Engineering. Researchers from Imperial College London demonstrated its functionality by getting a group of people to play the classic computer game Pong without any kind of handset. In addition, users were able to browse the web and write emails "hands-off."
A video of somebody using the device to play Pong can be viewed here (https://www.youtube.com/watch?v=zapK5wvYU84).
The GT3D device is made up of two fast video game console cameras, costing less than £20 each, attached, outside the line of vision, to a pair of glasses that cost just £3. The cameras constantly take pictures of the eye, working out where the pupil is pointing, and from this the researchers use a set of calibrations to work out exactly where the person is looking on the screen.
Even more impressively, the researchers can also use more detailed calibrations to work out the 3D gaze of the subjects -- in other words, how far into the distance they are looking. The team believes this could allow people to control an electronic wheelchair simply by looking where they want to go, or to control a robotic prosthetic arm.
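To make the idea of such a calibration concrete, the sketch below shows how the simpler 2D case could work in principle: fit a polynomial mapping from pupil coordinates to screen coordinates using a short calibration grid, then reuse that mapping at run time. This is only an illustrative sketch, not the GT3D team's actual algorithm, and the pupil positions in the example are invented.

```python
# Illustrative 2D gaze-calibration sketch (assumed approach, not GT3D code).
# Assumes pupil centres (px, py) have already been extracted from the camera
# frames while the user looked at known calibration targets on the screen.
import numpy as np

def fit_calibration(pupil_xy, screen_xy):
    """Least-squares fit of a quadratic map from pupil to screen coordinates."""
    px, py = pupil_xy[:, 0], pupil_xy[:, 1]
    # Polynomial terms: 1, px, py, px*py, px^2, py^2
    A = np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs  # shape (6, 2): one column of coefficients per screen axis

def gaze_point(coeffs, px, py):
    """Map a new pupil position to an estimated on-screen gaze point."""
    terms = np.array([1.0, px, py, px * py, px**2, py**2])
    return terms @ coeffs

# Hypothetical 9-point calibration: pupil positions (made up) vs. screen targets.
pupil = np.array([[0.30, 0.40], [0.50, 0.41], [0.70, 0.42],
                  [0.31, 0.55], [0.51, 0.56], [0.71, 0.57],
                  [0.32, 0.70], [0.52, 0.71], [0.72, 0.72]])
screen = np.array([[0, 0], [640, 0], [1280, 0],
                   [0, 400], [640, 400], [1280, 400],
                   [0, 800], [640, 800], [1280, 800]], dtype=float)

C = fit_calibration(pupil, screen)
print(gaze_point(C, 0.51, 0.56))  # roughly (640, 400)
```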
To demonstrate the effectiveness of the eye-tracker, the researchers got subjects to play the video game Pong. In this game, the subject used his or her eyes to move a bat to hit a ball that was bouncing around the screen -- a feat that is difficult to accomplish with other read-out mechanisms such as brain waves (EEG).
Dr Aldo Faisal, Lecturer in Neurotechnology at Imperial's Department of Bioengineering and Department of Computing, is confident in the ability to use eye movements as a control signal, given that six of the subjects, who had never used their eyes as a control input before, could still register respectable scores within 20 per cent of those of able-bodied users after just 10 minutes of using the device for the first time.
The commercially viable device uses just one watt of power and can transmit data wirelessly over Wi-Fi or via USB into any Windows or Linux computer.
The GT3D system has also solved the 'Midas touch problem', allowing users to click on an item on the screen using their eyes, instead of a mouse button.
This problem has previously been tackled by having users stare at an icon for a prolonged period or blink; however, blinking is part of our natural behaviour and happens unintentionally. Instead, the researchers calibrated the system so that a simple wink represents a mouse click, since a wink, unlike a blink, only occurs voluntarily.
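As a rough illustration of why a wink works as a click signal where a blink does not, the toy function below distinguishes the two from a per-eye "openness" value. This is an assumed, simplified scheme, not the published system's logic.

```python
# Toy wink-vs-blink classifier (assumed logic, not the GT3D implementation).
# Assumes the tracker provides an openness value in [0, 1] for each eye.
def classify_eye_event(left_open, right_open, closed_threshold=0.2):
    left_closed = left_open < closed_threshold
    right_closed = right_open < closed_threshold
    if left_closed and right_closed:
        return "blink: ignored (involuntary)"
    if left_closed != right_closed:
        return "wink: register mouse click (voluntary)"
    return "eyes open: no action"

print(classify_eye_event(0.05, 0.90))  # wink  -> click
print(classify_eye_event(0.05, 0.10))  # blink -> ignored
```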
Dr Faisal said: "Crucially, we have achieved two things: we have built a 3D eye tracking system hundreds of times cheaper than commercial systems and used it to build a real-time brain machine interface that allows patients to interact more smoothly and more quickly than existing invasive technologies that are tens of thousands of times more expensive.
Read more at Science Daily
Jul 14, 2012
In the Mind of the Psychopath
Ice cold, hard and emotionless. Such is the psychopath -- we think. Until we get a glimpse behind the mask. For decades, researchers have been almost unanimous in agreeing with the popular perception that psychopaths are made a certain way and will forever remain that way.
But Aina Gullhaugen, a researcher at the Norwegian University of Science and Technology, disagrees.
Nature or nurture?
"A lot has happened over the past few years in psychiatry," Gullhaugen says. "But the discipline is still characterized by the attitude that a certain group of people are put together in such a way that they cannot be treated. There is little in the textbooks that says that these people have had a hard life. Until now the focus has been directed at their antisocial behaviour and lack of empathy. And the explanation for this is based on biology, instead of looking at what these people have experienced."
Through her experience as a psychologist, Gullhaugen has found, in fact, that there is a discrepancy between the formal characteristics of psychopathy and what she has experienced in meeting psychopaths.
Gullhaugen reasoned that if psychopathic criminals were as hardened as traditional descriptions would have it, you would not find vulnerabilities and psychiatric disorders among them. She wondered whether we have been asking the wrong questions and studying the issue in the wrong way.
With the same intense desire to get behind the mask as Clarice had in her meeting with Hannibal Lecter in the movie "The Silence of the Lambs," Gullhaugen has burrowed into the minds of psychopaths.
Hannibal's pain
"Hannibal Lecter is perhaps the most famous psychopath from the fictional world," says Gullhaugen. "His character in the books and movies is an excellent illustration of the cold mask some have thought that psychopaths have. Because it is a mask. Inside the head of the cannibal and serial killer were tenderness and pain, deep emotions and empathy."
Author Thomas Harris is said to have based his Hannibal figure on real life serial killers, after he conducted research at the FBI's Behavioral Science Unit. Harris showed how Hannibal's behaviour was influenced by the psychological damage that occurred during his childhood. Such things, Gullhaugen says, can be treated.
Hannibal Lecter is fiction. But Gullhaugen has immersed herself in the scientific literature and made a comparison between the figure of Hannibal and individual studies of offenders who demonstrate a high degree of psychopathy. "I have gone through all the studies that have been published internationally over the past 30 years," she says. "I have also conducted a study of the psychological needs of Norwegian high-security and detention prisoners."
Every published study of these so-called worst offenders shows that they all have a background that includes grotesque physical and/or psychological abuse during childhood. The result of Gullhaugen's efforts can be found in her article, "Looking for the Hannibal behind the Cannibal: Current status of case research."
"Without exception, these people have been injured in the company of their caregivers," she says. "And many of the descriptions made it clear that their later ruthlessness was an attempt to address this damage, but in an inappropriate or bad way."
Incomplete surveys
Gullhaugen has wondered about the methods that have been used to study psychopaths. "One way to examine emotional reactions is to show people pictures of different situations, and then study the response," she says.
"First the subject is often shown benign or neutral images, where you could be expected to be happy and relaxed. The physical reaction is a calm pulse, no sweat on the skin and the like. Then, suddenly there is a picture of a gun aimed at you. Most people will react to this, right? But when psychopaths do not respond in the expected way, we conclude that they have a biological defect," she says.
Gullhaugen wants us to put ourselves into the everyday lives that psychopaths often come from. Criminal gangs, perhaps, or a tough upbringing in which the need to be unaffected and strong is mandatory and always present. Perhaps guns are a part of everyday life. Perhaps a cold and almost emotionless reaction is the only rational reaction, seen from their perspective, and is what they have got used to.
"I found that research on the psychopath's emotions was incomplete," she says. "We need other tests and instruments to measure the feelings of these people, if there are feelings to measure."
She has now done exactly that. While Gullhaugen has not replaced conventional survey methods, such as a diagnostic interview, use of a checklist for psychopathy and neuropsychological tests, she has added more methods to see if she might get other results. To this end, she has used questionnaires that measure a number of interpersonal and emotional aspects of Norwegian high-security and detention prisoners.
The results suggest that the so-called gold standard for the study of psychopathy should at best be changed, and at worst, be replaced.
Need and want closeness
"There is no doubt that these are people with what we call relational needs," says Gullhaugen. "In the aforementioned case descriptions and my own study, it became clear that they both have the desire and the need for close relationships, and that they care. At the same time it is equally clear that they find it almost impossible to achieve and maintain such relationships."
Gullhaugen's study demonstrated that where the most common survey methods showed individuals reporting good self-esteem, low depression and a sense of general wellbeing, other methods showed that psychopaths suffer from underlying psychological pain.
"Isn't it strange that someone who claims to have a great life can also answer that his or her life experiences have had a catastrophic or tremendous influence on him, or...?"
Gullhaugen's question is rhetorical, of course. She explains that in some cases, the interviewees were people who almost didn't dare to answer questions for fear that someone in prison would get access to the information.
"They may have a vested interest in appearing a certain way," she says. "At the same time they reveal a little bit of what is behind the mask when they answer the various questions in private, without any of us present."
Extreme parenting style
One of the features that characterizes criminal psychopaths is an abnormal upbringing, as they describe it. Gullhaugen's research reveals that psychopaths as children have experienced an upbringing, or parenting style, that is quite different from the so-called normal part of the population.
"If you think of a scale of parental care that goes from nothing, the absence of care, all the way to the totally obsessive parent, most parents are in the 'middle,' " explains Gullhaugen. "The same applies to how we feel about parental control. On a scale from 'not caring' all the way to 'totally controlling', most have parents who end up in the middle."
"But it is different for psychopaths. More than half of the psychopaths I have studied reported that they had been exposed to a parenting style that could be placed on either extreme of these scales. Either they lived in a situation where no one cared, where the child is subjected to total control and must be submissive, or the child has been subjected to a neglectful parenting style."
This, says the researcher, is an example of how the psychopath's behaviour is not unrelated to his or her life experiences. And it provides the basis for a more nuanced picture of these people's feelings, and a starting point for treatment.
"The attachment patterns show that these children feel rejected. To a much greater degree than in the general population, their parents have an authoritarian style that compromises the child's own will and independence. This is something that can cause the psychopath to later act ruthlessly to others, more or less consciously to get what he or she needs. This kind of relationship -- or the total absence of a caregiver, pure neglect -- is a part of the picture that can be drawn of the psychopath's upbringing," the researcher says.
Gullhaugen says that she has not studied enough cases to draw any final conclusions about this, but that three other studies show the same tendency.
Not exactly a birthday present
"It's hard to say exactly what has created the psychopath's rock hard mask," says Gullhaugen. "But as others have said before me: You do not get a personality disorder for your eighteenth birthday present. I have seen what children and young people with these kinds of characteristics experience and what it is like for them, through my work in child and adolescent psychiatry. Of course, not all reckless behaviour is explained by a bad upbringing, but we do not inherit everything either. That is my main point."
Gullhaugen reminds us that biology and environment mutually influence each other. The personality disorder that results can be seen as the sum total of a number of biological and psychological factors.
"The combination of the individual's biological foundation, temperament, personality, and vulnerability are important components," said Gullhaugen. "The individual's relational vulnerability is the very essence of the personality disorder, in my opinion."
"I see that these people are apprehensive when they meet me. I see a clear vulnerability in them through behaviour that betrays insecurity and discomfort on the inside. And now we have research that confirms the hurt, suffering and nuances of their feelings."
Almost like you and me
Gullhaugen found few significant differences between psychopaths and her "normal" group when, in her own study of Norwegian prisoners, she examined the ability to experience a wide range of emotions. The differences that she found showed that psychopaths generally experience more negative emotions, such as irritability, hostility, and shame. But they do not feel guilty.
"They have more primitive emotions such as anger and anxiety," says Gullhaugen. "This is what I found in the studies I conducted of strong psychopathic individuals who had committed serious criminal acts."
When it comes to more positive feelings, however, there was little or no difference, suggesting that the psychopath's emotional life is more nuanced than first thought.
Read more at Science Daily
Jul 13, 2012
Pluto: Not a Planet; Still Very Interesting
With the discovery of a fifth moon orbiting Pluto came the inevitable protests about the little world's planetary status: Can it be called a planet yet?
Sorry Pluto fans, this latest revelation can't supersize Pluto's standing in the Planetary Rotary Club, but it does provide a fascinating glimpse at the dwarf planet's history.
Caltech planetary astronomy professor Mike Brown, discoverer of over 100 minor planetary bodies in the Kuiper Belt and often credited with being responsible for Pluto's "demotion," said that although having a fifth moon may not affect its planetary status, "it does mean something."
"It's a really good reminder that you don't have to be a planet to be interesting," Brown told Discovery News.
Brown led the Palomar Observatory team that discovered the distant world Eris in 2005. At the time, Eris was believed to be the tenth planet of the solar system, orbiting further out than Pluto.
But in an effort to define what a planet actually is -- spurred on by the expectation that more small worlds would likely be found in the Kuiper Belt -- the International Astronomical Union (IAU) set out its controversial criteria for planetary status in 2006.
Sadly, because Pluto crosses the orbit of Neptune, it cannot "clear its own orbit" and is therefore a minor body, along with Eris and the other small worlds in the Kuiper Belt that the IAU now classifies as "dwarf planets."
But now, with the continuing discoveries of small moons orbiting Pluto, there have been calls to overturn the IAU's ruling.
"All the people clamoring about whether it means Pluto might be a planet are essentially saying: 'See? Pluto is interesting and complex thus shouldn't it be a planet?'" Brown added, "and the answer is: 'No; the solar system is full of interesting and complex things that are not planets.'
"Titan is bizarre with methane lakes; Europa has huge below ground oceans; Uranian satellites once had ice volcanoes. But they're not planets, they are just a subset of the cool things that the universe does in our backyard."
One of the "cool" things to come from the discovery of the fifth moon (nicknamed "P5") is the question of how did it form? Was Pluto hit by a large object long ago in the solar system's history, generating the debris we see as a system of moons?
"That 5th moon really hammers home the idea that Pluto was, well, hammered home at some point," Brown said.
Read more at Discovery News
Most Complete Pre-Human Skeleton Found
South African scientists said Thursday they had uncovered the most complete skeleton yet of an ancient relative of man, hidden in a rock excavated from an archaeological site three years ago.
The remains of a juvenile hominid skeleton, of the Australopithecus (southern ape) sediba species, constitute the "most complete early human ancestor skeleton ever discovered," according to University of Witwatersrand palaeontologist Lee Berger.
"We have discovered parts of a jaw and critical aspects of the body including what appear to be a complete femur (thigh bone), ribs, vertebrae and other important limb elements, some never before seen in such completeness in the human fossil record," said Berger, a lead professor in the finding.
The latest discovery, of remains thought to be around two million years old, was made in a three-foot (one-meter) wide rock that lay unnoticed for years in a laboratory until a technician noticed a tooth sticking out of the black stone last month.
The technician, Justin Mukanka, said: "I was lifting the block up, I just realized that there is a tooth."
It was then scanned to reveal significant parts of an A. sediba skeleton, dubbed Karabo, whose other parts were first discovered in 2009. Parts of three other skeletons were discovered in 2008 at the world-famous Cradle of Humankind site north of Johannesburg.
It is not certain whether the species, which had long arms, a small brain and a thumb possibly used for precision gripping, was a direct ancestor of humans' genus, Homo, or simply a close relative.
"It appears that we now have some of the most critical and complete remains of the skeleton," said Berger.
Other team members were equally enthusiastic.
"It's like putting together the pieces of a puzzle," university laboratory manager Bonita De Klerk told AFP.
The skeleton, dubbed Karabo and thought to be around two million years old, belonged to an individual aged between nine and 13 years when the upright-walking tree climber died.
Remains of four A. sediba skeletons have been discovered in South Africa's Malapa cave, 30 miles (50 kilometers) north of Johannesburg, since 2008. The individuals are believed to have fallen into a pit in the cave and died.
The sediba fossils are arguably the most complete remains of any hominids found and are possibly one of the most significant palaeoanthropological discoveries in recent times.
The Cradle of Humankind, now a World Heritage Site, is the oldest continuous paleontological dig in the world.
Read more at Discovery News
Jul 12, 2012
Solar System Ice: Source of Earth's Water
Scientists have long believed that comets and/or a type of very primitive meteorite called carbonaceous chondrites were the sources of early Earth's volatile elements -- which include hydrogen, nitrogen, and carbon -- and possibly of organic material, too. Understanding where these volatiles came from is crucial for determining the origins of both water and life on the planet. New research led by Carnegie's Conel Alexander focuses on frozen water that was distributed throughout much of the early Solar System, but probably not in the materials that aggregated to initially form Earth.
The evidence for this ice is now preserved in objects like comets and water-bearing carbonaceous chondrites. The team's findings contradict prevailing theories about the relationship between these two types of bodies and suggest that meteorites, and their parent asteroids, are the most-likely sources of Earth's water. Their work is published July 12 by Science Express.
Looking at the ratio of hydrogen to its heavy isotope deuterium in frozen water (H2O), scientists can get an idea of the relative distance from the Sun at which objects containing the water were formed. Objects that formed farther out should generally have higher deuterium content in their ice than objects that formed closer to the Sun, and objects that formed in the same regions should have similar hydrogen isotopic compositions. Therefore, by comparing the deuterium content of water in carbonaceous chondrites to the deuterium content of comets, it is possible to tell if they formed in similar reaches of the Solar System.
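As a back-of-the-envelope illustration of that comparison, the snippet below converts rough, textbook-level D/H ratios into a deuterium enrichment relative to Earth's ocean water. The sample values are generic illustrative figures, not the measurements reported in the paper.

```python
# Illustrative deuterium-to-hydrogen (D/H) comparison. The sample values are
# rough, generic figures for illustration only, not the paper's measurements.
VSMOW_DH = 1.5576e-4  # D/H of Vienna Standard Mean Ocean Water (Earth reference)

samples = {
    "Earth ocean water":                1.56e-4,
    "carbonaceous chondrite (assumed)": 1.4e-4,
    "long-period comet (assumed)":      3.0e-4,
}

def delta_d(dh, reference=VSMOW_DH):
    """Deuterium enrichment in per mil relative to the reference water."""
    return (dh / reference - 1.0) * 1000.0

for name, dh in samples.items():
    print(f"{name:34s} D/H = {dh:.2e}   delta-D = {delta_d(dh):+8.1f} per mil")
```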
It has been suggested that both comets and carbonaceous chondrites formed beyond the orbit of Jupiter, perhaps even at the edges of our Solar System, and then moved inward, eventually bringing their bounty of volatiles and organic material to Earth. If this were true, then the ice found in comets and the remnants of ice preserved in carbonaceous chondrites in the form of hydrated silicates, such as clays, would have similar isotopic compositions.
Alexander's team included Carnegie's Larry Nittler, Marilyn Fogel, and Roxane Bowden, as well as Kieren Howard from the Natural History Museum in London and Kingsborough Community College of the City University of New York, and Christopher Herd of the University of Alberta. They analyzed samples from 85 carbonaceous chondrites and were able to show that carbonaceous chondrites likely did not form in the same regions of the Solar System as comets, because their water has much lower deuterium content. If so, this result directly contradicts the two most prominent models for how the Solar System developed its current architecture.
The team suggests that carbonaceous chondrites formed instead in the asteroid belt that exists between the orbits of Mars and Jupiter. What's more, they propose that most of the volatile elements on Earth arrived from a variety of chondrites, not from comets.
Read more at Science Daily
Geneticists Evolve Fruit Flies With the Ability to Count
A team of geneticists has announced that they have successfully bred fruit flies with the capacity to count.
After repeatedly subjecting fruit flies to a stimulus designed to teach numerical skills, the evolutionary geneticists finally hit on a generation of flies that could count — it took 40 generations before the change occurred.
The findings, announced at the First Joint Congress on Evolutionary Biology in Canada, could lead to a better understanding of how we process numbers and the genetics behind dyscalculia — a learning disability that affects a person’s ability to count and do basic arithmetic.
“The obvious next step is to see how [the flies'] neuro-architecture has changed,” said geneticist Tristan Long, of Canada’s Wilfrid Laurier University, who admits far more research is needed to delve into what the results actually mean. Primarily, this will involve comparing the genetic make-up of an evolved fruit fly with that of a standard test fly to pinpoint the mutation.
The research team, made up of geneticists from Wilfrid Laurier University in Canada and the University of California, repeatedly subjected test flies to a 20-minute mathematics training session. The flies were exposed to two, three or four flashes of light, with two or four flashes coinciding with a shake of the container the flies were kept in.
Following a pause, the flies were again subjected to the flashing light. None prepared themselves for a repeat of the shake since they could not discern a difference between two, three or four flashes — until, that is, the 40th generation of descendants were put to the test.
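The logic of the conditioning trial can be made concrete with a toy simulation: the container is shaken after two or four flashes but never after three, so only a fly that can distinguish the counts can reliably brace itself. The sketch below is purely illustrative and is not the researchers' protocol code.

```python
# Toy simulation of the flash-count conditioning trial (illustrative only).
import random

def shake_follows(n_flashes):
    """The container is shaken after 2 or 4 flashes, never after 3."""
    return n_flashes in (2, 4)

def counting_fly(n_flashes):
    """A fly that can count braces exactly when a shake is due."""
    return n_flashes in (2, 4)

def non_counting_fly(n_flashes):
    """A fly that cannot tell 2, 3 and 4 apart can only guess."""
    return random.random() < 2 / 3

def accuracy(fly, trials=10_000):
    hits = sum(fly(n) == shake_follows(n)
               for n in (random.choice((2, 3, 4)) for _ in range(trials)))
    return hits / trials

random.seed(0)
print("counting fly:    ", accuracy(counting_fly))      # ~1.00
print("non-counting fly:", accuracy(non_counting_fly))  # ~0.56
```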
The findings back up the theory that numerical skills such as mental arithmetic are ancient constructs. Some of the more unusual natural fans of numeracy include salamanders, newborn chicks and mongoose lemurs, all of which have demonstrated basic skills in the lab.
Read more at Wired Science
'First' Americans Were Not Alone
The first known people to settle America can now be divided into at least two cultures, the Clovis and the recently discovered "Western Stemmed" tradition, according to new research.
Researchers excavating an Oregon cave found traces of, and unique tools made by, a second people who lived more than 13,200 years ago. The discovery, described in the latest issue of Science, strengthens the idea that people moved into the Americas in several waves of migration, not just one.
"From our results, it is likely that we have at least two independent migration events to the lower 48 states," co-author Eske Willerslev of the University of Copenhagen's Center for GeoGenetics told Discovery News. "Additionally, we previously showed by sequencing the first ancient human genome (that of a 4,000-year-old paleoeskimo) that there have been at least two independent migrations into the Arctic parts of North America, so as I see it, it's likely we have at least around four migration events."
Willerslev added that three of these groups came from Asia, but the origins of the Clovis culture remain a mystery. What's now clear is that the newly discovered Western Stemmed culture was present at least 13,200 years ago, during or even before the Clovis culture in western North America.
The Clovis culture is defined by its "points," used for hunting. Lead author Dennis Jenkins explained that Clovis points are generally large "and have one or more distinctive flute flakes removed from the base so that a channel runs from the base up the blade roughly half way or slightly more to the tip."
Western Stemmed points, on the other hand, "are narrower, sometimes thicker, and thinned by percussion and pressure flakes from the edges to the midline." They were used as dart and thrusting spear tips, while Clovis points are generally assumed to be lance points.
The researchers aren't certain why these technologies diverged, probably long ago, from a common weapon-making tradition in Siberia or Asia. Since the early Americans only used one or the other method, the technologies suggest that the Clovis culture may have arisen in the Southeastern United States and moved west, while the Western Stemmed tradition began, perhaps earlier, in the West and moved east.
Jenkins, an archaeologist at the University of Oregon's Museum of Natural and Cultural History, and his team analyzed Western Stemmed points from Paisley Caves, located about 220 miles southeast of Eugene, Oregon. The researchers also studied dried human feces, bones, sagebrush twigs and other artifacts excavated from well-stratified layers of silt in the ancient caves.
Based on the analysis, it's believed that the people who lived at the same time as the Clovis were "broad range foragers, taking large game whenever possible, but also well adapted to a desert mosaic plant community similar, but not identical to, that of the northern Great Basin today," Jenkins shared.
If the oldest fossilized feces found in the caves (dating to 14,300 years ago) belonged to the Western Stemmed occupations, then the individuals hunted now-extinct horses, camels and elephants, in addition to deer, elk, mountain sheep, bison, waterfowl, rabbits and other animals.
Read more at Discovery News
More Extinctions Expected in Amazon
As deforestation has accelerated in the Brazilian Amazon over the last 40 years, scientists have been watching for an equally rapid rate of extinctions among animals that are losing their habitats.
But so far, no species have disappeared from the region as a whole, and only a small percentage of those predicted to be at risk have gone extinct on a local basis. Instead, there has been a delay between forest loss and species loss, putting the Amazon in debt.
It owes extinctions, and nature will soon come to collect.
A new study has found that if deforestation and development continue at their current rate, up to 90 percent of predicted local extinctions will finally occur over the next 40 years.
The silver lining is that there may be a window of opportunity for protecting threatened species before they're gone. Conservationists' eyes are now turned to the Brazilian government, whose upcoming rulings on deforestation regulations and development issues will determine whether many species stay or go.
"While we haven't seen extinctions yet, we're basically just stacking them up for the future -- there is a whole list of extinctions waiting to happen," said Robert Ewers, an ecologist at Imperial College London. "Decisions in the Brazilian congress will have very different futures depending on which way they go."
Forty percent of the world's tropical rainforest and a large portion of its biodiversity lie in the Brazilian Amazon, which has also borne the brunt of deforestation in the last few decades.
And while experts have long expressed concerns about the species loss that is sure to follow habitat destruction, until now researchers have not made any hard-number estimates of how many species we can expect to lose in the region as a result of human activities.
To fill in that gap, Ewers and colleagues created a model that took into account rates of deforestation throughout the Brazilian Amazon from the 1970s through 2008. Then, referencing studies that relate habitat loss with losses of mammals, birds and amphibians, they projected future extinction rates through 2050. They also considered four scenarios with varying levels of protection and destruction.
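Projections of this kind typically rest on some form of the species-area relationship, S = c * A^z, which links the area of habitat left to the number of species it can ultimately support; the gap between that eventual number and the species still present today is the extinction debt. The sketch below illustrates that generic calculation with made-up numbers; it is not the study's actual model, which is considerably more detailed.

```python
# Generic species-area / extinction-debt sketch (illustrative only, not the
# study's model). S = c * A**z implies S_new / S_old = (A_new / A_old)**z.

def species_remaining(s_original, area_fraction_left, z=0.25):
    """Species expected to persist, eventually, after habitat shrinks."""
    return s_original * area_fraction_left ** z

def extinction_debt(s_original, area_fraction_left, s_observed_now, z=0.25):
    """Committed-but-unrealised extinctions: eventual losses minus losses so far."""
    eventual_losses = s_original - species_remaining(s_original, area_fraction_left, z)
    losses_so_far = s_original - s_observed_now
    return eventual_losses - losses_so_far

# Hypothetical 50 km x 50 km cell: 200 vertebrate species originally, 30% of
# the forest gone, only 2 species lost so far.
print(round(extinction_debt(200, 0.70, 198), 1), "species still 'owed'")  # ~15.1
```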
Compared to previous studies that have predicted quick and devastating species loss from deforestation, the researchers report today in the journal Science that extinctions have so far been happening much more slowly than expected. It can take generations for species to finally reach their tipping points in response to changing conditions.
But if Brazil proceeds with a "business as usual" scenario, between 80 and 90 percent of predicted extinctions will finally occur over the next 40 years, the study found. That adds up to an estimated 40 to 50 species of birds, mammals and amphibians that will likely go extinct by 2050, Ewers said, and another 100 expected to be lost after that.
Using a grid that considered the Amazon in 50-kilometer (31-mile) by 50-km (31-mile) squares, the researchers were also able to pinpoint regions where species loss is likely to be most extreme. Those results differed depending on which scenario they looked at. And in some cases, certain types of animals fared worse in certain regions.
In an extreme scenario with extra-high levels of deforestation, at least 10 species of amphibians, 15 species of mammals and 30 species of birds could disappear from about half of the Amazon.
Such regional detail should prove invaluable for planning conservation efforts.
"Because we can now say exactly which parts of the Amazon are likely to have higher debt," Ewers said, "it means that we can get in there and do conservation actions to save them in those locations."
Read more at Discovery News
Jul 11, 2012
Golden Crusade Hoard Found in Israel
Israeli archaeologists have found buried treasure: more than 100 gold dinar coins from the time of the Crusades, bearing the names and legends of local sultans, blessings and more -- and worth as much as $500,000.
The joint team from Tel Aviv University and Israel’s Nature and Parks Authority were working at Apollonia National Park, an ancient Roman settlement on the coast used by the Crusaders between 1241 and 1265, when they literally found a pot of gold.
“All in all, we found some 108 dinars and quarter dinars, which makes it one of the largest gold coin hoards discovered in a medieval site in the land of Israel,” Prof. Oren Tal, chairman of Tel Aviv University’s Department of Archaeology, told FoxNews.com.
The Christian order of the Knights Hospitaller had taken up residence in the castle in Apollonia; it was one of their most important fortresses in the area. The hoard of coins was buried on the eve of the site's downfall after a long siege by a large and well-prepared Muslim army.
Since its destruction in late April 1265, the site was never resettled. As the destruction of the well-fortified castle grew near, one of the Crusaders' leaders sought to hide his stash in a potsherd, possibly to retrieve it later on.
“It was in a small juglet, and it was partly broken. The idea was to put something broken in the ground and fill it with sand, in order to hide the gold coins within,” Tal told FoxNews.com. “If by chance somebody were to find the juglet, he won’t excavate it, he won’t look inside it to find the gold coins.”
“Once we started to sift it, the gold came out.”
The coins themselves -- found on June 21, 2012, by Mati Johananoff, a student in TAU’s Department of Archaeology -- date to the time of the Fatimid empire, which then dominated northern Africa and parts of the Middle East. Tal estimates their date to the 10th or 11th century, although they were still circulating in the 13th century.
“Some were minted some 250 to 300 years before they were used by the Hospitaller knights,” he explained. The coins are covered in icons and inscriptions: the names and legends of local sultans, Tal said, as well as blessings.
Some also bear a date, and even a mint mark, a code that indicates where the coin was struck, whether in Alexandria, Tripoli, or another ancient mint.
Read more at Discovery News
Pluto Now Has Five (Yes, Five) Moons
Pluto's neighborhood is getting crowded.
According to new observations by the Hubble Space Telescope, the dwarf planet isn't only accompanied by the moons Charon, Nix, Hydra and the not-so-glamorously-named "P4," it also has a fifth satellite, nicknamed, unsurprisingly, "P5."
According to Sky & Telescope, P5 was announced by the IAU's Central Bureau for Astronomical Telegrams last night and it's a dinky moon, potentially smaller than P4, which was discovered a year ago in July 2011.
P4 is thought to have a diameter of between 8 and 21 miles (13 to 34 kilometers), whereas Pluto's largest moon, Charon, measures 648 miles (1,043 kilometers) across. Nix and Hydra have diameters of 20 to 70 miles (32 to 113 kilometers).
P5 orbits Pluto at a distance of around 26,000 miles (42,000 kilometers) in the same plane as Pluto's other moons, suggesting that Pluto suffered a major impact at some point in the solar system's history, spewing debris that accumulated in orbit and created the system of satellites we see today.
The continuing discovery of small moons around Pluto is causing some concern for scientists with NASA's New Horizons mission, which will make a flyby of the little world in 2015.
As New Horizons lead scientist Alan Stern cautioned last year, it's not so much the small moons themselves that are the worry; the growing concern is the potential clouds of dust and other small debris that the increasingly populated satellite system may generate.
"Even more worrisome than the possibility of many small moons themselves is the concern that these moons will generate debris rings, or even 3-D debris clouds around Pluto that could pose an impact hazard to New Horizons as it flies through the system at high speed," Stern said in a November 2011 mission update.
Read more at Discovery News
Banana's Genes Unpeeled
Bananas are a staple food around the world. But the humble yellow fruit faces pests and diseases that threaten to wipe it out across the globe, from convenience stores in Iowa to rural markets in Uganda.
In an effort to save bananas from imminent demise, scientists have now sequenced the banana genome for the first time, a challenging feat and a major advance in the field.
The accomplishment opens the way for developing better banana crops that are naturally resilient against parasites and other stresses.
“The banana is very important, especially for tropical and subtropical countries,” said Angélique D’Hont, a geneticist at CIRAD, an agricultural research center in Montpellier, France. “Because the future of the banana is in danger, the sequence will help to produce resistant bananas and avoid the utilization of pesticides. It will be much easier now to identify genes which are important.”
Bananas were first domesticated 7,000 years ago in Southeast Asia. As people migrated, and crossed their own plants with other species along the way, bananas gradually became seedless, delicious and totally sterile.
Instead of multiplying through sexual reproduction, which mixes up the gene pool, bananas are cultivated through vegetative propagation, which involves simply cutting off a section of one plant to grow on its own. It’s the same process used to grow several other major African crops, including cassava, sweet potatoes and yams.
As a result, every single Cavendish banana -- the variety that makes up about half of all bananas eaten around the world -- is an exact clone of every other Cavendish banana.
The shape, color and flavor of these popular fruits are predictable and consistent. But parasites and diseases have adapted to the Cavendish, D’Hont said, making it necessary to use large amounts of pesticides to keep banana crops from collapsing -- up to 50 applications a year in some places.
To decipher the banana’s genetic strengths and weaknesses, D’Hont and a large group of colleagues spent two years sequencing a banana variety called Musa acuminata, a simpler relative of the Cavendish.
Once they put together the sequence, the researchers report today in the journal Nature, they discovered several genes that may be involved in pest resistance.
Among other findings, the researchers identified genes involved in ripening after the application of ethylene, which is often added to green bananas during transport. The sequence also revealed that the banana duplicated its entire genome (making an extra copy of every single gene) three times, including once about 100 million years ago and once about 60 million years ago.
Putting together the sequence took so long because, compared to many other crops, the banana genome is extremely complex. Even though all bananas are clones of each other, the original gene forms that came from mother and father plants remain different from each other -- unlike in seeded crops that tend to become inbred, said Simon Chan, a plant biologist at the University of California, Davis.
What’s more, bananas have three copies of each chromosome, just like other seedless plants. And for many genes, all three copies are different.
The variety of banana used in the new study had just two of each chromosome, making it simpler than the Cavendish. But by finally deciphering its sequence, scientists will be able to move on to our beloved breakfast fruit and compare the differences.
Knowing the genetic sequence of bananas is a major step toward isolating key genes that will eventually lead to a better banana, Chan said. Future varieties may be able to resist both droughts and diseases, while still tasting good and traveling well.
Read more at Discovery News
Sun Turns NYC into 'Manhattanhenge'
New Yorkers will be treated to a special sight Thursday evening (July 12): It's one of two days a year when the setting sun aligns perfectly with Manhattan's street grid. As the sun sets on the Big Apple, it will light up both the north and south sides of every cross street.
The event has been dubbed "Manhattanhenge" for the way it turns New York City into a Stonehenge-like sun dial.
The sun sets perfectly in line with the Manhattan street grid twice a year, explains astrophysicist Neil deGrasse Tyson on the Hayden Planetarium website.
Earlier this year clouds interfered with the alignment when the sun set on May 29 at 8:17 p.m. EDT. Hopefully that won't be the case tomorrow; the best viewing time will be:
July 12 at 8:25 p.m. EDT
Tonight the sun isn't perfectly aligned with the grid, but it will still put on a show: looking down the cross streets, you'll see a full sun sitting on the horizon rather than the half orb. The best time to catch the full sun setting on New York City tonight is:
July 11 at 8:24 p.m. EDT
The best way to watch Manhattanhenge, Tyson says, is to get as far east as possible on one of the city's major cross streets, such as 14th, 23rd, 34th, 42nd or 57th streets, and look west toward New Jersey. (The streets immediately adjacent to these wide cross streets will work fine, too, but the view won't be quite as stunning.) Standing on 34th or 42nd street provides a particularly nice view, as the views include the Empire State Building and the Chrysler Building. It's a good idea to get to your spot 30 minutes early, so you can beat out the other sun worshippers.
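For readers curious about why the dates fall in late May and mid-July, the geometry can be roughed out with a few lines of trigonometry: at sunset the sun's azimuth Az (measured from true north) satisfies sin(declination) = cos(latitude) x cos(Az), and Manhattan's grid is rotated roughly 29 degrees from true north. The Python sketch below uses those approximate figures and ignores atmospheric refraction, so it is a back-of-the-envelope check rather than a precise ephemeris.

# Rough geometry of Manhattanhenge. The grid rotation (~29 degrees east of
# true north) and the latitude are approximations, and refraction is ignored.
import math

LATITUDE_DEG = 40.78      # approximate latitude of midtown Manhattan
GRID_ROTATION_DEG = 29.0  # approximate rotation of the street grid from true north

# Looking west down a cross street points toward azimuth 270 + 29 = 299 degrees.
sunset_azimuth_deg = 270.0 + GRID_ROTATION_DEG

# At sunset (altitude ~0), sin(declination) = cos(latitude) * cos(azimuth).
sin_dec = math.cos(math.radians(LATITUDE_DEG)) * math.cos(math.radians(sunset_azimuth_deg))
declination_deg = math.degrees(math.asin(sin_dec))

print(f"Solar declination needed for alignment: ~{declination_deg:.1f} degrees")
# Roughly +21.5 degrees -- a value the sun reaches a few weeks before and a few
# weeks after the June solstice, hence the late-May and mid-July dates.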
Read more at Discovery News
Jul 10, 2012
Rare Glimpse Into the Origin of Species
A new species of monkey flower, created by the union of two foreign plant species, has been discovered on the bank of a stream in Scotland. Genetic changes in this attractive yellow-flowered hybrid have allowed it to overcome infertility and made it a rare example of a brand new species that has originated in the wild in the last 150 years. Thousands of wild species and some crops are thought to have originated in this way, yet only a handful of examples exist where this type of species formation has occurred in recent history.
The ancestors of the new plant were brought from the Americas as botanical curiosities in the 1800s and were quickly adopted by Victorian gardeners. Soon after their arrival, they escaped the confines of British gardens and can now be found growing in the wild, along the banks of rivers and streams. Reproduction between these species produces hybrids that are now widespread in Britain. Yet, genetic differences between the two parents mean that the hybrids are infertile and cannot go beyond the first generation.
Dr Mario Vallejo-Marin, a plant evolutionary biologist at the University of Stirling, has documented the first examples of hybrid monkey flowers that have overcome these genetic barriers and show fully restored fertility. This fertile hybrid derived from 'immigrant' parents represents a new species, native to Scotland. Dr Vallejo-Marin has chosen to name this species Mimulus peregrinus, which translates as 'the wanderer'. The species is described in the open access journal PhytoKeys.
'The two American monkey flowers are unable to produce fertile hybrids due to differences in the amount of DNA present in each species, the equivalent of getting a sterile mule from crossing a horse and a donkey', said Dr. Vallejo-Marin. 'However, in rare cases, duplication of the entire hybrid DNA, known as polyploidization, can balance the amount of DNA and restore fertility. Our studies suggest that this is what has happened here.'
Read more at Science Daily
Hubble Unmasks Ghost Galaxies
Astronomers have used the NASA/ESA Hubble Space Telescope to study some of the smallest and faintest galaxies in our cosmic neighbourhood. These galaxies are fossils of the early Universe: they have barely changed for 13 billion years. The discovery could help explain the so-called "missing satellite" problem, where only a handful of satellite galaxies have been found around the Milky Way, against the thousands that are predicted by theories.
Astronomers have puzzled over why some extremely faint dwarf galaxies spotted in our Milky Way galaxy's backyard contain so few stars. The galaxies are thought to be some of the tiniest, oldest, and most pristine galaxies in the Universe. They have been discovered over the past decade by astronomers using automated computer techniques to search through the images of the Sloan Digital Sky Survey. But an international team of astronomers needed the NASA/ESA Hubble Space Telescope to help solve the mystery of why these galaxies are starved of stars, and why so few of them have been found.
Hubble views of three of these small galaxies, the Hercules, Leo IV and Ursa Major dwarf galaxies, reveal that they all started forming stars more than 13 billion years ago -- and then abruptly stopped -- all in the first billion years after the Universe was born in the Big Bang. In fact, the extreme age of their stars is similar to Messier 92, the oldest known globular cluster in the Milky Way.
"These galaxies are all ancient and they're all the same age, so you know something came down like a guillotine and turned off the star formation at the same time in these galaxies," said Tom Brown of the Space Telescope Science Institute in Baltimore, USA, the study's leader. "The most likely explanation is a process called reionisation."
The relic galaxies are evidence for a transitional phase in the early Universe that shut down star-making factories in tiny galaxies. This phase seems to coincide with the time when the first stars burned off a fog of cold hydrogen, a process called reionisation. In this period, which began in the first billion years after the Big Bang, radiation from the first stars knocked electrons off primeval hydrogen atoms, ionising the Universe's cool hydrogen gas.
The same radiation that sparked universal reionisation also appears to have squelched star-making activities in dwarf galaxies, such as those in Brown's study. The small irregular galaxies were born about 100 million years before reionisation began and had just started to churn out stars at that time. Roughly 2000 light-years wide, these galaxies are the lightweight cousins of the more luminous and higher-mass star-making dwarf galaxies near our Milky Way. Unlike their higher-mass relatives, the puny galaxies were not massive enough to shield themselves from the harsh ultraviolet light. What little gas they had was stripped away as the flood of ultraviolet light rushed through them. Their gas supply depleted, the galaxies could not make new stars.
The discovery could help explain the so-called "missing satellite problem," where only a few dozen dwarf galaxies have been observed around the Milky Way while the computer simulations predict that thousands should exist. One possible explanation for the low number discovered to date is that there has been very little, or even no star formation in the smallest of these dwarf galaxies, leaving them virtually invisible.
The Sloan survey recently uncovered more than a dozen of these galaxies in our cosmic neighbourhood. These have very few stars -- only a few hundred or thousand -- but a great deal of dark matter, the underlying scaffolding upon which galaxies are built. Normal dwarf galaxies near the Milky Way contain 10 times more dark matter than the ordinary matter that makes up gas and stars, while in these so-called ultra-faint dwarf galaxies, dark matter outweighs ordinary matter by at least a factor of 100. Astronomers think the rest of the sky should contain dozens more of these ultra-faint dwarf galaxies with few stars, and the evidence for squelched star formation in the smallest of these dwarfs suggests that there may be still thousands more with essentially no stars at all.
Read more at Science Daily
Why Sunburn Hurts
It's no secret that too much time in the sun causes pain, redness and a strong desire for aloe vera lotion. Now, researchers know why.
The ultraviolet B (UVB) wavelength of light damages skin cells' RNA molecules, new research finds. RNA, or ribonucleic acid, is part of the genetic machinery of the cell, encoding information to turn genetic instructions in DNA into proteins.
The RNA damaged by UVB light is of a sort that doesn't code for proteins, researchers reported online July 8 in the journal Nature Medicine. But when sun-damaged cells release this damaged non-coding micro-RNA, it provokes neighboring cells to flood the skin with inflammatory molecules, creating a chain reaction that ends with sunburn. In the long run, cumulative damage can raise the risk of skin cancer. In the short run, this process is how the skin heals from the burn.
"The inflammatory response is important to start the process of healing after cell death," study leader Richard Gallo of the University of California, San Diego School of Medicine said in a statement.
Though researchers have long known about some of the molecular effects of too much time tanning, this is the first time they've identified step one in the process of damage. Now that the cause has been identified, the researchers hope to find some way of stopping the process — for sun-sensitive patients, if not for ordinary sun-bathers.
"For example, diseases like psoriasis are treated by UV light, but a big side effect is that this treatment increases the risk of skin cancer," Gallo said, referring to a skin condition that causes flaking and redness. "Our discovery suggests a way to get the beneficial effects of UV therapy without actually exposing our patients to the harmful UV light. Also, some people have excess sensitivity to UV light, patients with lupus, for example. We are exploring if we can help them by blocking the pathway we discovered."
The researchers made the discovery by exposing human skin cells to UVB light and following up with experiments in mice. Specific genes in mice can determine how likely they are to burn in the sun, Gallo said.
Read more at Discovery News
Ancient 'New York City' of Canada Discovered
Today New York City is the Big Apple of the Northeast, but new research reveals that 500 years ago, at a time when Europeans were just beginning to visit the New World, a settlement on the north shore of Lake Ontario, in Canada, was the biggest, most complex, cosmopolitan place in the region.
Occupied between roughly A.D. 1500 and 1530, the so-called Mantle site was settled by the Wendat (Huron). Excavations at the site, between 2003 and 2005, have uncovered its 98 longhouses, a palisade of three rows (a fence made of heavy wooden stakes and used for defense) and about 200,000 artifacts. Dozens of examples of art have been unearthed showing haunting human faces and depictions of animals, with analysis ongoing.
Now, a scholarly book detailing the discoveries is being prepared and a documentary about the site called "Curse of the Axe" aired this week on the History Channel in Canada.
"This is an Indiana Jones moment, this is huge," said Ron Williamson, an archaeologist who led dig efforts at the site, in the documentary shown in a premiere at the Royal Ontario Museum. "It just seems to be a game-changer in every way."
Williamson is the founder of Archaeological Services Inc., a Canadian cultural resource management firm that excavated the site.
"It's the largest, most complex, cosmopolitan village of its time," said Williamson, also of the University of Toronto, in an interview with LiveScience. "All of the archaeologists, basically, when they see Mantle, they're just utterly stunned."
The Mantle people
Scientists estimate between 1,500 and 1,800 individuals inhabited the site, whose fields encompassed a Manhattan-size area. To clothe themselves they would have needed 7,000 deer hides annually, something that would have required hunting about 26 miles (40 km) in every direction from the site, Williamson said.
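To get a feel for the scale of those figures (both taken from the article), a 40-kilometre hunting radius encloses roughly 5,000 square kilometres, so 7,000 hides a year works out to a bit over one deer harvested per square kilometre per year. The calculation below is back-of-the-envelope arithmetic, not a figure from the researchers.

# Back-of-the-envelope arithmetic on the figures quoted above (about 7,000
# hides a year, hunted out to roughly 40 km from the site). The implied
# harvest density is an illustration, not a number from the researchers.
import math

HIDES_PER_YEAR = 7_000
HUNTING_RADIUS_KM = 40.0

hunting_area_km2 = math.pi * HUNTING_RADIUS_KM ** 2        # ~5,000 km^2
deer_per_km2_per_year = HIDES_PER_YEAR / hunting_area_km2  # ~1.4 deer/km^2/yr

print(f"Hunting territory: ~{hunting_area_km2:,.0f} square kilometres")
print(f"Implied harvest:   ~{deer_per_km2_per_year:.1f} deer per square kilometre per year")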
"When you think about a site like Mantle, 2,000 people, massive stockade around a community, a better analogy is that of a medieval town," Jennifer Birch, a post-doctoral researcher at the University of Georgia, said in the documentary. "While the cultures are very different, the societal form really isn't."
Despite its massive size, the site remained hidden for hundreds of years, likely escaping detection because its longhouses were primarily made of wood, which doesn't preserve well.
Not all of the 98 longhouses were in use at the same time, with more recent ones having been built on top of the older longhouses, as buildings are today. At one point 55 longhouses were in use at once.
Charred wood found in one of the post moulds suggested that when one of the longhouses burnt down, the rest of the settlement was saved. Williamson said that this is remarkable considering the longhouses were made of wood, which is very flammable, and stood close together. "Somehow their 'fire department' did that."
Enemies become friends
Another curious discovery at Mantle is its apparently cosmopolitan nature. The art and pottery at the site show influences from all five nations of the Iroquois to the south in New York State, suggesting extensive contacts and trade.
For instance, among Mantle's discoveries are the earliest European goods ever found in the Great Lakes region of North America, predating the arrival of the first known European explorers by a century. They consist of two European copper beads and a wrought iron object, believed to be part of an ax, which was carefully buried near the center of the settlement.
A maker's mark on the wrought iron object was traced to northern Spain, and the fact that it was made of wrought iron suggests a 16th-century origin. In fact, in the early 16th century Basque fishermen and whalers sailed to the waters off Newfoundland and Labrador. It's believed that the object would have been acquired by the aboriginal people there and exchanged up the St. Lawrence River until eventually reaching Mantle.
The people of Mantle, it seems, had trading relations with the Iroquois of the St. Lawrence.
"Historically, we know that the Huron and the Iroquois were not only at odds, they were mortal enemies," Williamson said in the documentary.
In the period before Mantle there is evidence of widespread warfare throughout southern Ontario and New York as well as parts of Michigan and Quebec, a period known as "the dark times." Human remains from that period show evidence of scalping and torture.
Mantle, with its large size and palisade defense, may have discouraged this type of warfare, making an attack risky. Other settlements in southwest Ontario were getting larger and sites in New York were clustering together, suggesting that they too were becoming harder to attack.
Birch compares the situation at Mantle and other sites to what happened after World War II, with the formation of the United Nations and NATO, institutions that discouraged warfare, allowing for trade and cultural interaction.
Williamson noted that, sadly, with the arrival of Europeans, this peace did not last, with warfare intensifying in the 17th century. "When Europeans arrive the whole thing is re-fired over economic reasons related to the fur trade," he said in the interview.
Read more at Discovery News
Labels:
Archeology,
Geology,
History,
Human,
Science
Jul 9, 2012
'Frankenstein' Mummies Are a Mix of Corpses
Mummies found off the coast of Scotland are Frankenstein-like composites of several corpses, researchers say.
This mixing of remains was perhaps designed to combine different ancestries into a single lineage, archaeologists speculated.
The bodies were first unearthed in 2001 during excavations beneath the foundations of an approximately 3,000-year-old house on South Uist, an island in the Outer Hebrides off the west coast of Scotland. The building was one of three roundhouses at Cladh Hallan, a prehistoric village named after a nearby modern graveyard. The site was once populated in the Bronze Age from 2200 B.C. to 800 B.C. — scientists were digging here to learn more about this era in Britain, where little was known until recently.
The researchers had found what were apparently the remains of a teenage girl and a 3-year-old child at the site. However, two other bodies looked especially strange — those of a man and a woman found in tight fetal positions as if they had been tightly wrapped up, reminiscent of "mummy bundles" seen in South America and other parts of the world. These bodies were apparently mummified on purpose, the first evidence of deliberate mummification in the ancient Old World outside of Egypt.
Evidence for mummy mix-ups
Evidence of this mummification lies in how all the bones in both these bodies were still "articulated" or in the same positions as they were in life, revealing that sinew and perhaps skin were still holding them together when they were buried. Carbon dating these remains and their surroundings revealed these bodies were buried up to 600 years after death — to keep bodies from rotting to pieces after such a long time, they must have been intentionally preserved, unlike the bodies of animals also buried at the site, which had been left to decay.
Mineral alterations of the outer layer of the bones suggest they were entombed in acidic surroundings, such as those found in nearby peat bogs. Exposures to such bogs for a year or so would have mummified them, stopping microbes from decomposing the bodies by essentially tanning them in much the same way that animal skin is turned into leather.
Ancient writings suggest that embalming was practiced in prehistoric Europe, not just in Egypt. For instance, ancient Greek philosopher Poseidonius, writing in about 100 B.C., "visited Gaul and recorded that the Celts there embalmed the heads of their victims in cedar oil and kept them in chests," said researcher Mike Parker-Pearson, an archaeologist at the University of Sheffield in England.
Bizarrely, the man's remains were composed of bones from three different people, possessing the torso and limbs of one man, the skull and neck of another, and the lower jaw from a third, possibly a woman.
The researchers made this discovery of his Frankenstein-like nature by analyzing his skeleton — for instance, evidence of arthritis was seen on the vertebrae of the neck, but not on the rest of the spine, revealing these parts came from different bodies. Also, the lower jaw had all its teeth, whereas those of the upper jaw were entirely missing, and the condition of the lower jaw's teeth revealed they once interacted with a full set of teeth in his upper jaw, showing they originally belonged to another man.
To see if the woman's skeleton was also a composite, the researchers analyzed ancient DNA from the skull, lower jaw, right upper arm and right thighbone. This revealed that the lower jaw, arm bone and thighbone all came from different people. Data from the skull was inconclusive. (Oddly, the upper two teeth next to her front teeth had been removed and placed in each hand.)
The first composite was apparently assembled between 1260 B.C. and 1440 B.C., while the second composite was assembled between 1130 B.C. and 1310 B.C. "There is overlap, but the statistical probability is that they were assembled at different times," Parker-Pearson said.
Although one Frankenstein-like mix-up of body parts might be an accident, "the second instance makes this unlikely," Parker-Pearson said.
Mummification apparently took off in Britain about 1500 B.C. "at a time when land ownership — communal rather than private, most likely — was being marked by the construction of large-scale field systems," Parker-Pearson told LiveScience. "Rights to land would have depended on ancestral claims, so perhaps having the ancestors around 'in the flesh' was their prehistoric equivalent of a legal document."
"Merging different body parts of ancestors into a single person could represent the merging of different families and their lines of descent," Parker-Pearson said. "Perhaps this was a prelude to building the row of houses in which numerous different families are likely to have lived."
Mummies? Britain?
When the bones were first discovered, Parker-Pearson admitted, "some archaeologists were rightly skeptical," as mummification in the British Bronze Age was pretty much unheard of.
Even Parker-Pearson would've been skeptical of the finding, had he not studied the bones. "But since then, we have applied a battery of scientific methods, of which the ancient DNA analysis is the latest," he said. "Together with archaeological evidence from excavation, these analytical results make a fairly unassailable case for mummification and recombination."
"I don't think it implies any links with ancient Egypt or other distant civilizations at all," Parker-Pearson said about these findings. "Mummification is simple enough to do in your own kitchen, and has been surprisingly widespread among small-scale, traditional societies throughout the world in recent centuries."
Read more at Discovery News
Labels:
Archeology,
Biology,
History,
Human,
Science
Animals Navigate With Magnetic Cells
Salmon, turtles and many birds migrate up to thousands of miles at a time, presumably by sensing the Earth's magnetic field. Now, scientists have identified cells in the nose of trout that respond to magnetism, offering a biological explanation for how animals orient themselves and find their way, even when it's dark or foggy.
The discovery -- and particularly the new method that enabled it -- opens up avenues for all sorts of futuristic applications, including miniaturized GPS systems or gene therapies that would restore sight, hearing or smell to people who have lost those senses.
The ability to detect magnetic-sensitive cells in the lab could also help answer questions about whether people are at risk from magnetic fields produced by power lines and other equipment.
"The key point is really the method we established. Some people call it a game-changer," said Michael Winklhofer, a biogeophysicist at the University of Munich. "Previously, we didn't have a tool to collect these cells. Now, we can do some serious cell biology on them."
"There's no doubt that many animals have a magnetic sense, particularly migratory birds and fish," he added. "But the problem is, we still don't know how that works."
Winklhofer and colleagues chose to study the olfactory tissues of trout based on decade-old research, which showed that magnetic fields affected the electrical activity of nerves that carried information from the fishes' noses. Instead of grinding up the tissues for analysis, as older methods tended to do, the researchers gently isolated whole cells from the tissues and put them into petri dishes.
When the team applied rotating magnetic fields to those dishes, about one out of every 10,000 cells spun with the same frequency as the fields, the researchers report today in the Proceedings of the National Academy of Sciences. Illuminated by the light of the microscope, structures inside of these cells also shone brilliantly, making them easy to detect.
A closer look revealed crystals attached to the inside of the cell membranes, containing what appeared to be magnetite, an iron-rich magnetic material. Scientists don't yet know how these structures work, but Winklhofer suspects that they excite membranes inside neurons and trigger nerve impulses that send direction-related information to the brain.
Based on the abundance of magnetic cells in the samples, Winklhofer estimated that each fish had a total of between 10 and 100 of these cells in its nose. As expected, there were no magnetic cells in the animals' muscle tissue. But in work yet to be published, his group detected even more magnetic cells in the trout's lateral line, a sensory organ in fish that detects vibrations.
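As a rough illustration of how that range follows from the detection rate, scaling the roughly one-in-10,000 figure by a plausible total number of olfactory cells gives the same order of magnitude. The total cell counts below are my own assumption for the sketch, not a number from the study.

detection_rate = 1 / 10_000              # roughly one magnetic cell per 10,000 cells
for total_olfactory_cells in (1e5, 1e6):   # assumed plausible totals per trout nose
    estimate = detection_rate * total_olfactory_cells
    print(f"~{int(total_olfactory_cells):,} olfactory cells -> ~{estimate:.0f} magnetic cells")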
Because magnetic fields penetrate the entire body, magnetic-sensing cells could be sporadically spread throughout other body parts, too, which would make sense. If the cells were too close together, they would begin to sense each other's magnetic fields instead of the larger fields around the planet. Like needles in a haystack, though, magnetic cells can be difficult to find, which is what makes the new method so valuable.
The new technique also makes it possible to look for magnetic cells in animals that don't necessarily use a sense of magnetism but may have retained the cells even as evolution made them obsolete. In a 2008 study, for example, German researchers analyzed Google Earth images and saw that cows and deer tended to stand facing magnetic north or south.
Some recent research suggests that even people might harbor magnetic cells that linger from our ancestral hunter-gatherer days. If so, magnetic fields from power lines could be causing stress inside of our cells, leading to unknown health effects.
Researchers also hope to identify the genes and proteins responsible for producing magnetic-sensing cells, which would go a long way toward explaining how migrating animals accomplish such amazing feats. These discoveries would also pave the way for applications, such as tiny GPS systems or even novel strategies for healing blindness and other sensory problems in people.
Read more at Discovery News
Space Worms Live Long and Prosper
A microscopic worm used in experiments on the space station not only seems to enjoy living in a microgravity environment, it also appears to get a lifespan boost.
This intriguing discovery was made by University of Nottingham scientists who have flown experiments carrying thousands of tiny Caenorhabditis elegans (C. elegans) to low-Earth orbit over the years. But why are these little worms so special?
C. elegans may be microscopic, but it was the first multicellular organism to have its genetic structure completely mapped. These little guys possess 20,000 genes that perform functions similar to those of equivalent genes in humans. Of particular interest are the 2,000 genes that have a role in promoting muscle function. As any long-duration astronaut can attest, one of the biggest challenges facing mankind's future in space is muscle atrophy.
Understanding how C. elegans function in space is therefore of huge scientific value not only for tiny worm enthusiasts, but for the manned exploration -- and colonization -- of space.
In 2011, Discovery News reported on some results from the C. elegans experiments. Nathaniel Szewczyk, of the Division of Clinical Physiology at the University of Nottingham, discussed the worms' microgravity reproduction habits and, as it turns out, C. elegans prospered just fine. Over three months, Szewczyk's team observed the space worms flourish across twelve generations.
And now, in results published on July 5 in the online journal Scientific Reports, it appears that C. elegans not only adapted to microgravity conditions but also received a lifespan boost compared with their terrestrial counterparts.
"We identified seven genes, which were down-regulated in space and whose inactivation extended lifespan under laboratory conditions," Szewczyk said in a press release. This basically means that seven C. elegans genes usually associated with muscle aging were suppressed when the worms were exposed to a microgravity environment. Also, it appears spaceflight suppresses the accumulation of toxic proteins that normally gets stored inside aging muscle.
But the biological mechanisms behind this anti-aging effect are a bit of a mystery.
"It would appear that these genes are involved in how the worm senses the environment and signals changes in metabolism in order to adapt to the environment," added Szewczyk. "For example, one of the genes we have identified encodes insulin which, because of diabetes, is well known to be associated with metabolic control. In worms, flies, and mice insulin is also associated with modulation of lifespan."
Read more at Discovery News
Earth's Biggest Unanswered Questions
So, what are today’s biggest unanswered questions in Earth science?
Kathryn Hansen, associate editor of EARTH magazine, recently posed the question to a variety of experts ranging from paleontologists and geologists to atmospheric and planetary scientists. From Hansen's compilation, here are Three Big Unanswered Questions that caught my eye:
BIG UNANSWERED QUESTION #1:
Where are all the big magma chambers that could produce super-eruptions?
Geologists can tell us where supervolcanoes have exploded in the past, but so far none of those old scars seem to have much liquid magma brewing beneath them. Why haven’t we found any big magma chambers yet?
John Eichelberger, a volcanologist with the U.S. Geological Survey, offers several possibilities. Maybe the old supervolcanoes already spent themselves and the magma chambers are empty. Maybe we haven't looked in the right place or our techniques aren’t yet good enough. Or, as geophysicists reported recently, maybe supervolcanoes develop very fast and erupt quickly:
True supervolcano eruptions, ones that spew lava and ash on the order of 1,000 cubic kilometers or more, are incredibly rare; on average, only about one super-eruption occurs every 100,000 years. So we humans really aren’t at much risk. But just imagine…what if? Simply put, the consequences would be apocalyptic.
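To put that frequency in perspective, here is a simple back-of-the-envelope calculation — my own illustration, assuming eruptions arrive independently at the quoted average rate — showing how small the chance of witnessing one is over a human lifetime.

import math

rate_per_year = 1 / 100_000               # about one super-eruption per 100,000 years
for window_years in (80, 1_000, 100_000):
    p = 1 - math.exp(-rate_per_year * window_years)   # Poisson: P(at least one event)
    print(f"{window_years:>7} years: P(>=1 super-eruption) ~ {p:.2%}")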
As Eichelberger notes: “Danger, however unlikely, is fascinating.”
BIG UNANSWERED QUESTION #2:
How stable is the West Antarctic Ice Sheet and what does it mean for global sea level?
Supervolcanoes may not be much risk for those of us living on Earth today, but the devastating consequences of rising sea level are already very real:
With so much of the world's population living near the coasts, scientists would really like to be able to make predictions precise enough for people to plan how to handle the loss of land and threats to coastal communities that are expected by the end of this century.
But so far, they can’t. Bummer.
BIG UNANSWERED QUESTION #3:
So we know a lot about dinosaur fossils, but what about dinosaur biology?
In the case of dinosaurs, it’s a good thing scientists don’t have all the answers.
“Answers to all of the questions about dinosaurs might well take away the very mystery that surrounds them, and it’s the mystery that charges children's imaginations,” notes Jack Horner, a paleontologist at Montana State University.
For all that paleontologists know about the size and shape of dinosaurs, they still know surprisingly little about their biology. In Horner’s opinion, figuring out how gigantic sauropods could be so wildly successful is the key to understanding dinosaurs as living animals.
Read more at Discovery News
Jul 8, 2012
Patients Trust Doctors but Consult the Internet
Patients look up their illnesses online to become better informed and prepared to play an active role in their care -- not because they mistrust their doctors, a new University of California, Davis, study suggests.
The study surveyed more than 500 people who were members of online support groups and had scheduled appointments with a physician.
"We found that mistrust was not a significant predictor of people going online for health information prior to their visit," said Xinyi Hu, who co-authored the study as part of her master's thesis in communication. "This was somewhat surprising and suggests that doctors need not be defensive when their patients come to their appointments armed with information taken from the Internet."
With faculty co-authors at UC Davis and the University of Southern California, Hu examined how the study subjects made use of support groups, other Internet resources, and offline sources of information, including traditional media and social relations, before their medical appointments.
The study found no evidence that the users of online health information had less trust in their doctors than patients who did not seek information through the Internet.
"The Internet has become a mainstream source of information about health and other issues," Hu noted. "Many people go online to get information when they anticipate a challenge in their life. It makes sense that they would do the same when dealing with a health issue."
Although physician mistrust did not predict reliance on the Internet prior to patients' medical visits, several other factors did. For example, people were more likely to seek information online when their health situation was distressful or when they felt they had some level of personal control over their illness. Online information-seeking was also higher among patients who believed that their medical condition was likely to persist.
The study also found that Internet health information did not replace more traditional sources of information. Instead, patients used the Internet to supplement offline sources, such as friends, health news reports and reference books.
"With the growth of online support groups, physicians need to be aware that many of their patients will be joining and interacting with these groups. These patients tend to be very active health-information seekers, making use of both traditional and new media," the study said.
Almost 70 percent of the study subjects reported they were planning to ask their doctor questions about the information they found, and about 40 percent said they had printed out information to take with them to discuss with their doctors. More than 50 percent of subjects said they intended to make at least one request of their doctor on the basis of Internet information.
"As a practicing physician, these results provide some degree of reassurance," said co-author Richard L. Kravitz, a UC Davis Health System professor of internal medicine and study co-author. "The results mean that patients are not turning to the Internet out of mistrust; more likely, Internet users are curious information seekers who are just trying to learn as much as they can before their visit."
Online support groups provide virtual meeting places for sharing information and social support. In February 2011, there were more than 12,000 groups listed in the support category of the Yahoo! Groups Health and Wellness directory. Even so, other studies suggest that only 9 percent of Americans and 37 percent of patients with chronic disease have participated in online support groups. In this study, the majority of subjects assessed their own health as fair or poor.
Read more at Science Daily
Labels:
Biology,
Human,
Medicin,
Science,
Technology
What the Discovery of the Higgs Means for Scientists
Stephen Wolfram’s diverse areas of research include mathematics, physics, and computing. Though his early career was focused on particle physics, he went on to create the widely used computer algebra system Mathematica and, later, the search engine Wolfram Alpha. He is author of A New Kind of Science — a study of simple computational systems such as cellular automata — and current CEO of Wolfram Research.
The announcement early yesterday morning of experimental evidence for what’s presumably the Higgs particle brings a certain closure to a story I’ve watched (and sometimes been a part of) for nearly 40 years. In some ways I felt like a teenager again. Hearing about a new particle being discovered. And asking the same questions I would have asked at age 15. “What’s its mass?” “What decay channel?” “What total width?” “How many sigma?” “How many events?”
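For readers unfamiliar with the jargon, "how many sigma" asks how statistically significant the observed excess of events is. Below is a minimal sketch of the usual one-sided conversion from sigma to a p-value; the 3-, 4- and 5-sigma values are shown only as examples, 5 sigma being the conventional particle-physics discovery threshold.

from scipy.stats import norm

for sigma in (3, 4, 5):
    p_value = norm.sf(sigma)    # one-sided tail probability of a standard normal
    print(f"{sigma} sigma -> p ~ {p_value:.1e}")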
When I was a teenager in the 1970s, particle physics was my great interest. It felt like I had a personal connection to all those kinds of particles that were listed in the little book of particle properties I used to carry around with me. The pions and kaons and lambda particles and f mesons and so on. At some level, though, the whole picture was a mess. A hundred kinds of particles, with all sorts of detailed properties and relations. But there were theories. The quark model. Regge theory. Gauge theories. S-matrix theory. It wasn’t clear what theory was correct. Some theories seemed shallow and utilitarian; others seemed deep and philosophical. Some were clean but boring. Some seemed contrived. Some were mathematically sophisticated and elegant; others were not.
By the mid-1970s, though, those in the know had pretty much settled on what became the Standard Model. In a sense it was the most vanilla of the choices. It seemed a little contrived, but not very. It involved some somewhat sophisticated mathematics, but not the most elegant or deep mathematics. But it did have at least one notable feature: of all the candidate theories, it was the one that most extensively allowed explicit calculations to be made. They weren’t easy calculations—and in fact it was doing those calculations that got me started having computers to do calculations, and set me on the path that eventually led to Mathematica. But at the time I think the very difficulty of the calculations seemed to me and everyone else to make the theory more satisfying to work with, and more likely to be meaningful.
At least in the early years, there were still surprises, though. In November 1974 there was the announcement of the J/psi particle. And one asked the same questions as today, starting with “What’s the mass?” (That particle’s was 3.1 GeV; today’s is 126 GeV.) But unlike with the Higgs particle, to almost everyone the J/psi was completely unexpected. At first it wasn’t at all clear what it could be. Was it evidence of something truly fundamental and exciting? Or was it in a sense just a repeat of things that had been seen before?
My own very first published paper (feverishly worked on over Christmas 1974 soon after I turned 15) speculated that it and some related phenomena might be something exciting: a sign of substructure in the electron. But however nice and interesting a theory may be, nature doesn’t have to follow it. And in this case it didn’t. And instead the phenomena that had been seen turned out to have a more mundane explanation: they were signs of an additional (4th) kind of quark (the c or charm quark).
In the next few years, more surprises followed. Mounting evidence showed that there was a heavier analog of the electron and muon—the tau lepton. Then in July 1977 there was another “sudden discovery”, made at Fermilab: this time of a particle based on the b quark. I happened to be spending the summer of 1977 doing particle physics at Argonne National Lab, not far away from Fermilab. And it was funny: I remember there was a kind of blasé attitude toward the discovery. Like “another unexpected particle physics discovery; there’ll be lots more”.
But as it turned out that’s not what happened. It’s been 35 years, and when it comes to new particles and the like, there really hasn’t been a single surprise. (The discovery of neutrino masses is a partial counterexample, as are various discoveries in cosmology.) Experiments have certainly discovered things—the W and Z bosons, the validity of QCD, the top quark. But all of them were as expected from the Standard Model; there were no surprises.
Needless to say, verifying the predictions of the Standard Model hasn’t always been easy. A few times I happened to be at the front lines. In 1977, for example, I computed what the Standard Model predicted for the rate of producing charm particles in proton-proton collisions. But the key experiment at the time said the actual rate was much lower. I spent ages trying to figure out what might be wrong—either with my calculations or the underlying theory. But in the end—in a rather formative moment for my understanding of applying the scientific method—it turned out that what was wrong was actually the experiment, not the theory.
In 1979—when I was at the front lines of the “discovery of the gluon”—almost the opposite thing happened. The conviction in the Standard Model was by then so great that the experiments agreed too early, even before the calculations were correctly finished. Though once again, in the end all was well, and the method I invented for doing analysis of the experiments is in fact still routinely used today.
By 1981 I myself was beginning to drift away from particle physics, not least because I’d started to work on things that I thought were somehow more fundamental. But I still used to follow what was happening in particle physics. And every so often I’d get excited when I heard about some discovery rumored or announced that seemed somehow unexpected or inexplicable from the Standard Model. But in the end it was all rather disappointing. There’d be questions about each discovery—and in later years there’d often be suspicious correlations with deadlines for funding decisions. And every time, after a while, the discovery would melt away. Leaving only the plain Standard Model, with no surprises.
Through all of this, though, there was always one loose end dangling: the Higgs particle. It wasn’t clear just what it would take to see it, but if the Standard Model was correct, it had to exist.
Read more at Wired Science