The eyes are the window into the soul -- or at least the mind, according to a new paper published in Perspectives on Psychological Science, a journal of the Association for Psychological Science. Measuring the diameter of the pupil, the part of the eye that changes size to let in more light, can show what a person is paying attention to. Pupillometry, as it's called, has been used in social and clinical psychology, and with adults, children, infants, and animals -- and it should be used even more, the authors say.
The pupil is best known for changing size in reaction to light. In a dark room, your pupils open wide to let in more light; as soon as you step outside into the sunlight, the pupils shrink to pinpricks. This keeps the retina at the back of the eye from being overwhelmed by bright light. Something similar happens in response to psychological stimuli, says Bruno Laeng of the University of Oslo, who cowrote the paper with Sylvain Sirois of Université du Québec à Trois-Rivières and Gustaf Gredebäck of Uppsala University in Sweden. When someone sees something they want to pay closer attention to, the pupil enlarges. It's not clear why this happens, Laeng says. "One idea is that, by essentially enlarging the field of the visual input, it's beneficial to visual exploration," he says.
However it works, psychological scientists can use the fact that people's pupils widen when they see something they're interested in.
Laeng has used pupil size to study people who had damage to the hippocampus, which usually causes very severe amnesia. Normally, if you show one of these patients a series of pictures, then take a short break, then show them another series of pictures, they don't know which ones they've seen before and which ones are new. But Laeng measured patients' pupils while they did this test and found that the patients did actually respond differently to the pictures they had seen before. "In a way, this is good news, because it shows that some of the brains of these patients, unknown to themselves, is actually capable of making the distinction," he says.
Pupil measurement might also be useful for studying babies. Tiny infants can't tell you what they're paying attention to. "Developmental psychologists have used all kinds of methods to get this information without using language," Laeng says. Seeing what babies are interested in can give clues to what they're able to recognize -- different shapes or sounds, for example. A researcher might show a child two images side by side and see which one they look at for longer. Measuring the size of a baby's pupils could do the same without needing a comparison.
Read more at Science Daily
Jan 28, 2012
Life Discovered On Dead Hydrothermal Vents
Scientists at USC have uncovered evidence that even when hydrothermal sea vents go dormant and their blistering warmth turns to frigid cold, life goes on.
Or rather, it is replaced.
A team led by USC microbiologist Katrina Edwards found that the microbes that thrive on hot fluid methane and sulfur spewed by active hydrothermal vents are supplanted, once the vents go cold, by microbes that feed on the solid iron and sulfur that make up the vents themselves.
The findings -- based on samples collected for Edwards by the U.S. Navy deep sea submersible Alvin (famed for its exploration of the Titanic in 1986) -- provide a rare example of ecological succession in microbes.
The findings were published in an mBio article authored by Edwards, USC postdoctoral researcher Jason Sylvan and Brandy Toner of the University of Minnesota.
Ecological succession is the biological phenomenon whereby one form of life takes the place of another as conditions in an area change -- a phenomenon documented in plants and animals.
For example, after a forest fire, different species of trees replace the older ones that stood for decades.
Scientists have long known that active vents provide the heat and nutrients necessary to sustain microbes. But dormant vents -- lacking a flow of hot, nutrient-rich water -- were thought to be devoid of life.
Hydrothermal vents form on the ocean floor through the motion of tectonic plates. Where the sea floor becomes thin, hot magma below the surface creates a fissure that spews geothermally heated water, which can reach temperatures of more than 400 degrees Celsius.
After a geologically brief time of actively venting into the ocean, the same sea floor spreading that brought them into being shuffles them away from the hotspot, and the vents grow cold and dormant.
"Hydrothermal vents are really ephemeral in nature," said Edwards, professor of biological sciences at the USC Dornsife College of Letters, Arts and Sciences.
Microbial communities on sea floor vents have been studied since the vents themselves were discovered in the late 1970s. Until recently, little attention had been paid to them once they stopped venting.
Sylvan said he would like to take samples on vents of various ages to catalog exactly how the succession from one population of microbes to the next occurs.
Edwards, who recently returned from a two-month expedition to collect samples of microbes deep below the ocean floor, said that the next step will be to see if the ecological succession is mirrored in microbes that exist beneath the surface of the rock.
"The next thing is to go subterranean," she said.
Read more at Science Daily
Jan 27, 2012
Life Beyond Earth? Underwater Caves in Bahamas Could Give Clues
Discoveries made in some underwater caves in the Bahamas by Texas A&M University at Galveston researchers could provide clues about how ocean life formed on Earth millions of years ago, and perhaps give hints of what types of marine life could be found on distant planets and moons.
Tom Iliffe, professor of marine biology at the Texas A&M-Galveston campus, and graduate student Brett Gonzalez of Trabuco Canyon, Calif., examined three "blue holes" in the Bahamas and found that layers of bacterial microbes exist in all three, but each cave had specialized forms of such life at different depths, suggesting that microbial life in such caves is continually adapting to changes in available light, water chemistry and food sources. Their work, also done in conjunction with researchers from Penn State University, has been published in Hydrobiologia.
"Blue holes" are so named because from an aerial view, they appear circular in shape with different shades of blue in and around their entrances. There are estimated to be more than 1,000 such caves in the Bahamas, the largest concentration of blue holes in the world.
"We examined two caves on Abaco Island and one on Andros Island," Iliffe explains. "One on Abaco, at a depth of about 100 feet, had sheets of bacteria that were attached to the walls of the caves, almost one inch thick. Another cave on the same island had bacteria living within poisonous clouds of hydrogen sulfide at the boundary between fresh and salt water. These caves had different forms of bacteria, with the types and density changing as the light source from above grew dimmer and dimmer.
"In the cave on Andros, we expected to find something similar, but the hydrogen sulfide layer there contained different types of bacteria," he adds. "It shows that the caves tend to have life forms that adapt to that particular habitat, and we found that some types of the bacteria could live in environments where no other forms of life could survive. This research shows how these bacteria have evolved over millions of years and have found a way to live under these extreme conditions."
Iliffe says the makeup of the microbial communities changes where the salt water meets fresh water within the caves. The microbes use chemical energy to produce their food and can survive in environments with very little oxygen or light.
There are tens of thousands of underwater caves scattered around the world, but less than 5 percent of these have ever been explored and scientifically investigated, Iliffe notes.
"These bacterial forms of life may be similar to microbes that existed on early Earth and thus provide a glimpse of how life evolved on this planet," he adds. "These caves are natural laboratories where we can study life existing under conditions analogous to what was present many millions of years ago.
"We know more about the far side of the moon than we do about these caves right here on Earth," he adds. "There is no telling what remains to be discovered in the many thousands of caves that no one has ever entered. If life exists elsewhere in our solar system, it most likely would be found in water-filled subterranean environments, perhaps equivalent to those we are studying in the Bahamas."
Read more at Science Daily
How Viruses Evolve, and in Some Cases, Become Deadly
Researchers at Michigan State University (MSU) have demonstrated how a new virus evolves, shedding light on how easy it can be for diseases to gain dangerous mutations. The findings appear in the current issue of the journal Science.
The scientists showed for the first time how the virus called "Lambda" evolved to find a new way to attack host cells, an innovation that took four mutations to accomplish. This virus infects bacteria, in particular the common E. coli bacterium. Lambda isn't dangerous to humans, but this research demonstrated how viruses evolve complex and potentially deadly new traits, noted Justin Meyer, MSU graduate student, who co-authored the paper with Richard Lenski, MSU Hannah Distinguished Professor of Microbiology and Molecular Genetics.
"We were surprised at first to see Lambda evolve this new function, this ability to attack and enter the cell through a new receptor--and it happened so fast," Meyer said. "But when we re-ran the evolution experiment, we saw the same thing happen over and over."
This paper follows recent news that scientists in the United States and the Netherlands produced a deadly version of bird flu. Even though bird flu is a mere five mutations away from becoming transmissible between humans, it's highly unlikely the virus could naturally obtain all of the beneficial mutations at once. However, it might evolve sequentially, gaining benefits one-by-one, if conditions are favorable at each step, Meyer added.
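A back-of-the-envelope sketch makes that point concrete (the mutation rate and population size below are purely illustrative, not values from the study): requiring five specific mutations at once multiplies five tiny probabilities together, while a sequential path only ever needs one mutation at a time to appear somewhere in a huge population.

```python
# Illustrative per-replication chance of one specific point mutation
# (a hypothetical rate chosen for the sketch, not a measured value).
p_single = 1e-6
replications = 1e9  # genome copies produced per "step" in a large population

# All five specific mutations arising together in one genome:
p_all_at_once = p_single ** 5
expected_quintuple_mutants = p_all_at_once * replications  # effectively zero

# Sequentially: if each single mutation is beneficial on its own, it can
# sweep the population before the next is needed, so each step only
# requires one mutation to show up somewhere among the copies.
expected_single_mutants_per_step = p_single * replications

print(f"simultaneous quintuple mutants expected: {expected_quintuple_mutants:.1e}")
print(f"single mutants expected per step:        {expected_single_mutants_per_step:.0f}")
```

Under these toy numbers a simultaneous quintuple mutant essentially never appears, while roughly a thousand single mutants arise at every step -- which is why a stepwise path, with each intermediate favored by selection, is the plausible route.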
Read more at Science Daily
Stone Age Social Networks May Have Resembled Ours
If you ever sit back and wonder what it might have been like to live in the late Pleistocene, you’re not alone. That’s right about when humans emerged from a severe population bottleneck and began to expand globally. But, apparently, life back then might not have been too different than how we live today (that is, without the cars, the written language, and of course, the smartphone). In this week’s Nature, a group of researchers suggest that we share many social characteristics with humans that lived in the late Pleistocene, and that these ancient humans may have paved the way for us to cooperate with each other.
Modern human social networks share several features, whether they operate within a group of schoolchildren in San Francisco or a community of millworkers in Bulgaria. The number of social ties a person has, the probability that two of a person’s friends are also friends, and the inclination for similar people to be connected are all very regular across groups of people living very different lives in far-flung places.
So, the researchers asked, are these traits universal to all groups of humans, or are they merely byproducts of our modern world? They also wanted to understand the social network traits that allowed cooperation to develop in ancient communities.
Of course, the researchers couldn’t poll a group of ancient humans, so they had to find a community living today that has a lifestyle that closely resembles those of people who might have lived 130,000 years ago. They chose the Hadza, a group of hunter-gatherers that live in Tanzania and are very insulated from industrialization and other modern influences. The Hadza community functions much like ancient hunter-gatherer groups did, by cooperating and sharing resources like food and child care. Hadza society is organized into camps, which are taken up and abandoned regularly; the makeup of each camp also changes often, with individuals leaving one camp to join another.
The researchers visited 17 Hadza camps and surveyed 205 adults. First, they looked at individuals’ donations of honey sticks to other community members. They also asked questions like, “With whom would you like to live after this camp ends?” From the answers, the researchers constructed a model of the Hadza social network.
Many features of the hunter-gatherer network are very similar to those of modern, industrialized communities. Those who live farther away from each other are less likely to name each other as friends. Individuals who name more friends are also named more frequently by others, even among people they did not claim as their friends. People who resemble each other in some physical way tend to be connected as well; for Hadza people, similarity in age, body fat, and handgrip strength increases the likelihood of friendship.
There are also several features of the Hadza social network that may facilitate extensive cooperation. People who cooperate (in this case, by donating more honey sticks) are connected to other cooperators, while non-cooperators tend to be connected to each other. This type of clustering allows cooperators to benefit from one another's large donations and to increase in frequency in the population.
Evolutionary biologists have predicted that, for cooperation to evolve and spread, there should be more variance in cooperative behavior between groups than within groups. This is another example of clustering, and it allows for differences in the productivity and fitness of groups with different cooperation levels. And indeed, in Hadza society, there is more variance in cooperation between different camps than within camps.
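That between- versus within-group comparison can be made concrete with a tiny sketch (the camp names and donation counts below are hypothetical, not the study's data):

```python
from statistics import mean, pvariance

# Hypothetical honey-stick donations per adult, grouped by camp.
camps = {
    "camp_a": [4, 5, 5, 6],  # a highly cooperative camp
    "camp_b": [1, 1, 2, 2],  # a less cooperative camp
    "camp_c": [3, 3, 4, 4],
}

# Within-camp variance: average spread of donations inside each camp.
within_camp = mean(pvariance(d) for d in camps.values())

# Between-camp variance: spread of the camp-level average donations.
between_camp = pvariance([mean(d) for d in camps.values()])

print(f"within-camp variance:  {within_camp:.2f}")
print(f"between-camp variance: {between_camp:.2f}")
```

When cooperation clusters by camp, as in these made-up numbers, the between-camp variance exceeds the within-camp variance -- the same pattern the researchers report for the Hadza.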
From these results, two things are clear: first, many of the universal characteristics of modern social networks also hold true for the Hadza, suggesting that these traits may have also governed the social networks of ancient humans. Second, several social features that have been predicted to facilitate the evolution and spread of cooperation are present in Hadza communities.
Read more at Wired Science
Even In Death, Egyptian Birds Were Fed
Ancient Egyptians placed food in the mouths or stomachs of animal mummies, suggesting that animals were treated equally to humans in death and perhaps also in life.
In this case, the mummies were sacred ibis birds. The findings, published in the Journal of Archaeological Science, are the first known examples of food placed directly in animal mummies. The primary organs were also removed, as was the practice for humans. It's thought that the ancient Egyptians wished to preserve these organs for continued function in the afterlife.
“That the birds received treatment for their own continued provision in the afterlife suggests that the afterlife welfare of the birds was important to the priests performing the embalming ritual on them,” lead author Andrew Wade told Discovery News.
“Certainly, in this sense, there appears to be some degree of equality between humans and animals in death,” added Wade, a University of Western Ontario anthropologist. “If that is the case, then the birds may have been deserving of a greater respect in life.”
Wade and his team analyzed a recently excavated mummified sacred ibis and found numerous snails in its bill, placed there by the people who prepared the body.
The researchers also used non-invasive computed tomography to look inside ibis mummies housed at Yale University’s Peabody Museum. One of these mummies was found to contain wheat. Wade said that temple-raised birds were likely fed grain, so again the bird was probably sent off into the afterlife with food for its spiritual journey.
Life was a mixed bag for animals in the ancient world, however. Wade said all of the birds from the study had broken necks and were likely deliberately killed, probably as a sacrifice to the god Thoth.
Humans at the time were also sacrificed to appease the gods. In the case of these birds, however, the sacrifices were part of a large-scale operation. Wade explained that “votive ibis mummies are found at Thoth shrines throughout Egypt, and are in their tens of thousands, and even millions, at the cult centers of Abydos, Saqqara, and Tuna el-Gebel.”
Overall, life for non-human animals in ancient Egypt was probably still comparable to that of humans. Some lived in the lap of luxury, but others may have been viewed more as tools for achieving certain goals.
Co-author Salima Ikram, a professor of Egyptology at the American University in Cairo, explained to Discovery News, “Animals had a very important role to play in Egypt, as totems for divinities, sources of food and thus life, and as a source of raw materials.”
But, she added, “Pets were often very spoiled, just as they are today, and received the same care in life and in death as did humans.”
Read more at Discovery News
Jan 26, 2012
Being Ignored Hurts, Even by a Stranger
Feeling like you're part of the gang is crucial to the human experience. We all get stressed when we're left out. A new study published in Psychological Science, a journal of the Association for Psychological Science, finds that a feeling of inclusion can come from something as simple as eye contact from a stranger.
Psychologists already know that humans have to feel connected to each other to be happy. A knitting circle, a church choir, or a friendly neighbor can all feed that need for connection. Eric D. Wesselmann of Purdue University wanted to know just how small a cue could help someone feel connected. He cowrote the study with Florencia D. Cardoso of the Universidad Nacional de Mar del Plata in Argentina, Samantha Slater of Ohio University, and Kipling D. Williams of Purdue. "Some of my coauthors have found, for example, that people have reported that they felt bothered sometimes even when a stranger hasn't acknowledged them," Wesselmann says. He and his authors came up with an experiment to test that.
The study was carried out with the cooperation of people on campus at Purdue University. A research assistant walked along a well-populated path, picked a subject, and either met that person's eyes, met their eyes and smiled, or looked in the direction of the person's eyes, but past them -- past an ear, for example, "looking at them as if they were air," Wesselmann says. When the assistant had passed the person, he or she gave a thumbs-up behind the back to indicate that another experimenter should stop that person. The second experimenter asked, "Within the last minute, how disconnected do you feel from others?"
People who had gotten eye contact from the research assistant, with or without a smile, felt less disconnected than people who had been looked at as if they weren't there.
Read more at Science Daily
Scientists Create First Free-Standing 3-D Cloak
Researchers in the US have, for the first time, cloaked a three-dimensional object standing in free space, bringing the much-talked-about invisibility cloak one step closer to reality.
Whilst previous studies have either been theoretical in nature or limited to the cloaking of two-dimensional objects, this study shows how ordinary objects can be cloaked in their natural environment, in all directions and from any observer position.
Published Jan. 26 in the Institute of Physics and German Physical Society's New Journal of Physics, the researchers used a method known as "plasmonic cloaking" to hide an 18-centimetre cylindrical tube from microwaves.
Some of the most recent breakthroughs in the field of invisibility cloaking have focussed on using transformation-based metamaterials -- inhomogeneous, human-made materials that have the ability to bend light around objects. This new approach, however, uses a different type of artificial material: plasmonic metamaterials.
When light strikes an object, it rebounds off its surface towards another direction, just like throwing a tennis ball against a wall. The reason we see objects is because light rays bounce off materials towards our eyes and our eyes are able to process the information.
Due to their unique properties, plasmonic metamaterials have the opposite scattering effect to everyday materials.
"When the scattered fields from the cloak and the object interfere, they cancel each other out and the overall effect is transparency and invisibility at all angles of observation.
"One of the advantages of the plasmonic cloaking technique is its robustness and moderately broad bandwidth of operation, superior to conventional cloaks based on transformation metamaterials. This made our experiment more robust to possible imperfections, which is particularly important when cloaking a 3D object in free-space," said study co-author Professor Andrea Alu.
In this instance, the cylindrical tube was cloaked with a shell of plasmonic metamaterial to make it appear invisible. The system was tested by directing microwaves towards the cloaked cylinder and mapping the resulting scattering both around the object and in the far-field. The cloak showed optimal functionality when the microwaves were at a frequency of 3.1 gigahertz and over a moderately broad bandwidth.
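For scale, a quick back-of-envelope calculation (ours, not the authors') converts that 3.1-gigahertz optimum into a free-space wavelength, which turns out to be the same order of size as the 18-centimetre tube itself:

```python
# Free-space wavelength at the cloak's reported 3.1 GHz optimum.
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Return the free-space wavelength (metres) for a given frequency (hertz)."""
    return C / freq_hz

lam = wavelength_m(3.1e9)
print(f"{lam * 100:.1f} cm")  # prints "9.7 cm" -- comparable to the 18 cm tube
```

That rough match is why, as the researchers note below, the size of object this method can cloak scales with the wavelength of operation.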
The researchers, from the University of Texas at Austin, have shown in previous studies that the shape of the object is irrelevant; oddly shaped and asymmetric objects can both be cloaked using this technique.
Moving forward, one of the key challenges for the researchers will be to demonstrate the cloaking of a 3D object using visible light.
"In principle, this technique could be used to cloak light; in fact, some plasmonic materials are naturally available at optical frequencies. However, the size of the objects that can be efficiently cloaked with this method scales with the wavelength of operation, so when applied to optical frequencies we may be able to efficiently stop the scattering of micrometre-sized objects.
Read more at Science Daily
Your Own Evil Lair, Available Immediately!
Now that you've finished your evil post-graduate degree, it's time to find your first evil home.
Don't look for an island volcano; the hot weather will make you woozy. An Antarctic mountain base is so '90s, and a doom-dungeon under your family's mansion is out of the question because your Mom would keep coming down to check on you.
Instead, why not explore the beauty of California for your base of evil operations? The Jamesburg Earth Station in Carmel Valley, Calif., is available for sale now!
Inside this wonderful late-1960s compound you'll have the wherewithal to bounce signals off the moon, and have enough room to conduct evil experiments and survive the nuclear disaster you might create.
The compound was built to receive and re-broadcast signals from the Apollo 11 moon mission in 1969. The station's main feature is a 98-foot (30-meter) tall satellite dish that has bounced signals off the moon within the last decade; the property also includes a 20,000-square-foot building, a basketball court, a three-bedroom house, a helicopter landing pad, two wells from which to draw water and a barn to store your Schwinn.
Now that I've got your attention: the dish is capable of transmitting your dastardly deeds to a geosynchronous satellite over the Pacific Ocean, broadcasting directly to Asia, South America, the South Pacific and the entire United States. In fact, this station played a role in capturing and distributing images of the 1989 Tiananmen Square protests.
But wait, there's more! The Jamesburg Earth Station has the added benefit of Fiber Optic OC-48 cable so your ransom demands will broadcast without a glitch and -- in case your evil plans elicit retaliation -- the facility can withstand a 5-megaton nuclear blast.
Read more at Discovery News
Jumping Spiders Use Blurry Vision to Pounce
Jumping spiders, which hunt by pouncing on their prey, gauge distances to their unsuspecting meals in a way that appears to be unique in the animal kingdom, a new study finds.
The superability boils down to seeing green, the researchers found.
There are several different visual systems that organisms use to accurately and reliably judge distance and depth. Humans, for example, have binocular stereovision. Because our eyes are spaced apart, they receive visual information from different angles, which our brains use to automatically triangulate distances. Other animals, such as insects, adjust the focal length of the lenses in their eyes, or move their heads side to side to create an effect called motion parallax — nearer objects will move across their field of vision more quickly than objects farther away.
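The binocular triangulation described above reduces, in the textbook pinhole model, to a one-line formula: depth equals focal length times eye separation divided by image disparity. A minimal sketch (our own illustration with made-up numbers, not figures from the article):

```python
def depth_from_disparity(focal_mm: float, baseline_mm: float, disparity_mm: float) -> float:
    """Rectified stereo pair: depth Z = f * B / d (all lengths in the same unit)."""
    return focal_mm * baseline_mm / disparity_mm

# A hypothetical eye pair: 17 mm focal length, eyes 65 mm apart, 1.1 mm disparity.
print(depth_from_disparity(17.0, 65.0, 1.1))  # ~1005 mm, i.e. about a metre away
```

The smaller the disparity, the farther the object -- which is why eyes spaced too close together, like a jumping spider's, get almost no disparity signal to work with.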
However, jumping spiders (Hasarius adansoni) lack any kind of focal adjustment system, have eyes that are too close together for binocular stereovision and don’t appear to use motion parallax while hunting. So how are these creatures able to perceive depth?
Researchers in Japan have now discovered that the arachnids accurately sense distances by comparing a blurry version of an image with a clear one, a method called image defocus.
Jumping spiders have four eyes densely packed in a row: two large principal eyes and two small lateral eyes. The spider uses its lateral eyes to sense the motion of an object, such as a fly, which it then zeros in on using its principal eyes, Akihisa Terakita, a biologist at Osaka City University in Japan and lead author of the new study, explained in an email to LiveScience.
Rather than having a single layer of photoreceptor cells, the retinas in the spider’s principal eyes have four distinct photoreceptor layers. When Terakita and his colleagues took a close look at the spider's principal eyes, they found that the two layers closest to the surface contain ultraviolet-sensitive pigments, whereas the deeper layers contain green-sensitive pigments.
However, because of the layers' respective distances from the lens of the eye, incoming green light is only focused on the deepest layer, while the other green-sensitive retinal layer receives defocused or fuzzy images. The researchers hypothesized that the spiders gauge depth cues from the amount of defocus in this fuzzy layer, which is proportional to the distance an object is to the lens of the eye.
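That defocus cue can be illustrated with the ordinary thin-lens blur-circle formula (a generic optics sketch under our own assumed numbers, not the paper's model): with one retinal layer effectively focused at a fixed distance, an object's blur grows monotonically as it moves away from that distance, so the amount of blur can be read back as range.

```python
def blur_circle(aperture: float, focal: float, focus_dist: float, obj_dist: float) -> float:
    """Thin-lens blur-circle diameter; all lengths in the same unit (here mm)."""
    return abs(aperture * focal * (obj_dist - focus_dist) /
               (obj_dist * (focus_dist - focal)))

# With a tiny lens focused far away, nearer flies are blurrier -- an unambiguous
# distance cue (illustrative numbers only):
for u in (10.0, 20.0, 40.0):  # candidate prey distances, mm
    print(f"{u:5.1f} mm -> blur {blur_circle(0.4, 0.5, 100.0, u):.4f} mm")
```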
To test this, they placed a spider and three to six fruit flies in a cylindrical plastic chamber, housed in a white styrene foam box. They then bathed the bugs in different colored lights: If the defocus of green light is important to the spiders, then they should not be able to accurately judge jumping distance in the absence of green light.
Read more at Discovery News
Jan 25, 2012
Dogs were man's best friend 33,000 years ago
A pair of dog skulls uncovered in digs in Siberia and Belgium, each 33,000 years old, show dogs were domesticated long before any other animal, including sheep, cows or goats.
The skulls had shorter snouts and wider jaws than wild animals, such as wolves, which use their long snouts to hunt. It suggests dogs were used for companionship and protection.
Scientists used carbon dating to determine the age of the skulls, then examined the bone structures.
"Both the Belgian find and the Siberian find are domesticated species based on morphological characteristics," said Greg Hodgins, researcher at the University of Arizona's Accelerator Mass Spectrometry Lab.
"Essentially, wolves have long thin snouts and their teeth are not crowded, and domestication results in this shortening of the snout and widening of the jaws and crowding of the teeth.
"The interesting thing is that typically we think of domestication as being cows, sheep and goats, things that produce food through meat or secondary agricultural products such as milk, cheese and wool and things like that.
Read more at The Telegraph
Infants Grasp Gravity with Innate Sense of Physics
Infants as young as 2 months old already have basic knowledge of "intuitive physics," researchers report in a new study.
Most studies into infant cognition employ eye-tracking technology - psychologists can tease out what an infant is thinking and what she considers to be unexpected by following her gaze in different scenarios. This method, called violation of expectation, involves showing babies photos, videos or events that proceed as expected, followed by others that break everyday rules. If the infant understands the implicit rules, he or she will show little interest in an expected situation, but will stare at images of a surprising event.
But at what point in their development do babies begin to understand how the physical world works?
"We believe that infants are born with expectations about the objects around them, even though that knowledge is a skill that's never been taught," Kristy vanMarle, an assistant professor of psychological sciences at the University of Missouri, said in a statement. "As the child develops, this knowledge is refined and eventually leads to the abilities we use as adults."
To come to this conclusion, vanMarle and her colleague, Susan Hespos, a psychologist at Northwestern University, reviewed infant cognition research conducted over the last 30 years. They found that infants already have an intuitive understanding of certain physical laws by 2 months of age, when they start to track moving objects with both eyes consistently and can be tested with eye-tracking technology.
For instance, at this age they understand that unsupported objects will fall (gravity) and hidden objects don't cease to exist. In one test, researchers placed an object inside of a container and moved the container; 2-month-old infants knew that the hidden object moved with the container.
This innate "physics" knowledge only grows as the infants experience their surroundings and interact more with the world. By 5 months of age, babies understand that solid objects have different properties than noncohesive substances, such as water, the researchers found.
In a 2009 study, a research team (which included Hespos) habituated 5-month-old infants to either a blue solid or a blue liquid in a glass cup, which appeared to be the same when at rest. They tipped the glasses left and right, and poured the contents into other glasses, allowing the infants to form ideas about how the substances worked. Infants habituated to the liquid (but not the solid) weren't surprised that straws could penetrate it, but were confused when straws couldn't penetrate the blue solid. The opposite happened with infants habituated to the solid.
Hespos and vanMarle also learned that babies have rudimentary math abilities: Six-month-old infants can discriminate between numbers of dots (if one set held twice as many dots as the other), and 10-month-old infants can pick out which of two cups holds more liquid (if one cup held four times as much liquid as the other). Also at 10 months of age, babies will consistently choose larger amounts of food - such as crackers - in cups, though only if there are no more than three items in any cup.
While infants appear to be born with intuitive physics knowledge, the researchers believe that parents can further assist their children in developing expectations about the world through normal interactions, such as talking, playing peek-a-boo or letting them handle various safe objects.
Read more at Discovery News
Is the Tasmanian Tiger Alive?
Two bike-riding brothers noticed something odd near a creek in northern Tasmania about a week ago. Levi and Jarom Triffitt, members of a stunt trail bike team, found what seemed to be a strange skull and jawbone. They claimed to have found the skull of a strange animal called a thylacine, which looks something like a striped dog.
Why was this such an exciting find? Because the last known thylacine is believed to have died in a zoo in the Tasmanian capital of Hobart on September 7, 1936.
Finding an intact, recent thylacine skull three-quarters of a century later -- especially out in the open -- would indicate that the animals are indeed still alive and roaming the rural areas of this small island south of Australia.
Some people are convinced that the thylacines still exist, almost like a Tasmanian version of Bigfoot (unlike Bigfoot, of course, there’s hard evidence that thylacines were real). According to thylacine expert Andrew Pask, an Australian zoologist at the University of Melbourne, the fact that the animals no longer exist hasn’t stopped people from seeing them.
In an interview on the MonsterTalk podcast, Pask said that “Since it was named extinct, every year people come forth and say there’s been sightings of the thylacine. But there’s been no evidence ever brought forward for it. A few years ago in Australia there was a magazine that offered a million dollar reward for actual proof of a living thylacine in the wild. So people set off in droves trying to find the thylacine, but nobody was ever able to. Tasmania’s not that big, and even its most inaccessible parts are not that inaccessible... I think if these were out there in the wild they would have been discovered by now.”
Scientists at Tasmania’s Queen Victoria Museum examined the new skull and identified it as from the canid family -- specifically, a dog. The similarity was not all in the Triffitt brothers’ imaginations; the skulls of the dog-sized marsupial do resemble skulls of domestic dogs. The general public would be unfamiliar with a thylacine skull, and not know, for example, that the thylacine has two more front teeth in the upper jaw than a dog.
Read more at Discovery News
Glow-in-the-Dark Mammal Discovered
Iridescence -- a lustrous rainbow-like play of color caused by differential refraction of light waves -- has just been detected in the fur of golden moles.
Aside from the “eye shine” of nocturnal mammals, seen when a headlight or flashlight strikes their eyes, the discovery marks the first known instance of iridescence in a mammal. The findings, published in the latest Royal Society Biology Letters, reveal yet another surprise: the golden moles are completely blind, so they cannot even see their gorgeous fur.
“It is densely packed and silky, and has an almost metallic, shiny appearance with subtle hints of colors ranging between species from blue to green,” co-author Matthew Shawkey told Discovery News.
Shawkey, an associate professor in the Integrated Bioscience Program at the University of Akron, was first inspired to study golden moles after an undergraduate student of his, Holly Snyder, wrote her honors thesis about iridescence. Snyder is lead author of the paper.
For the study, the scientists pulled hairs from specimens of four golden mole species. Using high tech equipment, such as scanning electron microscopy and transmission electron microscopy, the researchers analyzed the structure of the hairs, down to their smallest elements.
The researchers determined that the hairs are indeed iridescent. They further discovered that each hair has a flattened shape with reduced cuticular scales that provide a broad and smooth surface for light reflection. The scales form multiple layers of light and dark materials of consistent thickness, very similar to those seen in iridescent beetles.
Optical modeling suggests that the multiple layers act as reflectors that produce color through interference with light. The sensitivity of this mechanism to slight changes in layer thickness and number explains color variability.
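The layered-reflector mechanism is the same one behind quarter-wave interference mirrors, where the strongest reflection falls at roughly four times the optical thickness of a layer. A hedged sketch (generic thin-film optics with hypothetical numbers, not measurements from the paper):

```python
def peak_wavelength_nm(refractive_index: float, thickness_nm: float) -> float:
    """Quarter-wave condition: strongest reflection near lambda = 4 * n * d."""
    return 4.0 * refractive_index * thickness_nm

# Keratin-like layers (n ~ 1.55) about 80 nm thick would reflect most strongly
# in the blue-green -- consistent with the colors described in the fur:
print(peak_wavelength_nm(1.55, 80.0))  # 496.0 nm
```

Because the peak wavelength is directly proportional to layer thickness, small thickness changes shift the reflected color -- exactly the sensitivity the optical modeling invokes to explain the blue-to-green variation between species.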
What remains a mystery is why blind animals would have such eye-catching fur.
Ancestors of the moles were sighted, so it’s possible that the iridescence is a carryover from those times. “However, the moles have diverged considerably from these ancestors so there had to be some selection pressure other than communication to keep their color intact,” Shawkey said.
Another possibility is that the fur somehow wards off the mole’s sighted predators. But Shawkey said shiny fur “would seem to make them more conspicuous,” doing just the opposite. The moles are not poisonous, so the coloration does not serve as a warning to other animals.
The researchers instead think that iridescence may be a byproduct of the fur’s composition, since the structure also streamlines the mole’s profile and creates less turbulence underground, permitting the animals to move more easily through dirt and sand.
“Many of the nanostructures producing iridescent colors have non-optical properties like enhanced rigidity (think mother of pearl) or enhanced water repellency (such as seen in Morpho butterflies),” Shawkey explained. “In the former case, the color, like in the moles, clearly has no communication function and is a byproduct.”
Read more at Discovery News
Aside from the “eye shine” of nocturnal mammals, seen when a headlight or flashlight strikes their eyes, the discovery marks the first known instance of iridescence in a mammal. The findings, published in the latest Royal Society Biology Letters, reveal yet another surprise: the golden moles are completely blind, so they cannot even see their gorgeous fur.
“It is densely packed and silky, and has an almost metallic, shiny appearance with subtle hints of colors ranging between species from blue to green,” co-author Matthew Shawkey told Discovery News.
Shawkey, an associate professor in the Integrated Bioscience Program at the University of Akron, was first inspired to study golden moles after an undergraduate student of his, Holly Snyder, wrote her honors thesis about iridescence. Snyder is lead author of the paper.
For the study, the scientists pulled hairs from specimens of four golden mole species. Using high tech equipment, such as scanning electron microscopy and transmission electron microscopy, the researchers analyzed the structure of the hairs, down to their smallest elements.
The researchers determined that the hairs are indeed luminescent. They further discovered that each hair has a flattened shape with reduced cuticular scales that provide a broad and smooth surface for light reflection. The scales form multiple layers of light and dark materials of consistent thickness, very similar to those seen in iridescent beetles.
Optical modeling suggests that the multiple layers act as reflectors that produce color through interference with light. The sensitivity of this mechanism to slight changes in layer thickness and number explains color variability.
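The interference logic can be illustrated with a minimal sketch (the refractive index and layer thicknesses below are hypothetical round numbers, not measurements from the study): for a single thin layer of refractive index n and thickness d over a lower-index medium, reflection maxima fall where the optical path 2nd equals a half-integer number of wavelengths, and a small change in d visibly shifts the reflected color.

```python
# Minimal thin-film interference sketch (hypothetical values, not from the study).
# For a film of refractive index n and thickness d_nm over a lower-index medium,
# reflection maxima occur where 2 * n * d = (m + 0.5) * wavelength.

def reflection_maxima(n, d_nm, visible=(380, 750)):
    """Return visible wavelengths (nm) of constructive reflection."""
    lo, hi = visible
    peaks = []
    m = 0
    while True:
        wavelength = 2 * n * d_nm / (m + 0.5)
        if wavelength < lo:          # orders beyond this are in the ultraviolet
            break
        if wavelength <= hi:
            peaks.append(round(wavelength, 1))
        m += 1
    return peaks

# A ~200 nm keratin-like layer (n ~ 1.55) reflects in the blue-violet;
# thickening it by just 20 nm shifts the peak toward blue-green:
print(reflection_maxima(1.55, 200))  # [413.3]
print(reflection_maxima(1.55, 220))  # [454.7]
```

The 20 nm difference between the two calls shifts the reflected peak by roughly 40 nm, which is the kind of sensitivity to layer thickness the modeling invokes to explain color variation between species.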
What remains a mystery is why blind animals would have such eye-catching fur.
Ancestors of the moles were sighted, so it’s possible that the iridescence is a carryover from those times. “However, the moles have diverged considerably from these ancestors so there had to be some selection pressure other than communication to keep their color intact,” Shawkey said.
Another possibility is that the fur somehow wards off the mole’s sighted predators. But Shawkey said shiny fur “would seem to make them more conspicuous,” doing just the opposite. The moles are not poisonous, so the coloration does not serve as a warning to other animals.
The researchers instead think that iridescence may be a byproduct of the fur’s composition, since the structure also streamlines the mole’s profile and creates less turbulence underground, permitting the animals to move more easily through dirt and sand.
“Many of the nanostructures producing iridescent colors have non-optical properties like enhanced rigidity (think mother of pearl) or enhanced water repellency (such as seen in Morpho butterflies),” Shawkey explained. “In the former case, the color, like in the moles, clearly has no communication function and is a byproduct.”
Read more at Discovery News
Laser Tests Offer Clue to Magnetic Field Mystery
Gravity may be the master of the universe, but it has had a key assistant in shaping the cosmos -- magnetism. Exactly how that force, which arises from the motion of electric charges, got its start, however, has been a mystery -- until now.
A new experiment shows that a relatively simple system that's not initially magnetized can generate magnetic fields out of nothing, said University of Michigan astrophysicist Paul Drake.
"From the standard theories of the Big Bang, you don't start with a strong magnetic field. It has to arise out of what the universe does," Drake told Discovery News.
Working in a laboratory in France, scientists fired high-energy lasers that pulse in a billionth of a second with a trillion times the intensity of sunlight at targets inside a helium-filled chamber. The targets are carbon rods, similar to what is found in ordinary pencil lead.
When they pulse, some of the carbon atoms are ripped apart and explode, creating a blast wave that moves out into the gas and generates a magnetic field.
The process is similar to what happens when a star explodes. It shows one mechanism by which the universe formed and evolved.
Gravity gets the process started, eventually giving rise to collapsing objects that send out shock waves.
"Shocks are the driving force for the formation of magnetic field, and all this precedes galaxy formation," lead researcher Gianluca Gregori, with Oxford University in the United Kingdom, told Discovery News.
Once magnetic fields are established, turbulence takes over, making them larger and sustaining them over the eons.
"It's been rather mysterious that the universe is as magnetized as it is," Drake said. "When you do simple calculations, any magnetic field formed in the early phases of the universe one would think should have vanished by now."
Magnetic fields are found everywhere in the universe, even in places where they seemingly shouldn't exist, such as the voids between clusters of galaxies.
Read more at Discovery News
Jan 24, 2012
Supermom Primates Raise Twins
The first occurrence of twins for free-ranging Tibetan macaques has just been documented, revealing how rare survivorship of twins can be in many primate species, and how important mothers are to their success.
It’s possible that only supermom primates, humans included, can properly raise twins. In the wild, twins often die shortly after birth, or only one lives into adulthood.
In the case of the Tibetan macaque mother of twins, described in the latest issue of the journal Primates, there is little doubt that she was qualified for the task.
“She appeared to remain quite healthy,” co-author Megan Matheson told Discovery News. “I was very impressed when I observed her in August of 2010 running with two, by now quite large, infants hanging on!”
“At last check, the twins were still alive,” added Matheson, a professor of psychology at Central Washington University. “They would be not quite 2 years old now, so still in the young juvenile stage. The twins were males, so they are not considered to be adults until 7 years of age.”
Matheson and her colleagues discovered the twins among a group of free-ranging Tibetan macaques at Huangshan, China. They studied the mother for 5 months after the birth, comparing her activities to those of other adult females with single or no offspring.
The researchers found that the mother monkey with twins spent more time foraging and resting, but that the quality of her social interactions did not seem to differ much from that of other macaque females.
In fact, she seemed to enjoy showing off her twins to others, who displayed an interest in the youngsters. For some reason, she tended to present one twin more frequently than its sibling. The researchers are not certain whether that was because she is right-handed, and simply handed over the twin on her right side more often, or because she preferred that particular individual.
Males for this primate species, and many others, do not share parental duties. Female Tibetan macaques may mate with multiple males during the primary mating season.
“Dominant males have priority of access, but more subordinate males may sneak copulations,” Matheson said. As a result, paternity can be uncertain.
“Generally speaking, in these species where paternity uncertainty is the norm, adult males will be protective of all infants if they are threatened, but don’t necessarily favor any one for special contact,” she continued.
Some female primates help out in what’s called “aunting behavior,” but that doesn’t happen much among Tibetan macaques. Matheson suspects it’s because “the mothers are not overly protective, and thus give the infants a lot of freedom once they’re able to move about independently. Even when they’re still nursing, the mother will retrieve infants when she leaves an area, but the infant is often exploring or playing with others while his or her mother forages.”
Successful parenting of twins among all non-human primates is rare, save for one family of South American monkeys, the Callitrichidae, which includes tamarins and marmosets. Females of this primate family routinely give birth to twins, with males providing substantial care. Sometimes mothers and dads of these primates will even raise triplets.
Among humans, studies reveal that women who deliver twins live longer, have more children than expected, bear babies at shorter intervals over a longer time, and are older at their last birth.
Read more at Discovery News
Was First Winged Dinosaur Jet Black?
The winged dinosaur Archaeopteryx, which may represent the missing link in birds' evolution to powered flight, had at least some jet-black feathers, according to new research published today in Nature Communications.
Aside from creating more of a cool visual for this raven-sized animal, the discovery suggests that Archaeopteryx could fly, since the color and parts of cells that would have supplied the black pigment are evidence that the wing feathers were rigid and durable. These are traits that probably would have permitted flight.
The research team, led by evolutionary biologist Ryan Carney of Brown University, made another important discovery. The feather structure of Archaeopteryx turns out to have been identical to that of living birds, providing strong evidence that wing feathers evolved as early as 150 million years ago during the Jurassic Period.
"If Archaeopteryx was flapping or gliding, the presence of melanosomes [pigment-producing parts of a cell] would have given the feathers additional structural support," Carney was quoted as saying in a press release. "This would have been advantageous during this early evolutionary stage of dinosaur flight."
Archaeopteryx has been at the center of debate among scientists, who disagree over whether the animal was a non-avian dinosaur or a bird. It may even have been an intermediate between the two.
As for the feather itself, it was discovered in a limestone deposit in Germany in 1861, a few years after the publication of Charles Darwin's On the Origin of Species. Fittingly, the traits that might make Archaeopteryx an evolutionary intermediate between dinosaurs and birds are a combination of reptilian (teeth, clawed fingers and a bony tail) and avian (feathered wings and a wishbone) features.
Carney and his team patiently used a scanning electron microscope to locate patches of hundreds of the pigment structures, the melanosomes, still encased in the fossilized feather.
"The third time was the charm, and we finally found the keys to unlocking the feather's original color, hidden in the rock for the past 150 million years," said Carney.
The scientists also examined fossilized barbules within the feather. These are tiny, rib-like appendages that overlap and interlock like zippers to give a feather rigidity and strength. The barbules and the alignment of melanosomes within them, Carney said, are all identical to those found in modern birds.
The black coloration offers possible clues about the behavior of Archaeopteryx. Black can serve to regulate body temperature, act as camouflage, be employed for display and, again, support flight.
Read more at Discovery News
Hundreds of Meteorites Uncovered in Antarctica
A gang of heavily insulated scientists has wrapped up its Antarctic expedition, with its members thawing out from the experience, but pleased to have bagged more than 300 space rocks.
They are participants in the Antarctic Search for Meteorites program, or ANSMET for short. Since 1976, ANSMET researchers have been recovering thousands of meteorite specimens from the East Antarctic ice sheet. ANSMET is funded by the Office of Polar Programs of the National Science Foundation.
According to the ANSMET website, the specimens are currently the only reliable, continuous source of new, nonmicroscopic extraterrestrial material. Given that there are no active planetary sample-return missions coming or going at the moment, the retrieval of meteorites is the cheapest and only guaranteed way to recover new things from worlds beyond the Earth.
Special place
"It has been another interesting season at Miller Range," said Ralph Harvey, associate professor in the department of Earth, Environmental and Planetary Sciences at Case Western Reserve University in Cleveland, Ohio.
"The place is special for us because we seem to find meteorites everywhere, in every little nook and cranny, almost unpredictable," Harvey told SPACE.com. "And it did it again ... lots of places we checked out just to be complete proved to have dozens of specimens."
Harvey is the principal investigator for the ANSMET program. "I've been leading field parties since 1991 and I think this year marks my 25th overall with the program," Harvey said.
Harvey likens his search for meteorites to a farmer who's used to harvesting corn in a field finding it growing in the barn, in the garage, in the basement and other surprising spots.
The meteorite hunting wasn't all smooth, though.
The team was held back significantly by early snowfalls that buried the meteorites. A few strong windstorms cleared some of the snow, but the whipping winds did not clear all of it, Harvey explained.
"The total number of meteorites is less than half what I would have predicted, again primarily because of that early snow hiding all the specimens," Harvey said. "We'll be going back to the Miller Range at least one more time and maybe two."
Celestial collectibles
Antarctica is viewed as the world's premier meteorite hunting ground, and for good reason.
While meteorites fall in a random fashion all over the globe, the East Antarctic ice sheet is a "desert of ice," a stark scene that enhances the likelihood of finding meteorites, which are usually undisturbed and stand out against the background.
In the just-concluded search, the team's bounty of celestial collectibles brought the total number of meteorites found in ANSMET history to 20,000.
Along with Harvey, the meteorite hunters are:
John Schutt, an ANSMET mountaineer for over 30 years who once again played that role. He recently got an honorary doctorate recognizing his contributions to planetary science.
Jim Karner, a postdoctoral researcher working with the ANSMET program and a specialist in Martian meteorites from Case Western Reserve. He's a veteran of four ANSMET expeditions.
Christian Schrader, a geologist from NASA Marshall Space Flight Center in Huntsville, Ala., who has done significant rock work, particularly in studying lunar meteorites.
Katie Joy, planetary geologist, most recently from the Lunar and Planetary Institute in Houston, Tex., and a lunar meteorite researcher.
Anne Peslier, a planetary scientist from NASA's Johnson Space Center in Houston who has done a great deal of work on Martian meteorites.
Jake Maule, a planetary scientist, recently of Carnegie Institute in Washington, D.C., with a specialty in astrobiology.
Jesper Holst, a Ph.D. student studying planetary geochemistry at the University of Copenhagen.
Tim Swindle, a planetary geochemist from the University of Arizona, taking part in the second half of the season, and a veteran of several previous expeditions.
Read more at Discovery News
Waiting for Death Valley's Next Big Bang
California’s Death Valley comes by its morbid reputation honestly, but not for the reason you might think. True, this stark desert holds the record for the hottest, driest spot in North America. Scientists now say it also poses a different threat: spectacularly explosive volcanic eruptions.
A new study from geochemists at Columbia University’s Lamont-Doherty Earth Observatory reveals that around the year 1300, a volcanic explosion in the northern part of the valley ripped a half-mile-wide hole in the overlying sedimentary rock, blasting out superheated steam, volcanic ash and deadly gases. Study co-author Brent Goehring, now at Purdue University, described this dramatic event yesterday in a press release:
…this would have created an atom-bomb-like mushroom cloud that collapsed on itself in a donut shape, then rushed outward along the ground at some 200 miles an hour, as rocks hailed down. Any creature within two miles or more would be fatally thrown, suffocated, burned and bombarded, though not necessarily in that order.
In the new report, published in the 18 January issue of Geophysical Research Letters, Goehring and his colleagues suggest such an event created Ubehebe (pronounced YOU-bee-HEE-bee) volcanic crater, the youngest and largest of a dozen similar craters in northern Death Valley.
What's more, conditions may be ripe for a repeat performance.
Geologists had long assumed that Ubehebe and its sister craters were thousands (or even tens of thousands) of years old. Those supposed ages would put the eruptions at the end of the last ice age, when the U.S. southwest was considerably wetter than it is today. And that made perfect sense, considering all geologic clues suggest the magma mixed with water, which is what made them so explosive.
Called phreatomagmatic eruptions, such events usually occur where water is abundant—near the edge of a lake, say, or at the bottom of the ocean.
But when the Lamont geochemists used a new-fangled isotope technique to date the volcanic craters, they turned out to be surprisingly young. They ranged from 2,100 years old to 800 years old—meaning they formed long after California had dried out.
That left the researchers with only groundwater to blame. Indeed, the present-day water table is relatively shallow, probably only 150 meters below the floor of Ubehebe crater. “This and the youth of the most recent activity suggest that the Ubehebe volcanic field may constitute a more significant hazard than generally appreciated,” the researchers concluded.
This bold, new statement stems from the remarkable ability of quartz-rich pebbles, which now litter the desert soil around the craters, to track the time since an explosion ripped them out of the ground. Ever since that moment, cosmic rays striking certain oxygen atoms within the quartz grains have been creating radioactive beryllium-10, a so-called cosmogenic nuclide with a known rate of decay.
Measuring beryllium-10 is the basis for a dating technique similar to one Dutch scientists recently used to home in on the date of a prehistoric tsunami, based on the grains of sand the wave washed ashore.
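The exposure-dating logic can be sketched in a few lines (the production rate and measured concentration below are illustrative placeholders, not the study's calibration): a cosmogenic nuclide accumulates at a production rate P while decaying with constant lam, so N(t) = (P / lam) * (1 - exp(-lam * t)), which can be inverted to recover the exposure age t from a measured concentration N.

```python
import math

# Sketch of cosmogenic-nuclide exposure dating (illustrative numbers, not the
# study's calibration). Be-10 builds up in quartz at production rate P
# (atoms/g/yr) while decaying with constant lam, so after exposure time t:
#     N(t) = (P / lam) * (1 - exp(-lam * t))

BE10_HALF_LIFE_YR = 1.39e6                     # Be-10 half-life in years
LAM = math.log(2) / BE10_HALF_LIFE_YR          # decay constant (1/yr)

def exposure_age(n_atoms_per_g, production_rate):
    """Exposure age (yr) from a Be-10 concentration and a local production rate."""
    return -math.log(1 - n_atoms_per_g * LAM / production_rate) / LAM

# With an assumed production rate of 5 atoms/g/yr, decay is negligible on
# ~1,000-year timescales, so the age is essentially N / P:
age = exposure_age(4000.0, 5.0)
print(round(age))  # ~800 years, the age quoted for the youngest crater
```

Because the Be-10 half-life dwarfs the crater ages, the method here is effectively a linear accumulation clock; the exponential term only matters for exposures of hundreds of thousands of years or more.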
Read more at Discovery News
Jan 23, 2012
New Understanding of Chronic Pain
Millions of people worldwide suffer from a type of chronic pain called neuropathic pain, which is triggered by nerve damage. Precisely how this pain persists has been a mystery, and current treatments are largely ineffective. But a team led by scientists from The Scripps Research Institute, using a new approach known as metabolomics, has now discovered a major clue: dimethylsphingosine (DMS), a small-molecule byproduct of cellular membranes in the nervous system. In their new study, the scientists found that DMS is produced at abnormally high levels in the spinal cords of rats with neuropathic pain and appears to cause pain when injected. The findings suggest that inhibiting this molecule may be a fruitful avenue for drug development.
"We think that this is a big step forward in understanding and treating neuropathic pain, and also a solid demonstration of the power of metabolomics," said Gary J. Patti, a research associate at Scripps Research during the study, and now an assistant professor of genetics, chemistry, and medicine at Washington University in St. Louis. Patti is a lead author of the report on the study, which appeared online in the journal Nature Chemical Biology on January 22, 2012.
Scientists who want to understand what makes diseased cells different from healthy cells have often looked for differences in levels of gene expression or cellular proteins -- approaches known respectively as genomics and proteomics. Metabolomics, by contrast, concerns differences in the levels of small-molecule metabolites, such as sugars, vitamins, and amino acids, that serve as the building blocks of basic cellular processes. "These are the molecules that are actually being transformed during cellular activity, and tracking them provides more direct information on what's happening at a biochemical level," Patti said.
Metabolomics is increasingly used to find biochemical markers or signatures of diseases. One of the most relied-upon "metabolome" databases, METLIN, was set up at Scripps Research in 2005, and now contains data on thousands of metabolites found in humans and other organisms. However, in this case the research team hoped to do more than find a metabolic marker of neuropathic pain.
"The idea was to apply metabolomic analysis to understand the biochemical basis of the neuropathic pain condition and reveal potential therapeutic targets," said Gary Siuzdak, a senior investigator in the study, who is professor of chemistry and molecular biology and director of the Scripps Research Center for Metabolomics. "We call this approach 'therapeutic metabolomics'."
The scientists began with a standard model of neuropathic pain in lab rats. Patti, Siuzdak, and their colleagues sampled segments of a tibial leg nerve whose earlier injury had triggered neuropathic pain, as well as the rats' blood plasma and tissue from the rats' spinal cords. The scientists then determined the levels of metabolites in these tissues and compared them to levels from control animals.
Unexpectedly, the scientists found that nearly all the major abnormalities in metabolite levels were present not in the injured leg nerve fiber, nor in blood plasma, but in tissue from the "dorsal horn" region of the spinal cord, which normally receives signals from the tibial nerve and relays them to the brain. "After the nerve is damaged, it degrades and rebuilds itself at the site of the injury, but remodeling also occurs, possibly over a longer period, at the terminus of the nerve where it connects to dorsal horn neurons," Patti said.
Next, the researchers set up a test to see which of the abnormally altered metabolites in dorsal horn tissue could evoke signs of pain signaling in cultures of rat spinal cord tissue. One metabolite stood out -- a small molecule that didn't appear in any of the metabolome databases. Patti eventually determined that the molecule was DMS, an apparent byproduct of cellular reactions involving sphingomyelin, a major building block for the insulating sheaths of nerve fibers. "This is the first characterization and quantitation of DMS as a naturally occurring compound," Patti noted. When the scientists injected it into healthy rats, at a dose similar to that found in the nerve-injured rats, it induced pain.
DMS seems to cause pain at least in part by stimulating the release of pro-inflammatory molecules from neuron-supporting cells called astrocytes. Patti, Siuzdak, and their colleagues are now trying to find out more about DMS's pain-inducing mechanisms -- and are testing inhibitors of DMS production that may prove to be effective treatments or preventives of neuropathic pain.
"We're very excited about this therapeutic metabolomics approach," said Siuzdak. "In fact, we're already involved in several other projects in which metabolites are giving us a direct indication of disease biochemistry and potential treatments."
Read more at Science Daily
Mysterious 'Winged' Structure from Ancient Rome Found
A recently discovered mysterious "winged" structure in England, which in the Roman period may have been used as a temple, presents a puzzle for archaeologists, who say the building has no known parallels.
Built around 1,800 years ago, the structure was discovered in Norfolk, in eastern England, just to the south of the ancient town of Venta Icenorum. The structure has two wings radiating out from a rectangular room that in turn leads to a central room.
"Generally speaking, [during] the Roman Empire people built within a fixed repertoire of architectural forms," said William Bowden, a professor at the University of Nottingham, who reported the find in the most recent edition of the Journal of Roman Archaeology. The investigation was carried out in conjunction with the Norfolk Archaeological and Historical Research Group.
The winged shape of the building appears to be unique in the Roman Empire, with no other example known. "It's very unusual to find a building like this where you have no known parallels for it," Bowden told LiveScience. "What they were trying to achieve by using this design is really very difficult to say."
The building appears to have been part of a complex that includes a villa to the north and at least two other structures to the northeast and northwest. An aerial photograph suggests the existence of an oval or polygonal building with an apse located to the east.
The foundation of the two wings and the rectangular room was made of a thin layer of rammed clay and chalk. "This suggests that the superstructure of much of the building was quite light, probably timber and clay-lump walls with a thatched roof," writes Bowden. This raises the possibility that the building was not intended to be used long term.
The central room, on the other hand, was made of stronger stuff, with its foundations crafted from lime mortar mixed with clay and small pieces of flint and brick. That section likely had a tiled roof. "Roman tiles are very large things, they’re very heavy," Bowden said.
Sometime after the demise of this wing-shaped structure, another building, this one decorated, was built over it. Archaeologists found post holes from it with painted wall plaster inside.
Bowden said few artifacts were found at the site, and none that could be linked to the winged structure with certainty. A plough had ripped through the site at some point, scattering debris. Also, illicit metal detecting is a major problem in the Norfolk area, with people using metal detectors to locate and remove artifacts, something that may have happened at this site.
Still, even when the team found undisturbed layers, there was little in the way of artifacts. "This could suggest that it [the winged building] wasn't used for a very particularly long time," Bowden said.
Researchers are not certain what the building was used for. While its elevated position made it visible from the town of Venta Icenorum, the foundations of the radiating wings are weak. "It's possible that this was a temporary building constructed for a single event or ceremony, which might account for its insubstantial construction," writes Bowden in the journal article.
"Alternatively the building may represent a shrine or temple on a hilltop close to a Roman road, visible from the road as well as from the town."
Adding another layer to this mystery is the ancient history of Norfolk, where the structure was found.
The local people in the area, who lived here before the Roman conquest, were known as the Iceni. It may have been their descendants who lived at the site and constructed the winged building.
Iceni architecture was quite simple and, as Bowden explained, not as elaborate as this. On the other hand, their religion was intertwined with nature, something which may help explain the windswept hilltop location of the site. "Iceni gods, pre-Roman gods, tend to be associated with the natural sites: the springs, trees, sacred groves, this kind of thing," said Bowden.
The history between the Iceni and the Romans is a violent one. In A.D. 43, when the Romans, under Emperor Claudius, invaded Britain, they encountered fierce resistance from the Iceni. After a failed revolt in A.D. 47, the Iceni became a client kingdom of the empire, with Prasutagus as their leader. When he died, around A.D. 60, the Romans tried to finish the subjugation, in brutal fashion.
"First, his [Prasutagus'] wife Boudicea was scourged, and his daughters outraged. All the chief men of the Iceni, as if Rome had received the whole country as a gift, were stripped of their ancestral possessions, and the king's relatives were made slaves," wrote Tacitus, a Roman writer in The Annals. (From the book, "Complete Works of Tacitus," 1942, edited for the Perseus Digital Library.)
This led Boudicea (more commonly spelled Boudicca) to form an army and lead a revolt against the Romans. At first she was successful, defeating Roman military units and even sacking Londinium. In the end the Romans rallied and defeated her at the Battle of Watling Street. With the Roman victory the rebellion came to an end, and a town named Venta Icenorum was eventually set up on their land.
Read more at Discovery News
Dinosaurs were caring mothers, new discovery finds
The find, made in South Africa, comprised several clutches of fossilised eggs, many containing embryos.
Tiny footprints of the newborn dinosaurs also showed they stayed in the nest long enough to grow to double their size.
The nests, found in Golden Gate Highlands National Park, are 100 million years older than previously found nests and belonged to Massospondylus, a 20-foot ancestor of long-necked "sauropod" dinosaurs that lived 190 million years ago.
At least 10 nests were found at different rock levels, with up to 34 eggs in each, suggesting the dinosaurs returned to the same spot to lay their eggs.
The research appeared in the journal Proceedings of the National Academy of Sciences (PNAS).
"Even though the fossil record of dinosaurs is extensive, we actually have very little fossil information about their reproductive biology, particularly for early dinosaurs," said lead scientist Dr David Evans, curator of vertebrate palaeontology at the Royal Ontario Museum in Canada.
Read more at The Telegraph
INCOMING! Sun Blasts Another CME at Earth and Mars
You may not know it, but there's an epic magnetic battle between the sun and Earth raging over our heads.
On Friday, the sun hurled a coronal mass ejection (CME) at our planet that sparked a strong geomagnetic storm and beautiful aurorae at high latitudes on Sunday. Late last night (EST), the sun unleashed yet another CME... and it's heading our way.
One Fast-Moving CME Coming Right Up
A particularly angry-looking sunspot (1402) on the solar surface erupted with a strong, long-duration M9-class flare Sunday night at around 11 p.m. EST. "M" stands for "medium," but the explosive energy was just shy of an X-class solar flare -- the strongest kind of flare the sun can produce.
This flare was accompanied by a fast-moving CME that jetted from the lower solar atmosphere and is currently heading our way. Space weather researchers predict the CME will impact our planet's magnetosphere tomorrow (Jan. 24). It will then plough into Mars the following day.
Now that yet another CME is approaching, even more spectacular auroral activity can be expected for the next few nights. We are currently undergoing the largest solar radiation storm since 2005.
"SWPC (Space Weather Prediction Center) has issued a Geomagnetic Storm Watch with G2 level storming likely and G3 level storming possible, with the storm continuing into Wednesday, Jan. 25," the NOAA announced on Monday.
Living With A Star
All these flares, CMEs, space radiation and aurorae may sound scary, but it's all a natural consequence of living with a star.
As our sun approaches "solar maximum" -- a time of maximum magnetic activity in its 11-year cycle -- we can expect more solar flares and CMEs, some of which will hit the Earth. The next solar maximum is predicted to occur in 2013, so we have a few more months of solar excitement to come.
Our planet is more than capable of protecting us from a solar radiation battering. We live under a dense atmosphere that can absorb ionizing X-ray radiation from the most powerful of flares. Also, our planet has a natural magnetic "force field" (the magnetosphere) that deflects energetic solar particles from CMEs. The particles are funneled toward the polar regions by the magnetosphere, where they collide with our dense atmosphere, generating beautiful auroral displays.
Although solar radiation may not be a direct threat to life on Earth, it can cause problems with sensitive electronics in space. Communications satellites in geosynchronous orbit, for example, are especially vulnerable to solar radiation -- there will no doubt be some nervous satellite operators watching the NOAA's SWPC website over the coming hours and days.
Increased solar radiation can also affect unprotected astronauts in orbit, although no problems are expected during this event. "The flight surgeons have reviewed the space weather forecasts for the flare and determined that there are no expected adverse effects or actions required to protect the on-orbit crew," NASA spokesman Kelly Humphries told Discovery News.
The latest geomagnetic storm also generated powerful currents through our atmosphere over the weekend. These currents are generated when charged particles from impacting CMEs rain down through our atmosphere. Occasionally, if powerful enough, these currents can knock out power grids on the ground -- a rare scenario that took down Hydro-Québec's power grid during a geomagnetic storm in 1989.
Read more at Discovery News
Jan 22, 2012
Unveiling Malaria's 'Cloak of Invisibility'
The discovery by researchers from the Walter and Eliza Hall Institute of a molecule that is key to malaria's 'invisibility cloak' will help to better understand how the parasite causes disease and escapes from the defences mounted by the immune system.
The research team, led by Professor Alan Cowman from the institute's Infection and Immunity division, has identified one of the crucial molecules that instructs the parasite to employ its invisibility cloak to hide from the immune system, and helps its offspring to remember how to 'make' the cloak.
In research published in the journal Cell Host & Microbe, Professor Cowman and colleagues reveal details about the first molecule found to control the genetic expression of PfEMP1 (Plasmodium falciparum erythrocyte membrane protein 1), a protein that is known to be a major cause of disease during malaria infection.
"The molecule that we discovered, named PfSET10, plays an important role in the genetic control of PfEMP1; an essential parasite protein that is used during specific stages of parasite development for its survival," Professor Cowman said.
"This is the first protein that has been found at what we call the 'active' site, where control of the genes that produce PfEMP1 occurs. Knowing the genes involved in the production of PfEMP1 is key to understanding how this parasite escapes the defenses deployed against it by our immune system," he said.
PfEMP1 plays two important roles in malaria infection. It enables the parasite to stick to cells on the internal lining of blood vessels, which prevents the infected cells from being eliminated from the body. It is also responsible for helping the parasite to escape destruction by the immune system, by varying the genetic code of the PfEMP1 protein so that at least some of the parasites will evade detection. This variation lends the parasite the 'cloak of invisibility' which makes it difficult for the immune system to detect parasite-infected cells, and is part of the reason a vaccine has remained elusive.
Professor Cowman said identification of the PfSET10 molecule was the first step towards unveiling the way in which the parasite uses PfEMP1 as an invisibility cloak to hide itself from the immune system. "As we better understand the systems that control how the PfEMP1 protein is encoded and produced by the parasite, including the molecules that are involved in controlling the process, we will be able to produce targeted treatments that would be more effective in preventing malaria infection in the approximately 3 billion people who are at risk of contracting malaria worldwide," he said.
Each year more than 250 million people are infected with malaria and approximately 655,000 people, mostly children, die. Professor Cowman has spent more than 30 years studying Plasmodium falciparum, the most lethal of the four Plasmodium species, with the aim of developing new vaccines and treatments for the disease.
Read more at Science Daily
Catching a Comet Death On Camera
On July 6, 2011, a comet was caught doing something never seen before: dying a scorching death as it flew too close to the sun. That the comet met its fate this way was no surprise -- but the chance to watch it first-hand amazed even the most seasoned comet watchers.
"Comets are usually too dim to be seen in the glare of the sun's light," says Dean Pesnell at NASA's Goddard Space Flight Center in Greenbelt, Md., who is the project scientist for NASA's Solar Dynamics Observatory (SDO), which snapped images of the comet. "We've been telling people we'd never see one in SDO data."
But an ultra-bright comet, from a group known as the Kreutz comets, overturned all preconceived notions. The comet can clearly be seen moving in over the right side of the sun, disappearing 20 minutes later as it evaporates in the searing heat. The movie is more than just a novelty. As detailed in a paper in Science magazine appearing January 20, 2012, watching the comet's death provides a new way to estimate the comet's size and mass. The comet turns out to have been somewhere between 150 and 300 feet long, with about as much mass as an aircraft carrier.
"Of course, it's doing something very different than what aircraft carriers do," says Karel Schrijver, a solar scientist at Lockheed Martin in Palo Alto, Calif., who is the first author on the Science paper and is the principal investigator of the Atmospheric Imaging Assembly instrument on SDO, which recorded the movie. "It was moving along at almost 400 miles per second through the intense heat of the sun -- and was literally being evaporated away."
Typically, comet-watchers see the Kreutz-group comets only through images taken by coronagraphs, specialized telescopes that view the Sun's fainter outer atmosphere, or corona, by blocking the direct blinding sunlight with a solid occulting disk. On average a new member of the Kreutz family is discovered every three days, with some of the larger members being observed for some 48 hours or more before disappearing behind the occulting disk, never to be seen again. Such "sun-grazer" comets are obviously destroyed when they get close to the sun, but the event had never been witnessed.
The journey to categorizing this comet began on July 6, 2011, after Schrijver spotted a bright comet in a coronagraph image produced by the Solar and Heliospheric Observatory (SOHO). He looked for it in the SDO images and, much to his surprise, he found it. Soon a movie of the comet circulated among comet and solar scientists, eventually making a huge splash on the Internet as well.
Karl Battams, a scientist with the Naval Research Laboratory in Washington, DC, who has extensively observed comets with SOHO and is also an author on the paper, was skeptical when he first received the movie. "But as soon as I watched it, there was zero doubt," he says. "I am so used to seeing comets simply disappearing in the SOHO images. It was breathtaking to see one truly evaporating in the corona like that."
After the excitement, the scientists got down to work. Humans have been watching and recording comets for thousands of years, but finding their dimensions has typically required a direct visit from a probe flying nearby. This movie offered the first chance to measure such things from afar. The very fact that the comet evaporated in a certain amount of time over a certain amount of space means one can work backward to determine how big it must have been before hitting the sun's atmosphere.
The Science paper describes the comet and its last moments as follows: It was traveling some 400 miles per second and made it to within 62,000 miles of the sun's surface before evaporating. Before its final death throes, in the last 20 minutes of its existence when it was visible to SDO, the comet weighed some 100 million pounds and had broken up into a dozen or so large chunks with sizes between 30 and 150 feet, embedded in a "coma" -- the fuzzy cloud surrounding the comet -- approximately 800 miles across, and followed by a glowing tail about 10,000 miles in length.
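These figures can be sanity-checked with a few rough unit conversions. The conversion factors below are exact; the reference mass for a large aircraft carrier (about 90,000 metric tons) is an assumed round number for the order-of-magnitude comparison, not taken from the paper:

```python
# Rough metric conversions of the figures reported for the comet's final
# 20 minutes. The conversion factors are exact; the carrier mass is an
# assumption used only for the order-of-magnitude comparison.
LB_TO_KG = 0.45359237
MILE_TO_KM = 1.609344

comet_mass_kg = 100e6 * LB_TO_KG      # "some 100 million pounds" -> ~45,000 t
speed_km_s = 400 * MILE_TO_KM         # ~400 miles/s -> ~644 km/s
perihelion_km = 62_000 * MILE_TO_KM   # closest approach -> ~100,000 km

carrier_mass_kg = 90_000 * 1_000      # assumed carrier displacement, in kg
ratio = comet_mass_kg / carrier_mass_kg

print(f"comet mass ~ {comet_mass_kg:.2e} kg, "
      f"about {ratio:.1f}x the assumed carrier mass")
```

The comet's roughly 45,000 metric tons lands within a factor of two of a large carrier's displacement, consistent with the article's comparison.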
What is seen in the video is actually the coma and tail of the comet, not the comet's core. Close examination shows that the light in the tail pulses, getting dimmer and brighter over time. The team speculates that the pulsing variations are caused by successive breakups of the individual chunks that made up the comet material as it fell apart in the Sun's intense heat.
"I think this is one of the most interesting things we can see here," says Lockheed's Schrijver. "The comet's tail gets brighter by as much as four times every minute or two. The comet seems first to put a lot of material into that tail, then less, and then the pattern repeats."
Figuring out exactly why this happens is but one of the mysteries remaining about this comet movie. High on the list is the not-so-simple question of why we can see the comet at all. Certainly, a few basic characteristics of this situation help. For one, this comet was big enough to survive long enough to be seen, and its orbit took it right across the face of the Sun. It was also, says Battams, probably one of the 15 brightest comets seen by SOHO, which has observed over 2,100 sun-grazing comets to date. The SDO cameras, in and of themselves, also contributed a great deal: despite being far away and relatively small compared to the sun, the comet showed up clearly on SDO's high-definition imager. This imager, called the Atmospheric Imaging Assembly (AIA), takes a picture every 12 seconds, so the movement of the comet across the face of the sun could be watched continuously. Most other similar instruments capture images only every few minutes, which makes it hard to track the movement of an object that is visible for only 20 minutes.
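The cadence argument can be made concrete with a little arithmetic. The 12-second AIA cadence comes from the article; the 3-minute cadence assumed for the "other similar instruments" is illustrative:

```python
# Frames captured during the comet's ~20-minute transit at each cadence.
# The 12-second AIA cadence is from the article; the 3-minute cadence for
# other instruments is an illustrative assumption for "every few minutes".
visible_s = 20 * 60                   # comet visible for about 20 minutes

aia_cadence_s = 12                    # SDO/AIA: one image every 12 seconds
other_cadence_s = 3 * 60              # assumed cadence of other instruments

aia_frames = visible_s // aia_cadence_s       # enough frames to track motion
other_frames = visible_s // other_cadence_s   # only a handful of frames

print(f"AIA: {aia_frames} frames vs. other instruments: {other_frames}")
```

A hundred frames versus roughly half a dozen is the difference between a smooth movie of the comet's motion and a few isolated snapshots.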
Read more at Science Daily