A new image from the Hubble Space Telescope may look like something from "The Lord of the Rings," but this fiery swirl is actually a planetary nebula known as ESO 456-67. Set against a backdrop of bright stars, the rust-colored object lies in the constellation of Sagittarius (The Archer), in the southern sky.
Despite the name, these ethereal objects have nothing at all to do with planets; this misnomer came about over a century ago, when the first astronomers to observe them only had small, poor-quality telescopes. Through these, the nebulae looked small, compact, and planet-like -- and so were labeled as such.
When a star like the sun approaches the end of its life, it flings material out into space. Planetary nebulae are the intricate, glowing shells of dust and gas pushed outwards from such a star. At their centers lie the remnants of the original stars themselves -- small, dense white dwarf stars.
In this image of ESO 456-67, it is possible to see the various layers of material expelled by the central star. Each appears in a different hue -- red, orange, yellow, and green-tinted bands of gas are visible, with clear patches of space at the heart of the nebula. It is not fully understood how planetary nebulae form such a wide variety of shapes and structures; some appear to be spherical, some elliptical, others shoot material in waves from their polar regions, some look like hourglasses or figures of eight, and others resemble large, messy stellar explosions -- to name but a few.
Read more at Science Daily
Mar 2, 2013
3-D Printing Using Old Milk Jugs
Suppose you could replace "Made in China" with "Made in my garage." Suppose also that every time you polished off a jug of two percent, you would be stocking up on raw material to make anything from a cell phone case and golf tees to a toy castle and a garlic press.
And, you could give yourself a gold medal for being a bona fide, recycling, polar-bear-saving rock star.
Michigan Technological University's Joshua Pearce is working on it. His main tool is open-source 3D printing, which he uses to save thousands of dollars by making everything from his lab equipment to his safety razor.
Using free software downloaded from sites like Thingiverse, which now holds over 54,000 open-source designs, 3D printers make all manner of objects by laying down thin layers of plastic in a specific pattern. While high-end printers can cost many thousands of dollars, simpler open-source units run between $250 and $500 -- and can be used to make parts for other 3D printers, driving the cost down ever further.
"One impediment to even more widespread use has been the cost of filament," says Pearce, an associate professor of materials science and engineering and electrical and computer engineering. Though vastly less expensive than most manufactured products, the plastic filament that 3D printers transform into useful objects isn't free.
Milk jugs, on the other hand, are a costly nuisance, either to recycle or to bury in a landfill. But if you could turn them into plastic filament, Pearce reasoned, you could solve the disposal problem and drive down the cost of 3D printing even more.
So Pearce and his research group decided to make their own recycling unit, or RecycleBot. They cut the labels off milk jugs, washed the plastic, and shredded it. Then they ran it through a homemade device that melts and extrudes it into a long, spaghetti-like string of plastic. Their process is open-source and free for everyone to make and use at Thingiverse.com.
The process isn't perfect. Milk jugs are made of high-density polyethylene, or HDPE, which is not ideal for 3D printing. "HDPE is a little more challenging to print with," Pearce says. But the disadvantages are not overwhelming. His group made its own climate-controlled chamber using a dorm-room refrigerator and an off-the-shelf teddy-bear humidifier and had good results. With more experimentation, the results would be even better, he says. "3D printing is where computers were in the 1970s."
The group determined that making their own filament in an insulated RecycleBot used about 1/10th the energy needed to acquire commercial 3D filament. They also calculated that they used less energy than it would take to recycle milk jugs conventionally.
RecycleBots and 3D printers have all kinds of applications, but they would be especially useful in areas where shopping malls are few and far between, Pearce believes. "Three billion people live in rural areas that have lots of plastic junk," he says. "They could use it to make useful consumer goods for themselves. Or imagine people living by a landfill in Brazil, recycling plastic and making useful products or even just 'fair trade filament' to sell. Twenty milk jugs gets you about 1 kilogram of plastic filament, which currently costs $30 to $50 online."
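As a rough sanity check on those figures, the numbers Pearce quotes imply each recycled jug is worth a dollar or two in displaced filament. A minimal back-of-envelope sketch (using only the 20-jugs-per-kilogram and $30-$50-per-kilogram figures quoted above; everything else is plain arithmetic):

```python
# Back-of-envelope value of recycled HDPE filament, using only the figures
# quoted above: ~20 milk jugs yield ~1 kg of filament, and commercial
# filament sells for roughly $30-$50 per kilogram online.
jugs_per_kg = 20
price_low, price_high = 30.0, 50.0    # USD per kg of commercial filament

value_per_jug_low = price_low / jugs_per_kg
value_per_jug_high = price_high / jugs_per_kg

print(f"Each recycled jug displaces roughly ${value_per_jug_low:.2f}"
      f"-${value_per_jug_high:.2f} worth of commercial filament.")
# -> Each recycled jug displaces roughly $1.50-$2.50 worth of commercial filament.
```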
Read more at Science Daily
Mar 1, 2013
'Defective' Virus Surprisingly Plays Major Role in Spread of Disease
Defective viruses, thought for decades to be essentially garbage unrelated to the transmission of normal viruses, now appear able to play an important role in the spread of disease, new research by UCLA life scientists indicates.
Defective viruses have genetic mutations or deletions that eliminate their essential viral functions. They have been observed for many human pathogens and are generated frequently for viruses that have high mutation rates. However, for some 40 years, it was believed that they were unimportant in natural settings.
In findings published Feb. 28 in the journal PLoS Pathogens, UCLA scientists and their colleagues report for the first time a significant link between a defective virus and an increased rate of transmission of a major disease.
"The idea has always been that defective viruses are either meaningless or detrimental," said James O. Lloyd-Smith, a UCLA assistant professor of ecology and evolutionary biology and the senior author of the research. "We have found the opposite of that -- that the defective virus is actually helping the normal, functional virus. This finding is bizarre and hard to believe, but the data are the data."
"We have shown that the defective virus not only transmits with the virus but increases the transmission of the functional virus," said Ruian Ke, a UCLA postdoctoral scholar in the department of ecology and evolutionary biology and the lead author of the study.
Defective viruses cannot complete their life cycle on their own, but if they're able to get into the same cell with a non-defective virus, they can "hitchhike" with the normal virus and propagate, Lloyd-Smith said. Biologists had thought that defective viruses interfered with normal versions of the virus, "clogging up the gears of viral replication," he said.
The life scientists studied DENV-1, one of four known types of the dengue virus that infect humans. Dengue viruses are transmitted by several species of mosquitoes and cause dengue fever, which is characterized by fever, joint pain and a skin rash similar to measles. Dengue hemorrhagic fever, a more severe form of dengue infection, can cause death. The dengue virus infects between 50 million and 100 million people each year in Southeast Asia, South America, parts of the United States and elsewhere.
The life sciences team -- which also included John Aaskov, a virologist and professor of health at Australia's Queensland University of Technology in Brisbane, and Edward Holmes, a professor of biological sciences at Australia's University of Sydney -- found that the presence of a defective DENV-1 virus may have led to large increases in dengue fever cases in Myanmar in 2001 and 2002, when that country experienced its most severe dengue epidemics on record.
The scientists describe when and how the defective "lineage," or series of very closely related defective DENV-1 viruses, emerged and was transmitted between humans and mosquitoes in Myanmar, as well as what the public health implications are.
For the study, Ke designed a mathematical model to analyze the data to learn how the defective DENV-1 virus interacted with the normal virus. Aaskov and Holmes collected genetic sequences from 15 people in Myanmar sampled over an 18-month period, all of whom were infected with the DENV-1 virus and nine of whom were also infected with the defective version.
Ke discovered that the lineage of defective viruses emerged between June 1998 and February 2001 and that it was spreading in the population until at least 2002. (The following year, the lineage appeared on the South Pacific island of New Caledonia, carried there by either a mosquito or a person.) The scientists analyzed the genetic sequences of both the defective and normal dengue viruses to estimate how long the defective virus had been transmitting in the human population.
"We can see from the gene sequence of the defective version that it is the same lineage and is a continued propagation of the virus," said Lloyd-Smith, who holds UCLA's De Logi Chair in Biological Sciences. "From 2001 to 2002, it went from being quite rare to being in all nine people we sampled that year; everybody sampled who was getting dengue fever was getting the defective version along with the functional virus. It rose from being rare to being very common in just one year."
Most surprisingly, Lloyd-Smith said, the combination of the defective virus with the normal virus was "more fit" than the normal dengue virus alone.
"What we have shown is that this defective virus, which everyone had thought was useless or even detrimental to the fitness of the functional virus, actually appears to have made it better able to spread," he said. "Ruian [Ke] calculated that the defective virus makes it at least 10 percent more transmissible, which is a lot. It was spreading better with its weird, defective cousin tagging along than on its own.
"This study has shown that the functional virus and defective virus travel in unison. The two transmit together in an unbroken chain, and that's not just a matter of getting into the same human or the same mosquito -- they need to get into the same cell inside that human or mosquito in order to share their genes and for the defective version to continue 'hitchhiking.' We are gaining insights into the cellular-level biology of how dengue is infecting hosts. It must be the case that frequently there are multiple infections of single cells.
"Ruian showed the defective virus appeared one to three years before these major epidemics," Lloyd-Smith added. "One could imagine that if you build an understanding of this mechanism, you could measure it, see it coming and potentially get ahead of it."
Might defective viruses play a role in the transmission of influenza, measles and other diseases?
"There are a few signs that this phenomenon may be happening for other viruses," Lloyd-Smith said. "We may be cracking open the book on the possible interactions between the normal, functional viruses and the defective ones that people thought were just dead-ends. These supposedly meaningless viruses may be having a positive impact -- positive for the virus, not for us. There is great variation, year to year, in how large dengue epidemics are in various locations, and we don't understand why. This is a possible mechanism for why there are large epidemics in some years in some places. We need to keep studying this question."
The research points to implications for how mutations might allow a new non-human virus to become a human virus.
"Different strains of a virus with different genetic properties may be interacting more frequently than we thought," said Lloyd-Smith, who studies how ecology, evolution and epidemiology merge to drive the emergence of new pathogens, including new strains with important properties like drug resistance.
Why would a defective virus increase transmission of a disease?
Lloyd-Smith offers two hypotheses. One is that the presence of the defective virus with the functional virus in the same cell makes the functional virus replicate better within the cell by some unknown mechanism. "It might give the virus a bit of flexibility in how it expresses its genes and may make it a bit more fit, a bit better able to reproduce under some circumstances," he said.
A second idea is that the defective virus may be interfering with the disease-causing virus, making the disease less intense; people then have a milder infection, and because they don't feel as sick, they are more likely to go out and spread the disease.
"Normally, biologists test for how well a virus can replicate in a cell, but what we have shown here is even a genotype that cannot replicate in a cell can have an impact on transmission," Ke said.
In conducting the research, Lloyd-Smith and Ke combined genetic sequence analysis with sophisticated mathematical models and bioinformatics.
Genetic sequencing technology has "exploded," Lloyd-Smith said, providing a wealth of data on genetic sequences of pathogens and the evolution of viruses, leading to major new insights into the transmission of viruses.
Read more at Science Daily
New Insights Into Plant Evolution
New research has uncovered a mechanism that regulates the reproduction of plants, providing a possible tool for engineering higher-yielding crops. In a study published today in Science, researchers from Monash University and collaborators in Japan and the US identified for the first time a particular gene that regulates the transition between stages of the life cycle in land plants.
Professor John Bowman, of the Monash School of Biological Sciences, said plants, in contrast to animals, take different forms in alternating generations -- one with one set of genes and one with two sets.
"In animals, the bodies we think of are our diploid bodies -- where each cell has two sets of DNA. The haploid phase of our life cycle consists of only eggs if we are female and sperm if we are male. In contrast, plants have large complex bodies in both haploid and diploid generations," Professor Bowman said.
These two plant bodies often have such different characteristics that until the mid-1800s, when better microscopes allowed further research, they were sometimes thought to be separate species.
Professor Bowman and Dr Keiko Sakakibara, formerly of the Monash School of Biological Sciences and now at Hiroshima University, removed a gene known as KNOX2 from moss. They found that this caused the diploid generation to develop as if it were a haploid, a phenomenon termed apospory. The equivalent mutation in humans would be as if our entire bodies were transformed into either eggs or sperm.
"Our study provides insights into how land plants evolved two complex generations, strongly supporting one theory put forward at the beginning of last century proposing that the complex diploid body was a novel evolutionary invention," Professor Bowman said.
While Professor Bowman's laboratory in the School of Biological Sciences is focused on basic research exploring the evolution and development of land plants, he said there were possible applications for the results as mutations in the gene cause the plant to skip a generation.
One goal in agriculture is apomixis, where a plant produces seeds clonally by skipping the haploid generation and thereby maintaining the characteristics, such as a high yielding hybrid, of the mother plant. Apomixis would mean crops with desirable qualities could be produced more easily and cheaply.
Read more at Science Daily
Did a Comet Really Chill and Kill Clovis Culture?
A comet crashing into the Earth some 13,000 years ago was thought to have spelled doom for a group of early North American people, and possibly to have caused the extinction of ice age beasts in the region.
But the space rock was wrongly accused, according to a group of 16 scientists in fields ranging from archaeology to crystallography to physics, who have offered counterevidence to the existence of such a collision.
"Despite more than four years of trying by many qualified researchers, no unambiguous evidence has been found [of such an event]," Mark Boslough, a physicist at Sandia National Laboratories in New Mexico, told LiveScience.
"That lack of evidence is therefore evidence of absence."
Changing times
Almost 13,000 years ago, a prehistoric Paleo-Indian group known as the Clovis culture suffered its demise at the same time the region underwent significant climate cooling known as the Younger Dryas. Animals such as ground sloths, camels and mammoths were wiped out in North America around the same period.
In 2007, a team of scientists led by Richard Firestone of the Lawrence Berkeley National Laboratory in California suggested these changes were the result of a collision or explosion of an enormous comet or asteroid, pointing to a carbon-rich black layer at a number of sites across North America. The theory has remained controversial, with no sign of a crater that would have resulted from such an impact.
"If a four-kilometer [2.5-mile] comet had broken up over North America only 12.9 thousand years ago, it is certain that it would have left an unambiguous impact crater or craters, as well as unambiguous shocked materials," Boslough said.
Boslough, who has spent decades studying the effects of comet and asteroid collisions, was part of a team that predicted the visibility of plumes from the impact of the 1994 Shoemaker-Levy 9 comet with Jupiter.
"Comet impacts may be low enough in density not to leave craters," Firestone told LiveScience by email.
He also points to independent research by William Napier at Cardiff University in the United Kingdom indicating that such explosions could have come from a debris trail created by Comet Encke, which also would not have left a crater.
A large rock plunging into the Earth's atmosphere may detonate in the air without coming into contact with the ground. Such an explosion occurred in Siberia in the early 20th century; the so-called Tunguska event released more than 1,000 times the energy of the atomic bomb dropped on Hiroshima.
"No crater was formed at Tunguska, or the recent Russian impact," Firestone said.
But Boslough said this math doesn't add up. The object responsible for the Tunguska event was very small, about 130 to 160 feet (40 to 50 meters) wide, while the recent explosion over Russia was smaller, about 56 feet (17 meters). The proposed North American space rock linked with the Clovis demise is estimated to have been closer to 2.5 miles (4 kilometers) across.
"The physics doesn't support the idea of something that big exploding in the air," he said, noting that the original research team doesn't provide any explanation or models for how such a breakup might occur.
If such a large object crashed into the Earth, the resulting crater would be too large to miss, particularly when it was only a few thousand years old, Boslough said. He pointed to Meteor Crater in Arizona, which is three times as old and formed by an object "a million times smaller in terms of explosive energy."
"Meteor Crater is an unambiguous impact crater with unambiguous shocked minerals," Boslough said. If a 2.5-mile comet had broken into pieces, it could have made a million Meteor Craters, he added.
Firestone argued that water or ice could have absorbed the impact, possibly leaving behind no crater.
Boslough disagreed. Even if the comet had plunged into the ice sheet covering much of North America, the crater formed beneath it would still be sizable. "We wouldn't be able to miss that right now — it would be obvious," Boslough said.
The arguments and evidence against the impact were published in an American Geophysical Union monograph in December 2012.
"Extraordinary claims require extraordinary evidence"
Powerful impacts are Boslough's field, but the other 15 scientists working on the paper offered up other sources of counterevidence for the existence of a collision.
"We all independently came to the conclusion that the evidence doesn't support a Younger Dryas impact," Boslough said.
"We all came to this based on our own very narrow piece of the puzzle."
For instance, the initial team studying the event announced the discovery of a carbon-rich black layer, colloquially known as a "black mat," at a number of sites in North America. Containing charcoal, soot and nanodiamonds, such material could be formed by a violent collision.
But this isn't the only possible source.
"The things they call impact markers are not necessarily indicators of high-pressure shocks," Boslough said. "There are other processes that potentially could have formed them."
Speaking of the black mat found in central Mexico, Firestone said, "Boslough is correct that there are other black mats, but these are dated to 12,900 years ago at the time of impact." He points to independent research published this fall that located hundreds to thousands of samples.
However, radiocarbon dating of one of the sites in Gainey, Mich., suggested its samples were contaminated.
Melted rock formations and microscopic diamonds found in a lake in Central Mexico last year were also suggested as evidence for the collision, but Boslough's team disagrees with the age of the sediment layer in the region.
Boslough said the standards for indicating that a strong shock occurred are pretty high in the impact community, and the findings by the original team don't meet them. Nor do they offer up any physical models that propose how an impact or airburst would have occurred — and the ones Boslough has run just don't pan out.
"It's really a stretch to claim that there was this large impact event with no crater and no unambiguous shock material, because large impacts are such rare events," Boslough said.
Read more at Discovery News
The Coolest Thing About Alpha Centauri A
Alpha Centauri is cool. Everyone knows that, right? It’s the closest star system to our own, one of the brightest visible stars in the night sky, and it even has at least one planet. As star systems go, that’s already pretty cool. Ah, but the coolest thing that we’ve discovered about the star itself — at least the main one, Alpha Centauri A — is in its atmosphere. And in this case, I mean that quite literally.
René Liseau and a team of astronomers using ESA’s Herschel Space Observatory have taken detailed observations of Alpha Cen A, and peered into the star’s atmosphere. What they found was a cool layer just above the star’s surface, just like we see in the sun. Except that this is the first time anyone’s observed this phenomenon in another star.
The visible exterior of the sun is no less complex than Earth’s atmosphere, with several different layers each possessing their own different characteristics. The sun’s “surface” is known as the photosphere, so named because it’s the layer where sunlight is emitted out into space. The photosphere itself, reaching a searing temperature of a little under 6,000 degrees Celsius, isn’t actually a true surface per se. It’s simply the part of the sun which becomes opaque to light, preventing us from looking any deeper inside it in visible light. The sun being essentially a roiling magnetized ball of hot plasma powered by nuclear reactions, it doesn’t have any real surface to speak of.
Above the sun’s photosphere lies the chromosphere — the lowest layer of the sun’s atmosphere, extending a few hundred kilometers upwards, and also the coolest with a somewhat lower temperature of around 4,000°C. Actually, the chromosphere is easily seen during a solar eclipse as the region where bright red flares, colored by hydrogen, leap thousands of kilometers in single bounds. Higher up lies the sun’s corona, stretching millions of kilometers out into space, with a temperature peaking at around 5 million degrees. The reason for this sudden huge rise in temperature? Actually, that’s a question which solar physicists are still working on.
The fascinating part is that before now, we had never properly seen this in any other star. Alpha Cen A is quite a logical place to look for these things, mind you. It's quite close to being a mirror image of the sun: similar in mass and temperature, though slightly hotter and older, and, like the sun, a G-type yellow dwarf. Evidently, the two stars also have the same sort of atmospheres.
The sun’s corona is thought to be heated to such extreme temperatures by electromagnetic phenomena such as Alfvén waves, and the magnetic reconnections that cause solar flares. However, a complete theory to explain the high temperature of the corona — and the low temperature of the chromosphere — has yet to be devised. Studying the same phenomenon in other stars like Alpha Centauri A may help us to better understand the cause for the different layers in a star’s atmosphere, and what processes create them.
Read more at Discovery News
Feb 28, 2013
Birth of a Giant Planet? Candidate Protoplanet Spotted Inside Its Stellar Womb
Astronomers using ESO's Very Large Telescope have obtained what is likely the first direct observation of a forming planet still embedded in a thick disc of gas and dust. If confirmed, this discovery will greatly improve our understanding of how planets form and allow astronomers to test the current theories against an observable target.
An international team led by Sascha Quanz (ETH Zurich, Switzerland) has studied the disc of gas and dust that surrounds the young star HD 100546, a relatively nearby neighbour located 335 light-years from Earth. They were surprised to find what seems to be a planet in the process of being formed, still embedded in the disc of material around the young star. The candidate planet would be a gas giant similar to Jupiter.
"So far, planet formation has mostly been a topic tackled by computer simulations," says Sascha Quanz. "If our discovery is indeed a forming planet, then for the first time scientists will be able to study the planet formation process and the interaction of a forming planet and its natal environment empirically at a very early stage."
HD 100546 is a well-studied object, and it has already been suggested that a giant planet orbits about six times further from the star than Earth is from the Sun. The newly found planet candidate is located in the outer regions of the system, about ten times further out [1].
The planet candidate around HD 100546 was detected as a faint blob in the circumstellar disc, revealed thanks to the NACO adaptive optics instrument on ESO's VLT combined with pioneering data analysis techniques. The observations were made using a special coronagraph in NACO, which operates at near-infrared wavelengths and suppresses the brilliant light coming from the star at the location of the protoplanet candidate [2].
According to current theory, giant planets grow by capturing some of the gas and dust that remains after the formation of a star [3]. The astronomers have spotted several features in the new image of the disc around HD 100546 that support this protoplanet hypothesis. Structures in the dusty circumstellar disc, which could be caused by interactions between the planet and the disc, were revealed close to the detected protoplanet. Also, there are indications that the surroundings of the protoplanet are potentially heated up by the formation process.
Adam Amara, another member of the team, is enthusiastic about the finding. "Exoplanet research is one of the most exciting new frontiers in astronomy, and direct imaging of planets is still a new field, greatly benefiting from recent improvements in instruments and data analysis methods. In this research we used data analysis techniques developed for cosmological research, showing that cross-fertilisation of ideas between fields can lead to extraordinary progress."
Although the protoplanet is the most likely explanation for the observations, the results of this study require follow-up observations to confirm the existence of the planet and discard other plausible scenarios. Among other explanations, it is possible, although unlikely, that the detected signal could have come from a background source. It is also possible that the newly detected object might not be a protoplanet, but a fully formed planet that was ejected from its original orbit closer to the star. If the new object around HD 100546 is confirmed to be a forming planet embedded in its parent disc of gas and dust, it will become a unique laboratory in which to study the formation process of a new planetary system.
Read more at Science Daily
Toxic Oceans May Have Delayed Spread of Complex Life
A new model suggests that inhospitable hydrogen-sulphide-rich waters could have delayed the spread of complex life forms in ancient oceans. The research, published online this week in the journal Nature Communications, considers the composition of the oceans 550-700 million years ago and shows that oxygen-poor toxic conditions, which may have delayed the establishment of complex life, were controlled by the biological availability of nitrogen.
In contrast to modern oceans, data from ancient rocks indicates that the deep oceans of the early Earth contained little oxygen, and flipped between an iron-rich state and a toxic hydrogen-sulphide-rich state. The latter toxic sulphidic state is caused by bacteria that survive in low oxygen and low nitrate conditions. The study shows how bacteria using nitrate in their metabolism would have displaced the less energetically efficient bacteria that produce sulphide -- meaning that the presence of nitrate in the oceans prevented build-up of the toxic sulphidic state.
The model, developed by researchers at the University of Exeter in collaboration with Plymouth Marine Laboratory, University of Leeds, UCL (University College London) and the University of Southern Denmark, reveals the sensitivity of the early oceans to the global nitrogen cycle. It shows how the availability of nitrate, and feedbacks within the global nitrogen cycle, would have controlled the shifting of the oceans between the two oxygen-free states -- potentially restricting the spread of early complex life.
Dr Richard Boyle from the University of Exeter said: "Data from the modern ocean suggests that even in an oxygen-poor ocean, this apparent global-scale interchange between sulphidic and non-sulphidic conditions is difficult to achieve. We've shown here how feedbacks arising from the fact that life uses nitrate as both a nutrient, and in respiration, controlled the interchange between two ocean states. For as long as sulphidic conditions remained frequent, Earth's oceans were inhospitable towards complex life."
Read more at Science Daily
DNA's Twisted Communication
Gene expression needs to be finely controlled during embryo development. Fgf8 is one of the key regulatory factors that control how the limbs, the head and the brain grow. Researchers at EMBL have elucidated how Fgf8 in mammal embryos is, itself, controlled by a series of interdependent regulatory elements. Their findings, published February 28 in Developmental Cell, shed new light on the importance of the genome's architecture for gene regulation.
During embryo development, genes are dynamically, and very precisely, switched on and off to confer different properties to different cells and build a well-proportioned and healthy animal. Fgf8 is one of the key genes in this process, controlling in particular the growth of the limbs and the formation of the different regions of the brain. Researchers at EMBL have elucidated how Fgf8 in mammal embryos is, itself, controlled by a series of interdependent regulatory elements.
Fgf8 is controlled by a large number of regulatory elements that are clustered in the same large region of the genome and are interspersed with other, unrelated genes. Both the sequences and the intricate genomic arrangement of these elements have remained very stable throughout evolution, thus proving their importance. By selectively changing the relative positioning of the regulatory elements, the researchers were able to modify their combined impact on Fgf8, and therefore drastically affect the embryo.
"We showed that the surprisingly complex organisation of this genomic region is a key aspect of the regulation of Fgf8," explains François Spitz, who led the study at EMBL. "Fgf8 responds to the input of specific regulatory elements, and not to others, because it sits at a special place, not because it is a special gene. How the regulatory elements contribute to activate a gene is not determined by a specific recognition tag, but by where precisely the gene is in the genome."
Scientists are still looking into the molecular details of this regulatory mechanism. It is likely that the way DNA folds in 3D could, under certain circumstances, bring different sets of regulatory elements in contact with each other and with Fgf8, to trigger or prevent gene expression. These findings highlight a level of complexity of gene regulation that is often overlooked. Regulatory elements are not engaged in a one-to-one relationship with the specific gene that has the appropriate DNA sequence. The local genomic organisation, and 3D folding of DNA, might actually be more important factors that both modulate the action of regulatory elements and put them in contact with their target gene.
Read more at Science Daily
Nut-Cracking Monkeys Show Humanlike Skills
Nut-cracking monkeys don't just use tools. They use tools with skill.
That's the conclusion of a new study that finds similar tool-use strategies between humans and Brazil's bearded capuchin monkeys, which use rocks to smash nuts for snacks. Both monkeys and humans given the nut-smashing task take the time to place the nuts in their most stable position on a stone "anvil," the study found, keeping the tasty morsels from rolling away.
That means the monkeys are able not only to use tools, but to use them with finesse. This ability may be a precursor to humans' ability to adapt tools to different circumstances and to use them smoothly under varying conditions.
"Any one individual can accomodate stones of different sizes, anvils of different angles and material and nuts of different shapes and sizes," said study leader Dorothy Fragaszy, a primate researcher at the University of Georgia, adding, "In fact, some of these nuts people can't crack."
Nut-crackers
Bearded capuchin monkeys were the first non-ape primates to be discovered using tools in the wild. They crack tough nuts by placing them on pitted stone anvils and then hitting them hard with other large rocks.
"They are slamming [the rock] on that nut," Fragaszy told LiveScience. "It's very impressive when you see it."
Fragaszy and her colleagues wanted to get a better idea of how skilled capuchins are at nut-cracking. In particular, they noticed the monkeys have an odd habit of tapping the nuts multiple times against the stone pits before putting them down. Perhaps, they thought, the tapping was a way to tell how stable the nut might be.
To find out, the researchers brought palm nuts to a population of capuchin monkeys in Fazenda Boa Vista in Brazil. The monkeys are wild, but habituated to human presence. Ten of the monkeys "volunteered" for the study by gathering the nuts and cracking them with stones as big as their heads as the researchers videotaped.
Before handing over the nuts, however, the scientists rolled them along the floor to find their flat sides, which they marked with a line. They also marked the other axis of the nut with color-coded pens so they could identify how the monkeys placed the nuts in the video.
Savvy tool use
The results revealed that the monkeys consistently placed the nuts in the most stable position. Out of 302 nut-cracking attempts, 253 started with the line marking the nut's stable axis facing up. Monkeys varied only slightly in their ability to ideally place the nut, doing so between 71 percent and 94 percent of the time depending on the individual.
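Those counts are easy to sanity-check. The quick calculation below (a back-of-the-envelope sketch, not part of the authors' analysis) gives the overall placement rate and a rough binomial confidence interval:

import math
# Reported counts: 253 of 302 nut placements started with the stable axis facing up.
successes, trials = 253, 302
p_hat = successes / trials                     # overall placement rate
se = math.sqrt(p_hat * (1 - p_hat) / trials)   # normal-approximation standard error
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print("overall rate %.1f%% (rough 95%% CI %.1f%%-%.1f%%)" % (100 * p_hat, 100 * low, 100 * high))
# Prints roughly 84%, sitting comfortably inside the 71-94% per-individual range quoted above.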
Next, the researchers ran an identical test with humans. Seven male and seven female volunteers were given nuts and told to crack them with stones, just as the capuchin monkeys do. The humans were blindfolded during the task, because the researchers suspected that the monkeys could place the nuts by feel and wanted to find out if humans could, too.
On average, the humans also placed the nuts in the most stable position, doing so on about 71 percent of tries. Unlike capuchins, however, they didn't knock the nuts against the stone very frequently. Instead, humans tended to roll the nuts around in their hands, feeling their shape. Humans have much larger hands than bearded capuchins, the researchers wrote today (Feb. 27) in the journal PLOS ONE, which could explain the different strategies.
Read more at Discovery News
Feb 27, 2013
Ancient Shoes Turn Up in Egypt Temple
More than 2,000 years ago, at a time when Egypt was ruled by a dynasty of kings of Greek descent, someone, perhaps a group of people, hid away some of the most valuable possessions they had — their shoes.
Seven shoes were deposited in a jar in an Egyptian temple in Luxor: three pairs and a single one. Two pairs were originally worn by children and were only about 7 inches (18 centimeters) long. Using palm fiber string, the children's shoes were tied together within the single shoe (it was larger and meant for an adult) and put in the jar. Another pair of shoes, more than 9 inches (24 cm) long and worn by a limping adult, was also placed in the jar.
The shoe-filled jar, along with two other jars, had been "deliberately placed in a small space between two mudbrick walls," writes archaeologist Angelo Sesana in a report published in the journal Memnonia.
Whoever deposited the shoes never returned to collect them, and they were forgotten, until now.
In 2004, an Italian archaeological expedition team, led by Sesana, rediscovered the shoes. The archaeologists gave André Veldmeijer, an expert in ancient Egyptian footwear, access to photographs that show the finds.
"The find is extraordinary as the shoes were in pristine condition and still supple upon discovery," writes Veldmeijer in the most recent edition of the Journal of the American Research Center in Egypt. Unfortunately after being unearthed the shoes became brittle and "extremely fragile," he added.
Pricey shoes
Veldmeijer's analysis suggests the shoes may have been foreign-made and were "relatively expensive." Sandals were the more common footwear in Egypt, and the style and quality of these seven shoes were such that "everybody would look at you," and "it would give you much more status because you had these expensive pair of shoes," said Veldmeijer, assistant director for Egyptology of the Netherlands-Flemish Institute in Cairo.
The date of the shoes is based on the jar they were found in and the other two jars, as well as the stratigraphy, or layering of sediments, of the area. It may be possible in the future to carbon date the shoes to confirm their age.
Why they were left in the temple in antiquity and not retrieved is a mystery. "There's no reason to store them without having the intention of getting them back at some point," Veldmeijer said in an interview with LiveScience, adding that there could have been some kind of unrest that forced the owners of the shoes to deposit them and flee hastily. The temple itself predates the shoes by more than 1,000 years and was originally built for pharaoh Amenhotep II (1424-1398 B.C.).
Design discoveries
Veldmeijer made a number of shoe design discoveries. He found that the people who wore the seven shoes would have tied them using what researchers call "tailed toggles." Leather strips at the top of the shoes would form knots that would be passed through openings to close the shoes. After they were closed a long strip of leather would have hung down, decoratively, at either side. The shoes are made out of leather, which is likely bovine.
Most surprising was that the isolated shoe had what shoemakers call a "rand," a device that until now was thought to have been first used in medieval Europe. A rand is a folded leather strip that would go between the sole of the shoe and the upper part, reinforcing the stitching as "the upper is very prone to tear apart at the stitch holes," he explained. The device would've been useful in muddy weather when shoes are under pressure, as it makes the seam much more resistant to water.
In the dry (and generally not muddy) climate of ancient Egypt, he said that it's a surprising innovation and seems to indicate the seven shoes were constructed somewhere abroad.
Health discoveries
The shoes also provided insight into the health of the people wearing them. In the case of the isolated shoe, he found a "semi-circular protruding area" that could be a sign of a condition called Hallux Valgus, more popularly known as a bunion. (The 9 Most Bizarre Medical Conditions)
Read more at Discovery News
Cleopatra's Murdered Sister -- Found?
A Viennese archaeologist lecturing in North Carolina this week claims to have identified the bones of Cleopatra's murdered sister or half-sister. But not everyone is convinced.
That's because the evidence linking the bones, discovered in an ancient Greek city, to Cleopatra's sibling Arsinoe IV is largely circumstantial. A DNA test was attempted, said Hilke Thur, an archaeologist at the Austrian Academy of Sciences and a former director of excavations at the site where the bones were found. However, the 2,000-year-old bones had been moved and handled too many times to get uncontaminated results.
"It didn't bring the results we hoped to find," Thur told the Charlotte News-Observer. She will lecture on her research March 1 at the North Carolina Museum of History in Raleigh.
The Ptolemies' bloody history
Arsinoe IV was Cleopatra's younger half-sister or sister, both of them fathered by Ptolemy XII Auletes, though whether they shared a mother is not clear. Ptolemaic family politics were tough: When Ptolemy XII died, he made Cleopatra and her brother Ptolemy XIII joint rulers, but Ptolemy soon ousted Cleopatra. Julius Caesar took Cleopatra's side in the family fight for power, while Arsinoe joined the Egyptian army resisting Caesar and the Roman forces.
Rome won out, however, and Arsinoe was taken captive. She was allowed to live in exile in Ephesus, an ancient Greek city in what is now Turkey. However, Cleopatra saw her half-sister as a threat and had her murdered in 41 B.C.
Fast forward to 1904. That year, archaeologists began excavating a ruined structure in Ephesus known as the Octagon for its shape. In 1926, they revealed a burial chamber in the Octagon, holding the bones of a young woman.
Thur argues that the date of the tomb (sometime in the second half of the first century B.C.) and the illustrious within-city location of the grave point to the occupant being Arsinoe IV herself. Thur also believes the octagonal shape may echo that of the great Lighthouse of Alexandria, one of the Seven Wonders of the Ancient World. That would make the tomb an homage to Arsinoe's hometown, Egypt's ancient capital, Alexandria.
Controversial claim
The skull of the possible murdered princess disappeared in Germany during World War II, but Thur found the rest of the bones in two niches in the burial chamber in 1985. The remains have been debated every step of the way. Forensic analysis revealed them to belong to a girl of 15 or 16, which would make Arsinoe surprisingly young for someone who was supposed to have played a major leadership role in a war against Rome years before her death. Thur dismisses those criticisms.
"This academic questioning is normal," she told the News-Observer. "It happens. It's a kind of jealousy."
In 2009, a BBC documentary, "Cleopatra: Portrait of a Killer," trumpeted the claim that the bones are Arsinoe's. At the time, the most controversial findings centered on the body's lost skull. Measurements and photographs of the incomplete skull remain in historical records and were used to reconstruct the dead woman's face.
From the reconstruction, Thur and her colleagues concluded that Arsinoe had an African mother (the Ptolemies were an ethnically Greek dynasty). That conclusion led to splashy headlines suggesting that Cleopatra, too, was African.
But classicists say the conclusions are shaky.
"We get this skull business and having Arsinoe's ethnicity actually being determined from a reconstructed skull based on measurements taken in the 1920s?" wrote David Meadows, a Canadian classicist and teacher, on his blog rogueclassicism.
Not only that, but Cleopatra and Arsinoe may not have shared a mother.
Read more at Discovery News
Spinning Black Hole Observed for the First Time
Astronomers have directly measured the spin of a black hole for the first time by detecting the mind-bending relativistic effects that warp space-time at the very edge of its event horizon -- the point of no return, beyond which even light cannot escape.
By monitoring X-ray emissions from iron ions (iron atoms with some electrons missing) trapped in the black hole’s accretion disk, astronomers used the rapidly rotating inner edge of the disk of hot material to obtain direct information about how fast the black hole is spinning.
And by doing this, a long-standing controversy surrounding black hole studies has been laid to rest.
The spinning supermassive black hole lives in the heart of the nucleus of NGC 1365, a nearby galaxy some 56 million light-years away.
X-Ray Fireworks
Accretion disks consist of any material that has drifted too close to the gravitational dominance of a black hole. Gas, dust, even stars succumb to the force inside an active galactic nucleus (AGN). Some material will feed the black hole, whereas a surplus of matter is ejected from the black hole’s poles, blasting into space as jets of material traveling close to the speed of light, generating an intense cosmic fireworks display.
AGNs can be dazzling, shining bright in X-ray radiation -- an indication that the supermassive black hole lurking inside is feeding.
Now, astronomers using data from NASA’s brand-new Nuclear Spectroscopic Telescope Array (NuSTAR) -- which was launched into Earth orbit in June 2012 -- and the European observatory XMM-Newton have used this X-ray radiation as a tool to directly measure the spin of NGC 1365’s black hole.
“The accretion disk isn’t hot enough to generate X-rays itself, these X-rays generated in the jet shine down on the disk and reflect off of it, exciting the iron,” Fiona Harrison, professor of physics and astronomy at the California Institute of Technology, Pasadena, Calif., and principal investigator of the NuSTAR mission, told Discovery News. “That’s what enables us to see the accretion disk -- we’re seeing reflected X-rays off the disk.”
“We selected (NGC 1365) because it is bright in X-rays, and previous observations with less powerful satellites suggested that this could be a good candidate for such a study,” said astronomer Guido Risaliti, of the Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass., and the Italian National Institute for Astrophysics, and lead author of research published today (Feb. 27) in the journal Nature.
The environment near the black hole’s event horizon is extreme; the fabric of space-time itself is being warped by the spin of the black hole, dragging the inner edge of the accretion disk with it. As the disk of material rapidly rotates -- like the vortex of a water funnel down a plughole -- it is still emitting X-rays.
The emission from this component of the accretion disk should therefore be stretched, or redshifted, providing astronomers with a means of quantifying how fast the black hole is spinning.
“We’re actually using the rotation of the disk to measure the spin of the black hole,” Harrison added.
Astrophysical Controversy
However, until now, measurements of the X-ray emission spectrum have been limited to low energies and there were two explanations for the broadening (red-shifting) of the iron emission spectrum.
One theory was that the X-rays were being red-shifted by the extreme relativistic environment near the event horizon of a spinning black hole. The other theory was that the X-rays were being obscured by gas blocking our view of the central black hole, adding complexity to the detected X-ray signal. Through lack of convincing evidence supporting either model, an astrophysical controversy erupted.
NuSTAR, which detects more energetic X-ray emissions, has now definitively ended this controversy. The orbiting X-ray observatory has detected previously undetectable high-energy X-rays and provided conclusive evidence that NGC 1365’s black hole is spinning -- the line broadening is therefore not caused by absorption by intervening clouds of gas.
“It was my expectation, and the main scientific rationale for the project. Of course many colleagues would rather expect absorption as the right explanation ... but the whole project has been conceived to solve this puzzle,” said Risaliti.
“The interesting thing, especially in the system we looked at is that we know there’s partial absorbing clouds -- we see them going in front of the (galactic) nucleus causing time-variable absorption ... it’s not unreasonable to suppose that could be distorting the spectrum in a way that gives you broad lines,” said Harrison. “But when you add the NuSTAR data that can just be ruled out. Yes, there is absorption, but it’s not explaining the iron line.
“What they tell us is that the black hole HAS to be spinning. Now there’s a maximum rate a black hole can spin given by general relativity and that is telling us that this black hole is spinning close to that rate.”
According to the research by Risaliti, Harrison and their colleagues, NGC 1365’s black hole is spinning at a breakneck 84 percent of its theoretical maximum rate.
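To put that figure in context, here is a rough sketch (assuming the quoted 84 percent refers to the dimensionless spin parameter a = Jc/GM^2, and a prograde disk) of the standard Kerr-metric formula for the innermost stable circular orbit, which sets how close the disk's inner edge can sit to the black hole:

# Innermost stable circular orbit (ISCO) radius of a Kerr black hole, in units of
# the gravitational radius GM/c^2 (Bardeen, Press & Teukolsky 1972). Illustrative
# sketch only; assumes "84 percent of maximum" means dimensionless spin a = 0.84.
def isco_radius(a):
    z1 = 1 + (1 - a**2) ** (1/3) * ((1 + a) ** (1/3) + (1 - a) ** (1/3))
    z2 = (3 * a**2 + z1**2) ** 0.5
    return 3 + z2 - ((3 - z1) * (3 + z1 + 2 * z2)) ** 0.5  # minus sign: prograde orbit

print(isco_radius(0.0))    # ~6.0: a non-spinning black hole's disk edge sits at 6 GM/c^2
print(isco_radius(0.84))   # ~2.7: rapid spin drags the inner edge far deeper in
print(isco_radius(1.0))    # ~1.0: the theoretical maximum spin

The closer that inner edge sits, the deeper the gravitational well the iron-line X-rays have to climb out of, which is what broadens and redshifts the line the satellites measured.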
Shedding Light on Black Hole Evolution
So the first detection of a black hole’s spin has been made, providing observational evidence for something that, until now, has been purely theoretical or inferred. Why is this important?
“Well, first off it’s just cool that we’re seeing the effects of general relativity in the ‘strong field’ regime. Most tests of general relativity are done in the ‘weak field,’” said Harrison, referring to the fact that most tests of general relativity are done in “weak” gravitational fields like Earth’s. NuSTAR is probing the edge of the most extreme gravitational field possible.
Read more at Discovery News
Hidden Moons Lurk in Saturn's Rings
Like Jupiter, Saturn is orbited by a large extended family of moons — 62, at last count — ranging in size from the gigantic 3,200-mile-wide Titan, wrapped in thick clouds, to the barely 2-mile-wide Methone, smooth as a river rock. But there are even more moons in the ringed planet’s retinue, tiny worlds embedded inside the icy rings themselves. Even with the Cassini spacecraft they are nearly impossible to see… until they give themselves away with their shining “propellers.”
In the image above we get a view across 9,000 miles of Saturn’s A ring, the outermost of the main ring structures, with Saturn itself well off frame to the left. Inside one of the darker segments of the rings, at lower left center, are two short, bright streaks — one pointing up, one pointing down. This is what the Cassini science team calls a “propeller,” a clumping of ring particles in front of and behind a tiny moonlet located between the two “blades.”
The moonlet is too small to be resolved here directly — it’s less than half a mile across — but its gravity is still strong enough to affect the tiny particles that comprise Saturn’s rings. Because those particles are made mostly of water ice, the more they gather together the more sunlight they reflect — highlighting the moonlet’s location for Cassini.
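For a sense of scale, the moonlet's gravitational reach can be estimated with the Hill-radius formula, the distance inside which its gravity wins out over Saturn's tide. The numbers below are assumptions chosen purely for illustration (an icy moonlet about 400 m in radius, a density of roughly 500 kg per cubic meter, and an orbit about 130,000 km from Saturn's center), not measured values:

import math
# Hill radius of a small ring-embedded moonlet: r_H = a * (m / (3 * M))**(1/3).
# All moonlet properties below are assumed, illustrative values.
saturn_mass = 5.68e26        # kg
orbit_radius = 1.30e8        # m (~130,000 km, inside the A ring)
moonlet_radius = 400.0       # m ("less than half a mile across")
moonlet_density = 500.0      # kg/m^3, assuming porous ice
moonlet_mass = (4 / 3) * math.pi * moonlet_radius**3 * moonlet_density
hill_radius = orbit_radius * (moonlet_mass / (3 * saturn_mass)) ** (1 / 3)
print("moonlet mass ~ %.1e kg, Hill radius ~ %.0f m" % (moonlet_mass, hill_radius))
# A sphere of influence only a few hundred meters across -- the kilometers-long
# propeller blades grow as the perturbed particles shear away downstream in the ring.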
Depending on the angle of sunlight, propellers can also appear darker than the surrounding rings.
This particular propeller is nicknamed “Bleriot,” after the French aviator who made the first airplane flight across the English Channel in 1909. (The Cassini imaging team has fittingly decided to name propellers — albeit informally — after famous aviators.) First observed by Cassini in 2005, Bleriot has been repeatedly revisited, most recently in this observation from Nov. 11, 2012.
Read more at Discovery News
Feb 26, 2013
Big Birds Sing Like Barry White
Some feathered crooners may advertise their size to females by hitting the low notes. Ornithologists at the Max Planck Institute found that only bigger-bodied birds belt out the bass.
The physical size of some birds may put a limit on the frequency of the birds’ songs, according to a study published in PLOS ONE. Since only larger males hit lower notes, females may be able to use deeper voices as a reliable measure of a male’s size. Size matters to some songbird species, with females preferring larger males, so vocal limitations could affect some birds’ love lives.
The songs of purple-crowned fairy-wrens, Malurus coronatus coronatus, hit a range of notes. However, the study found that in some songs, larger body size was related to lower-pitched singing ability.
Further study will be needed to prove a relationship among body size, singing frequency and sexual success in fairy-wrens. The authors suggested that body size may be just one of many characteristics advertised by fairy-wrens’ songs.
The authors also noted that low-frequency singing ability may have resulted from good health as the male fairy-wrens grew up. Better health may have allowed better development of singing structures in the birds’ anatomies. The same healthy conditions could have also resulted in larger size. So size and singing would be correlated, but not causally related.
From Discovery News
800-Foot-Deep Canyon Found Under Red Sea
Watch your step, Moses. Anyone attempting to walk across the floor of the Red Sea might fall into the 820-foot-deep canyon that was recently discovered by a British navy ship.
A 3-D map of the canyon was produced using a multi-beam echo sounder attached to the hull of the HMS Enterprise, according to a UK Ministry of Defense news report. The mapping equipment functions like sonar. It bounces sound waves off the ocean floor and uses the echoes to create an image of the bottom of the sea.
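The depth measurement itself comes down to timing. A minimal sketch (using an assumed nominal sound speed of 1,500 m/s for seawater and ignoring the sound-velocity corrections a real survey applies) converts the two-way travel time of each ping into a depth:

# Converting an echo sounder's two-way travel time into depth.
SOUND_SPEED = 1500.0      # m/s, assumed nominal speed of sound in seawater
FEET_PER_METER = 3.281
def depth_from_echo(two_way_travel_time_s):
    # The ping travels down and back, so halve the total travel time.
    return SOUND_SPEED * two_way_travel_time_s / 2.0
depth_m = depth_from_echo(0.333)   # an echo returning after about a third of a second
print("%.0f m (~%.0f ft)" % (depth_m, depth_m * FEET_PER_METER))   # roughly 250 m, ~820 ft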
“These features could be the result of ancient rivers scouring through the rock strata before the Red Sea flooded millennia ago,” said Commanding Officer of HMS Enterprise, Derek Rae, in the UK Ministry of Defense news report.
“Some may be far younger — and still in the process of being created by underwater currents driven by the winds and tidal streams as they flow through this area of the Red Sea, carving their way through the soft sediment and being diverted by harder bed rock. Or there is always the possibility that they are a combination of the two.”
The HMS Enterprise found the hidden canyon after leaving the Egyptian port of Safaga, 250 miles south of Suez. The ship will continue mapping the region until summer, as well as aid in anti-piracy efforts.
Read more at Discovery News
Lack of Sleep Messes With Your Genes
Lack of sleep has been blamed for everything from poor driving to memory problems. Now, researchers think they’ve started to pinpoint why: week-long sleep studies of people resting for 10 hours a night vs. six hours showed significant changes in gene activity.
When sleep was restricted to under six hours a night, researchers noted that 444 genes showed suppressed activity, and 267 genes showed more activity. Those genes control everything from the body’s immune system to its reaction to stress.
“The surprise for us was that a relatively modest difference in sleep duration leads to these kinds of changes,” study author Professor Derk-Jan Dijk, director of the Surrey Sleep Research Centre at Surrey University, told The Guardian. “It’s an indication that sleep disruption or sleep restriction is doing more than just making you tired.”
The team’s study, published in the Proceedings of the National Academy of Sciences, analyzed the sleep patterns of 26 healthy men and women who spent two separate weeks in a lab where they were allowed to spend 10 hours in bed one week, and six the other week. During the longer nights, the participants averaged 8.5 hours of sleep. They averaged 5 hours, 42 minutes during the shorter nights.
Genes that govern the body’s wake-sleep cycles were dramatically affected, meaning that some sleep loss could provoke ongoing sleep issues.
One expert not involved in the study urged caution, however.
“We must be careful not to generalize such findings to, say, habitual six-hour sleepers who are happy with their sleep,” Jim Horne, professor of psychophysiology at Loughborough University’s Sleep Research Center, told The Guardian.
Read more at Discovery News
Russian Meteor Likely An Apollo Asteroid Chunk
On Feb. 15, the Urals region of Russia played host to a noisy cosmic visitor. A meteor entered the atmosphere and broke up over the city of Chelyabinsk, generating powerful shockwaves that slammed into the city, blowing out windows and causing some 1,500 injuries and millions of dollars’ worth of damage. Before it collided with Earth, however, the Chelyabinsk space rock was a 10,000-ton meteoroid, and astronomers now think they know where it came from.
Helped by the extensive coverage of eyewitness cameras, CCTV footage and a fortuitous observation made by the Meteosat-9 weather satellite, Jorge Zuluaga and Ignacio Ferrin of the University of Antioquia in Medellin, Colombia, have been able to reconstruct the most likely orbit of the space rock around the sun before the Earth got in its way. What’s more, they know what type of space rock it was.
Using video evidence (most of which had precise timestamps), the location, speed and altitude of the fireball could be estimated. Add to that the location where a suspected meteorite fragment punched a hole into the ice of Lake Cherbakul, and it’s a case of using some simple math to learn the characteristics of the object. But tracing the meteoroid’s path back out into space and assembling its orbital trajectory around the sun wasn’t so straightforward, according to the arXiv blog.
However, this analysis hinges on one important factor: “Assuming that the hole in the ice sheet of Lake Cherbakul was produced by a fragment of the meteoroid is also a very important hypothesis of this work. More importantly, our conclusions relies strongly onto assume that the direction of the trajectory of the fragment responsible for the breaking of the ice sheet in the Lake, is essentially the same as the direction of the parent body. It could be not the case. After the explosion and fragmentation of the meteoroid fragments could acquire different velocities and fall affecting areas far from the region where we expect to find,” the researchers write in their paper submitted to the arXiv pre-print service. So far, no meteorite has been recovered from Lake Cherbakul.
“According to our estimations, the Chelyabinski meteor started to brighten up when it was between 32 and 47 km up in the atmosphere … The velocity of the body predicted by our analysis was between 13 and 19 km/s (relative to the Earth) which encloses the preferred figure of 18 km/s assumed by other researchers,” they add.
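Those figures also allow a quick sanity check of the energy involved. Here is a rough order-of-magnitude sketch, using the roughly 10,000-ton mass quoted above and an assumed 18 km/s entry speed (the energy actually released depends on the details of the breakup):

# Back-of-the-envelope kinetic energy of the Chelyabinsk meteoroid.
mass_kg = 10_000 * 1_000              # the ~10,000 metric tons quoted above
speed_m_s = 18_000                    # assumed ~18 km/s entry speed
kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
kilotons_tnt = kinetic_energy_j / 4.184e12   # 1 kiloton of TNT = 4.184e12 joules
print("%.2e J, about %.0f kilotons of TNT" % (kinetic_energy_j, kilotons_tnt))
# A few hundred kilotons -- tens of times the yield of the Hiroshima bomb.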
Armed with this wealth of data farmed from various eyewitness sources, they used a piece of software called NOVAS (an acronym for “Naval Observatory Vector Astrometry Software”) developed by the U.S. Naval Observatory (USNO). This sophisticated program was able to consider the gravitational influence of the moon, plus eight other bodies in the solar system, ultimately helping Zuluaga and Ferrin track where the object was before impact.
Read more at Discovery News
Feb 25, 2013
Mercury's Ancient Magma Oceans
Mercury is a funny little world. Made mostly of iron (much of which is solid), with a scuffed, pockmarked surface, it might appear to have more in common with a cannonball than a planet. A barren place now, with a silent surface and no atmosphere save for a whisper of oxygen and gaseous sodium atoms, Mercury was once much more active. In its past, it even had oceans.
Oh, except that those oceans were actually made of molten rock.
Over 4 billion years ago, shortly after it had formed, Mercury was a fiery little place. A vast and beautifully deadly ocean of magma may have blanketed the planet. Perhaps created by the violent collisions that happened as the solar system’s planets were forming, or maybe by the much higher temperatures that prevailed back when the sun was still contracting, the young Mercury would have stayed partially molten for the first few million years of its life.
NASA’s MESSENGER probe has been in orbit around our solar system’s smallest planet for nearly two years now, and it’s been sending back a wealth of data for planetary scientists to pick at. One particular instrument carried by MESSENGER is an X-ray fluorescence spectrometer, able to give us a glimpse at the individual chemical elements locked up in Mercury’s surface rocks.
Interestingly, the surface of the little world is made up of two distinct rock compositions, giving the people analyzing that data a puzzle to solve. What geological processes could lead to two very different types of rock on what appears to be a very continuous planetary surface?
If you know the combination of elements and compounds in any rock, even one from another planet, it’s fairly easy to make a fake version from lab chemicals. Grove and company -- a team led by MIT geologist Timothy Grove -- first created fake Mercurian rock, then subjected it to a range of pressures and temperatures, with the aim of figuring out what geological processes the real rocks might have experienced as they formed.
Rocks are made mostly from crystalline oxides. On Earth, there’s far more oxygen in the planet’s crust than there is in the atmosphere, chemically bound to other elements like silicon, magnesium, and aluminum; rocks behave the same way, chemically, on all four of the inner planets. Using the data from MESSENGER’s X-ray spectrometers, the team analyzed Mercury’s surface composition and determined the ratios of elements present in the planet’s rocky crust, in order to find out precisely what oxides it’s made of. With that knowledge, all they needed to do was make a copy.
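That last step, converting measured element abundances into the oxides the rock is made of, is routine geochemical bookkeeping: each element's weight fraction is scaled by the molar-mass ratio of its oxide. Below is a minimal Python sketch of the idea; the element abundances are made-up illustrative numbers, not actual MESSENGER measurements.

```
# Element-to-oxide conversion: oxide wt% = element wt% * M(oxide) / (n * M(element)),
# where n is the number of metal atoms per oxide formula unit.
CONVERSION = {
    "Si": ("SiO2", 60.08 / 28.09),           # factor ~2.14
    "Mg": ("MgO", 40.30 / 24.31),            # ~1.66
    "Al": ("Al2O3", 101.96 / (2 * 26.98)),   # ~1.89
    "Ca": ("CaO", 56.08 / 40.08),            # ~1.40
    "Fe": ("FeO", 71.84 / 55.85),            # ~1.29
}

def to_oxides(element_wt_pct):
    """Convert element weight percents into the equivalent oxide weight percents."""
    return {CONVERSION[el][0]: round(wt * CONVERSION[el][1], 1)
            for el, wt in element_wt_pct.items()}

# Illustrative numbers only -- not MESSENGER data.
print(to_oxides({"Si": 25.0, "Mg": 14.0, "Al": 6.0, "Ca": 5.0, "Fe": 2.0}))
```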
Grove’s team then subjected their synthetic Mercury rock to a range of temperatures. As some minerals started to crystallize out of the synthetic magma, the composition of the remaining molten rock began to change. Put simply, an ancient, planet-wide ocean of magma was the simplest explanation for the two different types of rock on Mercury’s surface. And I truly do mean ancient. To quote Grove directly: “The crust is probably more than 4 billion years old, so this magma ocean is a really ancient feature.”
Read more at Discovery News
Red Planet Mars Not So Red Beneath Surface
The Red Planet's signature color is only skin deep.
NASA's Mars rover Curiosity drilled 2.5 inches into a Red Planet outcrop called "John Klein" earlier this month, revealing rock that's decidedly gray rather than the familiar rusty orange of the Martian surface.
"We're sort of seeing a new coloration for Mars here, and it's an exciting one to us," Joel Hurowitz, sampling system scientist for Curiosity at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif., told reporters Wednesday (Feb. 20).
Mars gets its red coloration from a surface layer of dust that has undergone a rusting process, during which iron was oxidized.
Curiosity's hammering drill allows scientists to peer beneath that dusty veneer for the first time ever, and the early views at John Klein -- where the rover performed its first full-up drilling and sample-collection operation -- are intriguing, rover team members said.
The gray powder Curiosity collected "may preserve some indication of what iron was doing in these samples without the effect of some later oxidative process that would've rusted the rocks into this orange color that is sort of typical of Mars," Hurowitz said.
Curiosity landed inside the Red Planet's huge Gale Crater last August, kicking off a two-year prime mission to determine whether the area could ever have supported microbial life. The 1-ton rover carries 10 different science instruments and 17 cameras, along with other tools such as its arm-mounted, rock-boring drill.
The drill was the last of Curiosity's gear to get vetted and tested on the Red Planet, and the rover team is thrilled that its first run went so smoothly.
"It's a real big turning point for us," said Curiosity lead scientist John Grotzinger, a geologist at Caltech in Pasadena.
Snagging rock powder from the depths of John Klein -- which shows signs of long-ago exposure to liquid water -- also cements Curiosity's place in the history books, rover team members said.
Read more at Discovery News
Lost Continent Discovered Beneath Indian Ocean
A small, sunken continent was recently discovered beneath the Indian Ocean. The ancient mini-continent, called Mauritia, lies beneath the lava flows that created the islands of Reunion and Mauritius.
The lost continent dates back to when the early Earth’s supercontinents, Laurasia and Gondwana, were shattering into the more familiar geography that we know today. Mauritia was once part of the chunk of Gondwana that gradually split into Madagascar, India, Australia and Antarctica, beginning approximately 170 million years ago.
The micro-continent later broke away from Madagascar between 83.5 and 61 million years ago. The mini-continent was shredded as it passed over mid-ocean ridges. Lava eruptions then covered the sunken continent.
Volcanic eruptions on the island of Mauritius brought fragments of the lost continent to the surface. The fragments were crystals known as zircons, dated to between 660 million and 1,970 million years old -- far older than the rock making up the overlying crust and volcanic islands. This suggested that the rock beneath the crust was actually a part of the ancient mini-continent, according to the study documenting the discovery in the journal Nature Geoscience.
Read more at Discovery News
James Cameron Deep-Sea Dive Reveals New Species
When movie director James Cameron dove to the bottom of the Pacific Ocean early last year, he and his team captured hours of video of strange new deep sea life. Last Friday, a researcher gave a peek into this bizarre new world, presenting preliminary findings based on analysis of reams of footage from the so-called Deepsea Challenge expedition.
One of the strangest new finds is a sea cucumber seen in the Challenger Deep, the deepest spot in the world's oceans at approximately 36,000 feet (11 kilometers) below the surface, said Natalya Gallo, a doctoral student and researcher at the Scripps Institution of Oceanography at the University of California, San Diego. This new sea cucumber is almost certainly a new species, and lives in large numbers at this deep spot, Gallo told OurAmazingPlanet.
The research has likely revealed a second previously unknown species, a type of squid worm, Gallo said. These wormy animals are several inches long and live in the mid-water, above the sea floor, she said. "When you first see it, it looks like a squid because it has all of these modified feeding appendages," she said. Until actual physical specimens are collected, however, the new species can't be definitively described, she added.
Video has also revealed the presence of giant single-celled amoebas called xenophyophores — bizarre creatures that are among the biggest cells known to humans — near the Challenger Deep, Gallo said.
She also examined video from expeditions to the nearby New Britain Trench and Ulithi, which has revealed a diverse mixture of life. In the New Britain Trench, Gallo noted the presence of hundreds of stalked anemones growing on pillow lavas at the bottom of the trench. The seafloor here is dominated by the spoon worm, an animal that burrows and licks organic matter off the sea bottom with its tonguelike proboscis. Ulithi, on the other hand, was home to atolls with a high biodiversity of sponges and corals, Gallo said.
Read more at Discovery News
Feb 24, 2013
Graphene: A Material That Multiplies the Power of Light
Bottles, packaging, furniture, car parts... all made of plastic. Today we find it difficult to imagine our lives without this key material that revolutionized technology over the last century. There is widespread optimism in the scientific community that graphene will provide similarly paradigm-shifting advances in the decades to come. Mobile phones that fold, transparent and flexible solar panels, extra-thin computers... the list of potential applications is endless.
The most recent discovery, published in Nature Physics by researchers at the Institute of Photonic Sciences (ICFO) in collaboration with the Massachusetts Institute of Technology (USA), the Max Planck Institute for Polymer Research (Germany), and Graphenea S.L. (Donostia-San Sebastian, Spain), demonstrates that graphene is able to convert a single absorbed photon into multiple electrons that could drive electric current (excited electrons) -- a very promising discovery that makes graphene an important alternative material for light detection and harvesting technologies, which are now based on conventional semiconductors like silicon.
"In most materials, one absorbed photon generates one electron, but in the case of graphene, we have seen that one absorbed photon is able to produce many excited electrons, and therefore generate larger electrical signals" explains Frank Koppens, group leader at ICFO. This feature makes graphene an ideal building block for any device that relies on converting light into electricity. In particular, it enables efficient light detectors and potentially also solar cells that can harvest light energy from the full solar spectrum with lower loss.
The experiment consisted of sending a known number of photons with different energies (different colors) onto a monolayer of graphene. "We have seen that high energy photons (e.g. violet) are converted into a larger number of excited electrons than low energy photons (e.g. infrared). The observed relation between the photon energy and the number of generated excited electrons shows that graphene converts light into electricity with very high efficiency. Even though it was already speculated that graphene holds potential for light-to-electricity conversion, it now turns out that it is even more suitable than expected!" explains Tielrooij, a researcher at ICFO.
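A rough way to see why photon color matters here: a violet photon simply carries more energy than an infrared one, so if the absorbed energy is shared among secondary carriers, the violet photon can excite more of them. The short Python sketch below computes the photon energy from the wavelength (E = hc/λ) and divides by an energy-per-carrier figure that is purely an illustrative assumption; the actual measured relation is in the Nature Physics paper itself.

```
# Photon energy E = h*c / wavelength, compared for violet vs. infrared light.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

# Illustrative energy cost per extra excited electron -- an assumption,
# not a figure taken from the paper.
ENERGY_PER_CARRIER_EV = 0.7

for name, wavelength_nm in [("violet", 400), ("infrared", 1000)]:
    e_ev = photon_energy_ev(wavelength_nm)
    print(f"{name:8s} {wavelength_nm:4d} nm: {e_ev:.2f} eV "
          f"-> roughly {e_ev / ENERGY_PER_CARRIER_EV:.0f} excited electrons")
```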
Although there are some issues for direct applications, such as graphene's low absorption, graphene holds the potential to cause radical changes in many technologies that are currently based on conventional semiconductors. "It was known that graphene is able to absorb a very large spectrum of light colors. However now we know that once the material has absorbed light, the energy conversion efficiency is very high. Our next challenge will be to find ways of extracting the electrical current and enhance the absorption of graphene. Then we will be able to design graphene devices that detect light more efficiently and could potentially even lead to more efficient solar cells," concludes Koppens.
Read more at Science Daily
How Dinosaurs Grew The World's Longest Necks
How did the largest of all dinosaurs evolve necks longer than any other creature that has ever lived? One secret: mostly hollow neck bones, researchers say.
The largest creatures to ever walk the Earth were the long-necked, long-tailed dinosaurs known as the sauropods. These vegetarians had by far the longest necks of any known animal. The dinosaurs' necks reached up to 50 feet (15 meters) in length, six times longer than that of the current world-record holder, the giraffe, and at least five times longer than those of any other animal that has lived on land.
"They were really stupidly, absurdly oversized," said researcher Michael Taylor, a vertebrate paleontologist at the University of Bristol in England. "In our feeble, modern world, we're used to thinking of elephants as big, but sauropods reached 10 times the size elephants do. They were the size of walking whales."
Amazing Necks
To find out how sauropod necks could get so long, scientists analyzed other long-necked creatures and compared sauropod anatomy with that of the dinosaurs' nearest living relatives, the birds and crocodilians.
"Extinct animals — and living animals, too, for that matter — are much more amazing than we realize," Taylor told LiveScience. "Time and again, people have proposed limits to possible animal sizes, like the five-meter (16-foot) wingspan that was supposed to be the limit for flying animals. And time and again, they've been blown away. We now know of flying pterosaurs with 10-meter (33-foot) wingspans. And these extremes are achieved by a startling array of anatomical innovations."
Among living animals, adult bull giraffes have the longest necks, capable of reaching about 8 feet (2.4 m) long. No other living creature exceeds half this length. For instance, ostriches typically have necks only about 3 feet (1 m) long.
When it comes to extinct animals, the largest land-living mammal of all time was the rhino-like creature Paraceratherium, which had a neck maybe 8.2 feet (2.5 m) long. The flying reptiles known as pterosaurs could also have surprisingly long necks, such as Arambourgiania, whose neck may have exceeded 10 feet (3 m).
The necks of the Loch Ness Monster-like marine reptiles known as plesiosaurs could reach an impressive 23 feet (7 m), probably because the water they lived in could support their weight. But these necks were still less than half the lengths of the longest-necked sauropods.
Sauropod Secrets
In their study, Taylor and his colleagues found that the neck bones of sauropods possessed a number of traits that supported such long necks. For instance, air often made up 60 percent of the volume of these animals' neck vertebrae, some of which were as light as birds' bones, making it easier to support long chains of them. The muscles, tendons and ligaments were also positioned around these vertebrae in a way that helped maximize leverage, making neck movements more efficient.
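To get a feel for how much that matters, compare a pneumatic vertebra with a solid one of the same size: if 60 percent of the volume is air, the bone weighs less than half as much. The Python sketch below does the arithmetic; the cylinder dimensions and bone density are illustrative assumptions, not measurements from the study.

```
import math

BONE_DENSITY = 1900.0  # kg/m^3, a typical figure for compact bone (assumption)

def vertebra_mass_kg(length_m, radius_m, air_fraction):
    """Mass of a cylinder-shaped vertebra with a given fraction of its volume as air."""
    volume_m3 = math.pi * radius_m ** 2 * length_m
    return volume_m3 * (1.0 - air_fraction) * BONE_DENSITY

# Illustrative 60 cm long, 15 cm radius cervical vertebra.
solid = vertebra_mass_kg(0.6, 0.15, air_fraction=0.0)
pneumatic = vertebra_mass_kg(0.6, 0.15, air_fraction=0.6)
print(f"solid: {solid:.0f} kg, 60% air: {pneumatic:.0f} kg")
```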
In addition, the dinosaurs' giant torsos and four-legged stances helped provide a stable platform for their necks. In contrast, giraffes have relatively small torsos, while ostriches have two-legged stances.
Sauropods also had plenty of neck vertebrae, up to 19. In contrast, nearly all mammals have no more than seven, from mice to whales to giraffes, limiting how long their necks can get. (The only exceptions among mammals are sloths and aquatic mammals known as sirenians, such as manatees.)
Moreover, while pterosaur Arambourgiania had a relatively giant head with long, spear-like jaws that it likely used to help capture prey, sauropods had small, light heads that were easy to support. These dinosaurs did not chew their meals, lacking even cheeks to store food in their mouths; they merely swallowed it, letting their guts break it down.
"Sauropod heads are essentially all mouth. The jaw joint is at the very back of the skull, and they didn't have cheeks, so they came pretty close to having Pac Man-Cookie Monster flip-top heads," researcher Mathew Wedel at the Western University of Health Sciences in Pomona, Calif., told LiveScience.
"It's natural to wonder if the lack of chewing didn't, well, come back to bite them, in terms of digestive efficiency. But some recent work on digestion in large animals has shown that after about 3 days, animals have gotten all the nutrition they can from their food, regardless of particle size.
"And sauropods were so big that the food would have spent that long going through them anyway," Wedel said. "They could stop chewing entirely, with no loss of digestive efficiency."
What's a Long Neck Good For?
Furthermore, sauropods and other dinosaurs probably could breathe like birds, drawing fresh air through their lungs continuously, instead of having to breathe out before breathing in to fill their lungs with fresh air like mammals do. This may have helped sauropods get vital oxygen down their long necks to their lungs.
"The problem of breathing through a long tube is something that's very hard for mammals to do. Just try it with a length of garden hose," Taylor said.
As to why sauropods evolved such long necks, there are currently three theories. Some of the dinosaurs may have used their long necks to feed on high leaves, like giraffes do. Others may have used their necks to graze on large swaths of vegetation by sweeping the ground side to side like geese do. This helped them make the most out of every step, which would be a big deal for such heavy creatures.
Scientists have also suggested that long necks may have been sexually attractive, therefore driving the evolution of ever-longer necks; however, Taylor and his colleagues have found no evidence this was the case.
Read more at Discovery News