For the first time, scientists have captured images of auroras above the giant ice planet Uranus, finding further evidence of just how peculiar a world that distant planet is. Detected by means of carefully scheduled observations from the Hubble Space Telescope, the newly witnessed Uranian light show consisted of short-lived, faint, glowing dots -- a world of difference from the colorful curtains of light that often ring Earth's poles.
In the new observations, which are the first to glimpse the Uranian aurora with an Earth-based telescope, the researchers detected the luminous spots twice on the dayside of Uranus -- the side that's visible from Hubble. Previously, the distant aurora had only been measured using instruments on a passing spacecraft. Unlike auroras on Earth, which can turn the sky green and purple for hours, the newly detected auroras on Uranus appeared to last only a couple of minutes.
In general, auroras are a feature of the magnetosphere, the area surrounding a planet that is controlled by its magnetic field and shaped by the solar wind, a steady flow of charged particles emanating from the sun. Auroras are produced in the atmosphere as charged solar wind particles accelerate in the magnetosphere and are guided by the magnetic field close to the magnetic poles -- that's why Earth's auroras are found at high latitudes.
But in contrast to Earth -- or even Jupiter and Saturn -- "the magnetosphere of Uranus is very poorly known," said Laurent Lamy, with the Observatoire de Paris in Meudon, France, who led the new research.
The results from his team, which includes researchers from France, the United Kingdom, and the United States, will be published on April 14 in Geophysical Research Letters, a journal of the American Geophysical Union.
Auroras on Uranus are fainter than they are on Earth, and the planet is more than 4 billion kilometers (2.5 billion miles) away. Previous Earth-bound attempts to detect the faint auroras were inconclusive. Astronomers got their last good look at Uranian auroras 25 years ago when the Voyager 2 spacecraft whizzed past the planet and recorded spectra of the radiant display.
"This planet was only investigated in detail once, during the Voyager flyby, dating from 1986. Since then, we've had no opportunities to get new observations of this very unusual magnetosphere," Lamy noted.
Planetary scientists know that Uranus is an oddball among the solar system's planets when it comes to the orientation of its rotation axis. Whereas the other planets resemble spinning tops, circulating around the Sun, Uranus is like a top that was knocked on its side -- but still keeps spinning.
The researchers suspect that the unfamiliar appearance of the newly observed auroras is due to Uranus's rotational weirdness and the peculiar traits of its magnetic axis. The magnetic axis is both offset from the center of the planet and tilted at an angle of 60 degrees from the rotational axis -- an extreme tilt compared to the 11-degree difference on Earth. Scientists theorize that Uranus's magnetic field is generated by a salty ocean within the planet, resulting in the off-center magnetic axis.
The 2011 auroras differ not only from Earth's auroras but also from the Uranian ones previously detected by Voyager 2. When that spacecraft made its flyby decades ago, Uranus was near its solstice -- its rotational axis was pointed toward the Sun. In that configuration, the magnetic axis stayed at a large angle from the solar wind flow, producing a magnetosphere similar to Earth's magnetosphere, although more dynamic. Under those 1986 solstice conditions, the auroras lasted longer than the recently witnessed ones and were mainly seen on the nightside of the planet, similar to what's observed on Earth, Lamy said. Hubble can't see the far side of the planet, however, so researchers don't know what types of auroras, if any, were generated there.
The new set of observations, however, is from when the planet was near equinox, when neither end of the Uranian rotational axis aims at the Sun, and the axis aligns almost perpendicular to the solar wind flow. Because the planet's magnetic axis is tilted, the daily rotation of Uranus during the period around the equinox causes each of its magnetic poles to point once a day toward the Sun, likely responsible for a very different type of aurora than the one that was seen at solstice, Lamy explained.
"This configuration is unique in the solar system," added Lamy, who noted that the two transient, illuminated spots observed in 2011 were close to the latitude of Uranus's northern magnetic pole.
Capturing the images of Uranus's auroras resulted from a combination of good luck and careful planning. In 2011, Earth, Jupiter and Uranus were lined up so that the solar wind could flow from the Sun, past Earth and Jupiter, and then toward Uranus. When the Sun produced several large bursts of charged particles in mid-September 2011, the researchers used Earth-orbiting satellites to monitor the solar wind's local arrival two to three days later. Two weeks after that, the solar wind sped past Jupiter at 500 kilometers per second (310 miles per second). Calculating that the charged particles would reach Uranus in mid-November, the team scrambled to schedule time on the Hubble Space Telescope.
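The timing in this account can be sanity-checked with simple arithmetic. Assuming a roughly constant wind speed of 500 kilometers per second (the figure quoted above) and a mean Sun-Uranus distance of about 19.2 astronomical units (an assumption for this sketch, not a figure from the article), the travel time comes out to roughly two months, consistent with a mid-September eruption arriving in mid-November:

```python
# Back-of-the-envelope solar wind travel time from the Sun to Uranus.
# Assumes a constant wind speed; real solar wind speeds vary along the way.

AU_KM = 149_597_870.7       # one astronomical unit, in kilometers
SUN_URANUS_AU = 19.2        # approximate mean Sun-Uranus distance (assumption)
WIND_SPEED_KM_S = 500.0     # solar wind speed quoted in the article

distance_km = SUN_URANUS_AU * AU_KM
travel_seconds = distance_km / WIND_SPEED_KM_S
travel_days = travel_seconds / 86_400  # 86,400 seconds per day

print(f"{travel_days:.0f} days")  # -> "66 days", about two months
```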
Read more at Science Daily
Apr 14, 2012
Weird Super-Earths Found Orbiting Neighbor Star
Astronomers believe they have found a second distant planet around Fomalhaut, a bright young neighbor star, and that the far-out world -- like its sister planet -- is shepherding and shaping the star's ring of dust.
If confirmed, theorists have some work to do explaining how the planet, believed to be a few times bigger than Mars, ended up 155 times as far away from its parent star as Earth is from the sun.
"We're learning a lot about planets that are close to their stars, but that is not the full picture. We also want to know about systems where planets are very far out. By considering near-, far- and mid-range, we can get a complete picture of planet formation," University of Florida astronomer Aaron Boley told Discovery News.
Of key interest is figuring out whether the planets formed in place or somehow migrated out there, bumped like celestial billiard balls after gravitational encounters with another body or bodies closer to the star.
"Whether that can actually happen is a very active area of research," Boley said.
If Fomalhaut's planets are indeed ring shepherds, they’ve been on the job a long time, roughly 100 million years.
"Relative to the age of the star, they must have formed quickly," Boley added.
The suspected planet would be the second planet found orbiting Fomalhaut, a very bright star located about 25 light-years away in the constellation Piscis Austrinus. Fomalhaut is twice as big as our sun and encircled by a ring of dust about 16 times wider than the distance between the sun and Earth.
The inner edge of the ring is about 135 times as far away from the star as Earth is from the sun.
The finding was made with a new telescope called ALMA, an acronym for Atacama Large Millimeter/submillimeter Array. With just 15 of its planned 66 antennas operational, astronomers already are expecting ALMA to revolutionize millimeter-wavelength astronomy, much like the Hubble Space Telescope transformed optical astronomy.
"The Fomalhaut image is just the beginning. They haven't even finished getting all of their data. This is just a sneak peek," astronomer Paul Kalas, with the University of California at Berkeley, told Discovery News.
Kalas and colleagues used Hubble Space Telescope images taken in 2004 and 2006 to pinpoint a speck of light believed to be the first direct picture of a planet in orbit around another star. Astronomers predicted Fomalhaut had a planet smaller than Saturn inward of the ring after earlier observations showed the ring's sharp inner edge.
Read more at Discovery News
Apr 13, 2012
Nanoscientists Find Long-Sought Majorana Particle
Scientists at TU Delft's Kavli Institute and the Foundation for Fundamental Research on Matter (FOM Foundation) have succeeded for the first time in detecting a Majorana particle. In the 1930s, the brilliant Italian physicist Ettore Majorana deduced from quantum theory the possible existence of a very special particle, one that is its own anti-particle: the Majorana fermion. Such a 'Majorana' would sit right on the border between matter and anti-matter.
Nanoscientist Leo Kouwenhoven already caused great excitement among scientists in February by presenting the preliminary results at a scientific congress. Today, the scientists have published their research in Science. The research was financed by the FOM Foundation and Microsoft.
Quantum computer and dark matter
Majorana fermions are very interesting -- not only because their discovery opens up a new and uncharted chapter of fundamental physics; they may also play a role in cosmology. A proposed theory assumes that the mysterious 'dark matter', which forms the greatest part of the universe, is composed of Majorana fermions. Furthermore, scientists view the particles as fundamental building blocks for the quantum computer. Such a computer would be far more powerful than the best supercomputer, but so far exists only in theory. Unlike an 'ordinary' quantum computer, a quantum computer based on Majorana fermions would be exceptionally stable and barely sensitive to external influences.
Nanowire
For the first time, scientists in Leo Kouwenhoven's research group managed to create a nanoscale electronic device in which a pair of Majorana fermions 'appear' at either end of a nanowire. They did this by combining an extremely small nanowire, made by colleagues from Eindhoven University of Technology, with a superconducting material and a strong magnetic field. "The measurements of the particle at the ends of the nanowire cannot be explained other than by the presence of a pair of Majorana fermions," says Leo Kouwenhoven.
Particle accelerators
It is theoretically possible to detect a Majorana fermion with a particle accelerator such as the one at CERN. The current Large Hadron Collider appears to be insufficiently sensitive for that purpose but, according to physicists, there is another possibility: Majorana fermions can also appear in properly designed nanostructures. "What's magical about quantum mechanics is that a Majorana particle created in this way is similar to the ones that may be observed in a particle accelerator, although that is very difficult to comprehend," explains Kouwenhoven. "In 2010, two different groups of theorists came up with a solution using nanowires, superconductors and a strong magnetic field. We happened to be very familiar with those ingredients here at TU Delft through earlier research." Microsoft approached Leo Kouwenhoven to help them lead a special FOM programme in search of Majorana fermions, resulting in a successful outcome.
Read more at Science Daily
Mummified Kitten Served As Egyptian Offering
Two thousand years ago, an Egyptian purchased a mummified kitten from a breeder, to offer as a sacrifice to the goddess Bastet, new research suggests.
Between about 332 B.C. and 30 B.C. in Egypt, cats were bred near temples specifically to be mummified and used as offerings.
The cat mummy came from the Egyptian Collection of the National Archeological Museum in Parma, Italy. It was bought by the museum in the 18th century from a collector. Because of how the museum acquired it, there's no documentation about where the mummy came from.
Cat mummies from this period are common, especially mummies of kittens. "Kittens, aged 2 to 4 months old, were sacrificed in huge numbers, because they were more suitable for mummification," the authors write in the paper, published in the April 2012 issue of the Journal of Feline Medicine and Surgery.
The researchers took a radiograph -- similar to an X-ray -- of the mummy to see under the wrappings, and found that the small cat was in fact a kitten, only about 5 or 6 months old.
"The fact that the cat was young suggests that it was one of those bred specifically for mummification," study researcher Giacomo Gnudi, a professor at the University of Parma, said in a statement.
The cat was wrapped as tightly as possible, and had been placed in a sitting position before mummification, similar to the seated cats depicted in hieroglyphics from the same era. To make the cat take up as little space as possible, the embalmers fractured some of the cat's bones, including a backbone at the base of the spine to position the tail as close to the body as possible, and ribs to make the front limbs sit closer to the body.
Read more at Discovery News
Fair-Furred Leopard is a True Pink Panther
The latest leopard fashions are in from Africa, and this year pink is hot, hot, HOT!
A male leopard in South Africa's Madikwe Game Reserve is dazzling the gawking tourists with his strawberry locks. But watch out gazelles, this fair-furred feline has still got the spots to keep him camouflaged and make him a killer when he's out on the prowl.
They say a leopard can't change his spots, but when you look as good as this pink panther, who would want to?
Perhaps the leopard took some fashion hints from Snappy the orange crocodile.
Tourists had reported seeing the fashion-forward feline, but only recently did Deon De Villiers, a photographer and safari guide, catch the kitty on film. He sent the photo to Panthera, a wild cat conservation group.
The light-furred leopard may have erythrism, Panthera's president Luke Hunter said in National Geographic. Erythrism is a genetic condition believed to cause production of either too much red pigment or too little dark pigment.
"It's really rare—I don't know of another credible example in leopards," said Hunter.
Read more at Discovery News
Mars Viking Robots 'Found Life'
New analysis of 36-year-old data, resuscitated from printouts, shows that NASA found life on Mars, an international team of mathematicians and scientists concludes in a paper published this week.
Further, NASA doesn't need a human expedition to Mars to nail down the claim, neuropharmacologist and biologist Joseph Miller, with the University of Southern California Keck School of Medicine, told Discovery News.
"The ultimate proof is to take a video of a Martian bacteria. They should send a microscope -- watch the bacteria move," Miller said.
"On the basis of what we've done so far, I'd say I'm 99 percent sure there's life there," he added.
Miller's confidence stems in part from a new study that re-analyzed results from a life-detection experiment conducted by NASA's Viking Mars robots in 1976.
Researchers crunched raw data collected during runs of the Labeled Release experiment, which looked for signs of microbial metabolism in soil samples scooped up and processed by the two Viking landers. The general consensus among scientists has been that the experiment found geological, not biological, activity.
The new study took a different approach. Researchers distilled the Viking Labeled Release data, provided as hard copies by the original researchers, into sets of numbers and analyzed the results for complexity. Since living systems are more complicated than non-biological processes, the idea was to look at the experiment results from a purely numerical perspective.
They found close correlations between the Viking experiment results' complexity and those of terrestrial biological data sets. They say the high degree of order is more characteristic of biological, rather than purely physical processes.
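The "complexity" comparison described above can be made concrete with a toy example. The paper's actual measures are not named in this account, so the sketch below uses Shannon entropy of a binned signal purely as a stand-in complexity metric (the function name and test signals are invented for illustration): a featureless, flat signal scores lower than a structured, varying one.

```python
import math

def shannon_entropy(series, bins=8):
    """Shannon entropy of a numeric series after binning into `bins` buckets.
    One simple complexity measure of the kind such a study might use;
    illustrative only, not the paper's actual method."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0   # guard against a constant series
    counts = [0] * bins
    for x in series:
        idx = min(int((x - lo) / width), bins - 1)  # clamp the max value
        counts[idx] += 1
    n = len(series)
    return sum(-(c / n) * math.log2(c / n) for c in counts if c)

# A flat (geochemical-style) signal scores lower than a structured one.
flat = [1.0] * 64
varied = [math.sin(i / 3) + 0.1 * (i % 5) for i in range(64)]
print(shannon_entropy(flat), shannon_entropy(varied))  # flat scores 0.0
```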
Critics counter that the method has not yet been proven effective for differentiating between biological and non-biological processes on Earth so it's premature to draw any conclusions.
"Ideally to use a technique on data from Mars one would want to show that the technique has been well calibrated and well established on Earth. The need to do so is clear; on Mars we have no way to test the method, while on Earth we can," planetary scientist and astrobiologist Christopher McKay, with NASA's Ames Research Center in Moffett Field, Calif., told Discovery News.
Read more at Discovery News
Apr 12, 2012
Significant Skull Differences Between Closely Linked Groups
In order to accurately identify skulls as male or female, forensic anthropologists need to have a good understanding of how the characteristics of male and female skulls differ between populations. A new study from North Carolina State University shows that these differences can be significant, even between populations that are geographically close to one another.
The researchers looked at the skulls of 27 women and 28 men who died in Lisbon, Portugal, between 1880 and 1975. They also evaluated the skulls of 40 women and 39 men who died between 1895 and 1903 in the rural area of Coimbra, just over 120 miles north of Lisbon.
The researchers found significant variation between female skulls from Lisbon and those from Coimbra. "The differences were in the shape of the skull, not the size," says Dr. Ann Ross, professor of anthropology at NC State and co-author of a paper describing the study. "This indicates that the variation is due to genetic differences, rather than differences of diet or nutrition." The researchers found little difference between the male skulls.
Specifically, the researchers found that the female skulls from Lisbon exhibited greater interorbital distance than the skulls of Coimbra females. In other words, the women from Lisbon had broader noses and eyes that were spaced farther apart.
This difference in craniofacial characteristics may stem from an influx of immigrants into Lisbon, which is a port city, Ross says. However, it may also be a result of preferential mate selection -- meaning Lisbon men were finding mates abroad, or were more attracted to women with those facial features.
Read more at Science Daily
'Time Machine' Will Study the Early Universe
A new scientific instrument, a "time machine" of sorts, built by UCLA astronomers and colleagues, will allow scientists to study the earliest galaxies in the universe, which could never be studied before.
The five-ton instrument, the most advanced and sophisticated of its kind in the world, goes by the name MOSFIRE (Multi-Object Spectrometer for Infra-Red Exploration) and has been installed in the Keck I Telescope at the W.M. Keck Observatory atop Mauna Kea in Hawaii.
MOSFIRE gathers light in infrared wavelengths -- invisible to the human eye -- allowing it to penetrate cosmic dust and see distant objects whose light has been stretched or "redshifted" to the infrared by the expansion of the universe.
"The instrument was designed to study the most distant, faintest galaxies," said UCLA physics and astronomy professor Ian S. McLean, project leader on MOSFIRE and director of UCLA's Infrared Laboratory for Astrophysics. "When we look at the most distant galaxies, we see them not as they are now but as they were when the light left them that is just now arriving here. Some of the galaxies that we are studying were formed some 10 billion years ago -- only a few billion years after the Big Bang. We are looking back in time to the era of the formation of some of the very first galaxies, which are small and very faint. That is an era that we need to study if we are going to understand the large-scale structure of the universe."
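The "redshifted to the infrared" point can be checked with the standard relation for an expanding universe: the observed wavelength equals the rest wavelength times (1 + z). The numbers below are illustrative assumptions (the H-alpha line's rest wavelength and z = 2, roughly the redshift of galaxies seen about 10 billion years back), not values from the article:

```python
# Observed wavelength of a redshifted emission line:
# lambda_observed = lambda_rest * (1 + z)

H_ALPHA_NM = 656.28   # rest wavelength of the H-alpha line, in nanometers
z = 2.0               # illustrative redshift for galaxies ~10 Gyr back

lambda_obs_nm = H_ALPHA_NM * (1 + z)
print(f"{lambda_obs_nm / 1000:.2f} micrometers")  # -> "1.97 micrometers"
```

At that redshift a line emitted in visible light lands near 2 micrometers, squarely in the infrared band MOSFIRE is built to observe.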
The five-ton instrument, the most advanced and sophisticated of its kind in the world, goes by the name MOSFIRE (Multi-Object Spectrometer for Infra-Red Exploration) and has been installed in the Keck I Telescope at the W.M. Keck Observatory atop Mauna Kea in Hawaii.
MOSFIRE gathers light in infrared wavelengths -- invisible to the human eye -- allowing it to penetrate cosmic dust and see distant objects whose light has been stretched or "redshifted" to the infrared by the expansion of the universe.
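The redshift effect described here is simple arithmetic: an emitted wavelength arrives stretched by a factor of (1 + z). A minimal sketch (the H-alpha rest wavelength is a standard value; the redshift chosen is purely illustrative, not a figure from the article):

```python
# Cosmological redshift stretches an emitted wavelength by a factor (1 + z).
H_ALPHA_REST_NM = 656.3  # rest wavelength of the H-alpha line, nm (visible red light)

def observed_wavelength(rest_nm: float, z: float) -> float:
    """Wavelength at which an emission line is observed after redshift z."""
    return rest_nm * (1 + z)

# At an illustrative redshift of 1.5, visible H-alpha light arrives well past
# the ~700 nm limit of human vision, squarely in the near-infrared.
print(round(observed_wavelength(H_ALPHA_REST_NM, 1.5)))  # 1641 (nm)
```

This is why an instrument built for the most distant galaxies must observe in the infrared: their once-visible light simply no longer arrives at visible wavelengths.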
"The instrument was designed to study the most distant, faintest galaxies," said UCLA physics and astronomy professor Ian S. McLean, project leader on MOSFIRE and director of UCLA's Infrared Laboratory for Astrophysics. "When we look at the most distant galaxies, we see them not as they are now but as they were when the light left them that is just now arriving here. Some of the galaxies that we are studying were formed some 10 billion years ago -- only a few billion years after the Big Bang. We are looking back in time to the era of the formation of some of the very first galaxies, which are small and very faint. That is an era that we need to study if we are going to understand the large-scale structure of the universe."
With MOSFIRE, it will now become much easier to identify faint galaxies, "families of galaxies" and merging galaxies. The instrument also will enable detailed observations of planets orbiting nearby stars, star formation within our own galaxy, the distribution of dark matter in the universe and much more.
"We would like to study the environment of those early galaxies," said McLean, who built the instrument with colleagues from UCLA, the California Institute of Technology and UC Santa Cruz, along with industrial sub-contractors. "Sometimes there are large clusters with thousands of galaxies, sometimes small clusters. Often, black holes formed in the centers of galaxies."
Light collected by the Keck I Telescope was fed into MOSFIRE for the first time on April 4, producing an astronomical image. Astronomers are expected to start using MOSFIRE by September, following testing and evaluation in May and June.
MOSFIRE allows astronomers to take an infrared image of a field and to study 46 galaxies simultaneously, providing the infrared spectrum for each galaxy. Currently, it can take three hours or longer to obtain a good spectrum of just one galaxy, McLean noted.
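The multiplexing advantage in the paragraph above can be sketched as back-of-envelope arithmetic. The per-exposure time and the 460-galaxy survey size below are illustrative assumptions, not MOSFIRE performance figures; only the 46-object count and the "three hours or longer" figure come from the article:

```python
import math

# Back-of-envelope survey-time comparison: one spectrum at a time
# ("three hours or longer" each) versus 46 objects per exposure.
HOURS_PER_EXPOSURE = 3   # assumed per-exposure time, taken from the single-object figure
GALAXIES_PER_MASK = 46   # MOSFIRE's simultaneous-object count

def survey_hours(n_galaxies: int, multiplex: int, hours_per_exposure: float) -> float:
    """Total telescope hours to get spectra of n_galaxies, `multiplex` at a time."""
    exposures = math.ceil(n_galaxies / multiplex)
    return exposures * hours_per_exposure

# 460 target galaxies, assuming (purely for illustration) the same
# exposure time per pointing in both modes:
print(survey_hours(460, 1, HOURS_PER_EXPOSURE))                 # 1380
print(survey_hours(460, GALAXIES_PER_MASK, HOURS_PER_EXPOSURE)) # 30
```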
McLean built the world's first infrared camera for wide use by astronomers in 1986 and since then has built eight increasingly sophisticated infrared cameras and spectrometers -- which split light into its component colors -- and has helped build several others.
McLean and Charles Steidel, the Lee A. DuBridge Professor of Astronomy at the California Institute of Technology, led the project to build MOSFIRE from scratch over seven years. Harland Epps, a UC Santa Cruz professor of astronomy and astrophysics, designed the optics for the instrument. A team of nearly two dozen people helped, including Kristin Kulas and Gregory Mace, UCLA graduate students in physics and astronomy who work in McLean's laboratory; Keith Matthews, an instrument designer from Caltech; and Sean Adkins, an engineer who is the instrument program manager for the Keck Observatory in Hawaii. Most of the mechanical parts for MOSFIRE were built at UCLA and Caltech. The slit unit that enables 46 objects to be isolated was manufactured in Switzerland. The computer programming was led by UCLA.
"My father, who was an engineer, called me an astronomer by inclination, a physicist by training and an engineer by default," McLean said. "I'm an applied physicist and an astronomer."
MOSFIRE cost $14 million and likely would have cost at least twice as much if the scientists had not built it themselves, McLean estimates.
MOSFIRE was funded by the National Science Foundation (through the Telescope System Instrumentation program) and by Gordon and Betty Moore. Gordon Moore is co-founder, former chairman and chief executive officer, and chairman emeritus of Intel Corp.
"He is a wonderful man with a penetrating intellect," McLean said of Moore. "We are deeply indebted to him and hope to be able to show him MOSFIRE this summer."
"We had an outstanding team," he added, "with four institutions involved and many industrial partners. It was a fantastic team effort."
In the late 1990s, McLean delivered an infrared spectrometer called NIRSPEC to the Keck Observatory in Hawaii, which at the time housed the world's largest optical and infrared telescopes. NIRSPEC was then the most powerful infrared spectrometer in the world, and it remains in service on the Keck II Telescope.
While NIRSPEC's camera has one megapixel, MOSFIRE has four megapixels. MOSFIRE's detectors are approximately five times more sensitive than those on NIRSPEC and about 100 times more sensitive than those from McLean's 1986 infrared camera. In addition, the digital imaging devices available today are far superior to those of 15 years ago. The result is that MOSFIRE is much more sensitive to faint objects.
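Chaining the two sensitivity ratios quoted above yields a comparison the article leaves implicit; the snippet below is just that arithmetic, not a measured figure:

```python
# Chaining the article's sensitivity ratios gives the implied comparison
# between NIRSPEC's detectors and McLean's 1986 camera.
MOSFIRE_VS_NIRSPEC = 5    # "approximately five times more sensitive"
MOSFIRE_VS_1986 = 100     # "about 100 times more sensitive"

nirspec_vs_1986 = MOSFIRE_VS_1986 / MOSFIRE_VS_NIRSPEC
print(nirspec_vs_1986)  # 20.0 -- NIRSPEC's detectors were roughly 20x the 1986 camera's
```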
Discoveries made with NIRSPEC include the detection of water on comets, insights into the stars orbiting the enormous black hole at the center of the Milky Way galaxy, and the discovery of the chemical composition of brown dwarfs. Brown dwarfs, failed stars about the size of Jupiter but with a much larger mass, are considered the "missing link" between gas giant planets like Jupiter and small, low-mass stars.
Read more at Science Daily
Tardigrade Eggs Might Survive Interplanetary Trip
Microscopic animals called tardigrades are among the few lifeforms thought capable of surviving the intense radiation, extreme temperatures and life-sucking vacuum of outer space.
Even their eggs can survive space-like conditions, hinting at the possibility of successful hatches on other planets.
“[I]f we are to assess the ability of tardigrades to survive transfer among planets or to thrive in extreme environments, they must be able to reproduce,” wrote astrobiologists who tested tardigrades in a study published April 10 in Astrobiology.
Adult tardigrades, also known as water bears, thrive in wet conditions and eat algae, bacteria or single-celled animals. If their puddles dry, they don’t die, but enter a state of total metabolic shutdown called anhydrobiosis. There they can remain for up to a decade, then spring back to life when it’s wet again.
Researchers in 2007 launched anhydrobiotic adults into orbit above Earth to see if they would survive. Those animals endured naked exposure to space for 10 days, and a few even made it through an excessive dose of ultraviolet radiation while back on Earth.
Other laboratory experiments show that adult tardigrades can survive cold near absolute zero (-459 degrees Fahrenheit), heat exceeding 300 degrees Fahrenheit, pressures dozens of times greater than at the bottom of the Marianas Trench, and intense blasts of radiation.
But what of tardigrade eggs? Some flew on the 2007 mission, but they weren’t exposed to the extreme temperatures and radiation found outside Earth’s protective magnetic shield.
To learn how the eggs would fare, NASA and astrobiologists in Japan devised three extreme stress tests for the eggs of a tardigrade species called Ramazzottius varieornatus.
In one set of tests, more than 70 percent of anhydrobiotic eggs survived temperatures as low as -320 degrees Fahrenheit and as high as 122 degrees Fahrenheit. Eggs exposed to vacuum-like conditions hatched just as well as normal eggs. Finally, more than half of anhydrobiotic eggs endured 1,690 Grays of radiation. A human would die in days if exposed to one percent of that dose.
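The dose comparison in the paragraph above is easy to make concrete. The human LD50 used below is a commonly cited outside reference value (roughly 4-5 Gy acute whole-body exposure without treatment), not a number from the article:

```python
# Dose arithmetic behind the article's comparison: the eggs tolerated
# 1,690 Grays, while one percent of that already dwarfs a lethal human dose.
EGG_SURVIVED_GY = 1690.0
one_percent = EGG_SURVIVED_GY / 100   # the article's "one percent" figure
print(one_percent)  # 16.9 (Gy)

# Commonly cited acute whole-body LD50 for an untreated human: ~4-5 Gy
# (an outside reference figure, assumed here for scale).
HUMAN_LD50_GY = 5.0
print(one_percent / HUMAN_LD50_GY)  # 3.38 -- several times a typically lethal dose
```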
Fully hydrated eggs, however, barely survived any of the tests.
It’s not known how tardigrade eggs survive such punishment. Whatever the mechanisms, the study’s authors think their results are good news for dried-out tardigrade families ejected into space, perhaps by an asteroid strike.
And if a few should somehow end up on, say, Mars, and be fortunate enough to find liquid water at Earth-room temperature, they might even hatch there.
Read more at Wired Science
Baboons Can Recognize Words
Baboons can learn to tell the difference between real four-letter words and nonsense combinations of letters. And once they figure out the patterns, these monkeys can guess with impressive accuracy whether a new word is real or fake.
Because baboons can’t actually read, a new study supports the theory that the brains of our primate ancestors held the necessary hardware for understanding written words long before humans evolved. Only after we started writing and reading, roughly 5,400 years ago, did we apply our object-recognition abilities to letter symbols.
And even though we think of letters as sound units that allow us to piece words together, the new findings suggest that our brains may also view written letters like the legs on a table or the wheels on a car. Each part fits together to create an object that we recognize as a whole.
Eventually, the findings might weigh in on debates about how best to teach children to read.
“Obviously, we are using letters to get from the printed to the spoken form, and it is absolutely essential for kids to learn that this has to happen, but that’s only part of the story,” said Jonathan Grainger, a cognitive psychologist at CNRS, a national research center in Marseille, France. “The other reason we use letters in the very first phases of learning to read is that we’re basically doing what we do with ordinary everyday objects – using object parts to reconstruct the whole identity.”
“We can now look at what happens when baboons are learning words and also associating them with meaning,” he added. “We have a new paradigm that needs to be explored.”
In a large enclosure about 30 miles from Marseille, resident baboons can enter small testing booths whenever they feel like it. Inside, a computer scans a microchip embedded in each animal’s arm and launches the appropriate experiment.
For the new study, six baboons spent about six weeks learning to recognize four-letter English words on a computer screen. In 100-round trials, words came up one at a time on the screen. After tapping the word, baboons touched either an oval to indicate that it was a real word or a plus sign to signal a nonsense word.
Within each trial, a single word would come up again and again, intermixed with real words that the baboon had already learned as well as fake words. All of the words, both real and fake, contained three consonants and one vowel. For each correct answer, baboons received a food reward.
By the end of the training period, which included about 50,000 trials for each animal, all of the baboons had learned to recognize at least 81 words at an accuracy rate of about 75 percent, the researchers report today in the journal Science. One animal learned more than 300 words.
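A quick binomial check shows how far the reported 75 percent accuracy sits above the 50 percent chance level of a two-choice task. Evaluating it over a single 100-trial block is an assumption based on the article's "100-round trials"; over 50,000 trials the result would be even more extreme:

```python
from math import comb

# Sanity check: with two response options, pure guessing yields 50%.
# How unlikely is >= 75 correct in a 100-trial block under guessing?
def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_value = binom_tail(100, 75)
print(p_value)  # roughly 3e-7: 75% accuracy is far beyond chance
```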
Once the baboons had boosted their vocabularies, further testing showed that the animals could often tell whether a word they had never seen before was real or fake. The more similar the fake word was to actual words, the more likely the animals were to guess that it was real, suggesting that they had learned to recognize patterns of letters that often show up in the English language.
The baboons in the experiment were not actually reading, nor did they understand that what they were looking at had symbolic meaning, said Michael Platt, a neurobiologist at Duke University in Durham, North Carolina.
Instead, the baboons’ ability to recognize letter patterns suggests that, when humans started reading and writing, they probably tapped into already existing brain circuitry that developed to recognize visual patterns.
Along with other research, the findings support a theory that alphabets look the way they do because their shapes are easily recognized by these brain systems. One implication is that dyslexia, at its root, might be a kind of visual disorder.
Read more at Discovery News
Apr 11, 2012
Astronomers Identify 12-Billion-Year-Old White Dwarf Stars Only 100 Light Years Away
A University of Oklahoma assistant professor and colleagues have identified the two oldest and closest known white dwarf stars. These 11- to 12-billion-year-old white dwarfs lie only 100 light years from Earth, making them the closest known examples of the oldest stars in the Universe, which formed soon after the Big Bang, according to the OU research group.
Mukremin Kilic, assistant professor of physics and astronomy in the OU College of Arts and Sciences and lead author on a recently published paper, announced the discovery. Kilic says, "A white dwarf is like a hot stove; once the stove is off, it cools slowly over time. By measuring how cool the stove is, we can tell how long it has been off. The two stars we identified have been cooling for billions of years."
Kilic explains that white dwarf stars are the burned out cores of stars similar to the Sun. In about 5 billion years, the Sun also will burn out and turn into a white dwarf star. It will lose its outer layers as it dies and turn into an incredibly dense star the size of Earth.
Known as WD 0346+246 and SDSS J110217.48+411315.4 (J1102), these stars are located in the constellations Taurus and Ursa Major, respectively. Kilic and colleagues obtained infrared images using NASA's Spitzer Space Telescope to measure the temperature of the stars. And, over a three-year period, they measured J1102's distance by tracking its motion using the MDM Observatory's 2.4-meter telescope near Tucson, Arizona.
"Most stars stay almost perfectly fixed in the sky, but J1102 is moving at a speed of 600,000 miles per hour and is a little more than 100 light years from Earth," remarks co-author John Thorstensen of Dartmouth College. "We found its distance by measuring a tiny wiggle in its path caused by the Earth's motion -- it's the size of a dime viewed from 80 miles away."
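Thorstensen's dime analogy can be checked with small-angle arithmetic: the annual parallax of a star 100 light years away really is about the angle a US dime (17.9 mm across, a standard value assumed here) subtends at 80 miles:

```python
import math

# Checking the "dime viewed from 80 miles away" analogy: compare the annual
# parallax of a star ~100 light years away with the dime's apparent size.
ARCSEC_PER_RAD = 3600 * 180 / math.pi
LY_PER_PARSEC = 3.2616

def parallax_arcsec(distance_ly: float) -> float:
    """Annual parallax in arcseconds: p = 1 / d, with d in parsecs."""
    return LY_PER_PARSEC / distance_ly

def subtended_arcsec(size_m: float, distance_m: float) -> float:
    """Small-angle apparent size of an object, in arcseconds."""
    return (size_m / distance_m) * ARCSEC_PER_RAD

star = parallax_arcsec(100)                    # ~0.033 arcsec
dime = subtended_arcsec(0.0179, 80 * 1609.34)  # dime diameter 17.9 mm, 80 miles away
print(f"{star:.3f} vs {dime:.3f} arcsec")      # comparable, as the quote says
```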
"Based on the optical and infrared observations of these stars and our analysis, these stars are about 3700 and 3800 degrees on the surface," said co-author Piotr Kowalski of Helmholtz Centre Potsdam in Germany. Kowalski modeled the atmospheric parameters of these stars. Based on these temperature measurements, Kilic and his colleagues were able to estimate the ages of the stars.
"It is like a crime scene investigation," added Kilic. "We measure the temperature of the dead body -- in our case a dead star, then determine the time of the crime. These two white dwarf stars have been dead and cooling off almost for the entire history of the Universe."
Read more at Science Daily
Oldest-Ever Reptile Embryos Unearthed
Dating back 280 million years or so, the oldest known fossil reptile embryos have been unearthed in Uruguay and Brazil. They belong to mesosaurs, ancient aquatic reptiles. The study of these exceptionally well-preserved fossils suggests that mesosaurs were either viviparous, giving birth to live young (which would push back the earliest known record of this mode of reproduction by 60 million years), or that they laid eggs in advanced stages of development.
These finds, published in the journal Historical Biology, were revealed by an international team including Michel Laurin, CNRS senior researcher at the Centre de Recherche sur la Paléobiodiversité et les Paléoenvironnements (CNRS/Museum national d’histoire naturelle/UPMC).
Although the oldest known adult amniote fossils date back some 315 million years, very few fossil eggs and embryos are available to paleontologists. The fossilized embryos of mesosaurs, ancient aquatic reptiles that lived ca. 280 million years ago, therefore shed new light on these animals' reproductive mechanism.
In Brazil, the team uncovered a fossil specimen in gestation, which revealed that mesosaur embryos were retained in the uterus during most of their development. These reptiles, therefore, were probably viviparous.
In addition, the same researchers unearthed 26 adult mesosaur specimens in Uruguay, all of which were associated with embryos or very young individuals, dating from the same period as the Brazilian fossil. Although these more or less disarticulated specimens are difficult to interpret, most of them are probably embryos in the uterus, thus backing up the hypothesis that mesosaurs were viviparous. The largest of these fossils may be young animals that were looked after by at least one of the parents, pointing to the existence of parental care. However, one isolated mesosaur egg was also found at the Uruguayan site. This find casts doubt on the hypothesis of viviparity (which, in theory, excludes the laying of eggs). It suggests that the Uruguay mesosaurs laid eggs at an advanced stage of development, which then hatched shortly afterwards (several minutes to days later).
Read more at Science Daily
Tennessee’s Anti-Science Bill Becomes Law
After the US Supreme Court’s 1987 decision forbidding the teaching of creationism in science classes, those who objected to the teaching of evolution modified their ideas slightly. They relabeled these ideas “Intelligent Design.” In the wake of that tactic’s defeat in the courts, the opponents of science education retooled again.
This time, they targeted a number of state legislatures with two categories of bills that shared nearly identical wording. This tactic saw success in Louisiana, although a number of similar bills were considered in other states. They’ve now achieved their second success — the passage of a law in Tennessee.
One approach to diluting science education was a series of bills that allowed schools to use supplementary materials in science classes; conveniently, the anti-evolution Discovery Institute published a supplementary text at about the same time.
An alternate approach has appeared in a number of bills (again, all with nearly identical language) that would protect teachers who present the “strengths and weaknesses” of scientific theories, although the bills single out evolution, climate change, and a couple of topics that aren’t even theories. Again, the goal seems to be to use neutral language that will allow teachers to reiterate many of the spurious arguments against the widely accepted scientific understandings. Tennessee’s House and Senate had passed a bill that took precisely this approach.
The state’s governor, saying the bill doesn’t “bring clarity,” has decided not to sign it. But he’s decided not to veto it either, which will allow it to become law.
Read more at Wired Science
The Simpsons creator Matt Groening reveals Springfield, Oregon is his inspiration
Springfield, Oregon is 100 miles south of Groening's hometown of Portland, where he grew up in Evergreen Terrace – the same name as The Simpsons' family home.
"Springfield was named after Springfield, Oregon. The only reason is that when I was a kid, the TV show 'Father Knows Best' took place in the town of Springfield, and I was thrilled because I imagined that it was the town next to Portland, my hometown," he told Smithsonian magazine.
"When I grew up, I realised it was just a fictitious name. I also figured out that Springfield was one of the most common names for a city in the US. In anticipation of the success of the show, I thought, 'This will be cool; everyone will think it's their Springfield'. And they do."
The cultural phenomenon that is The Simpsons has been on the air for 22 years and, after more than 500 episodes, is both the longest-running sitcom and the longest-running cartoon in America.
Groening also spoke about how Homer, Marge, Lisa and Maggie are named after his own father, mother and two sisters.
He revealed that, while his father was the inspiration for the father in The Simpsons, the real life Homer did not eat doughnuts.
"My father was a really sharp cartoonist and filmmaker. He used to tape-record the family surreptitiously, either while we were driving around or at dinner, and in 1963 he and I made up a story about a brother and a sister, Lisa and Matt, having an adventure out in the woods with animals. I told it to my sister Lisa, and she in turn told it to my sister Maggie.
"My father recorded the telling of the story by Lisa to Maggie, and then he used it as the soundtrack to a movie. So the idea of dramatising the family – Lisa, Maggie, Matt – I think was the inspiration for doing something kind of autobiographical with The Simpsons."
Groening said he has long given fake answers when asked about the Simpsons' hometown, leaving open the possibility that his latest one is itself another fake.
"I don't want to ruin it for people, you know? Whenever people say it's Springfield, Ohio, or Springfield, Massachusetts, or Springfield, wherever, I always go, 'Yup, that's right.'"
However, the city has already incorporated the Simpsons into its own town lore. The Springfield Museum features a couch similar to the animated one shown in the show's opening credits, and a plaque marking the movie's release.
"Yo to Springfield, Oregon – the real Springfield!" Groening wrote. "Your pal, Matt Groening, proud Oregonian!"
The show has made a running joke of hiding the true Springfield's location. In one episode, daughter Lisa points to Springfield on a map, but the animated "camera view" is blocked by son Bart's head.
People in the real Springfield – the one in Oregon – took on the mantle of the show's hometown after Groening visited during a tour before the 2007 film The Simpsons Movie.
Back then, tiny Springfield, Vermont, beat out 13 other like-named cities, including the one in Oregon, to host the movie premiere. The cities submitted videos meant to connect themselves to the fictional Springfield.
When Springfield, Oregon, community-relations manager Niel Laudati was told about Groening's announcement, he said: "Oh OK, we knew that."
The Springfield depicted in "The Simpsons" isn't always a flattering portrait. The school is falling apart, there's a constant fire at the town dump, and Mayor Quimby is chronically, helplessly corrupt.
"We kind of got past it," Laudati said. "We don't dwell on the bad stuff. Obviously we don't have a nuclear power plant. We don't have a lot of stuff in the Simpsons."
Read more at The Telegraph
Celestial Paternity Test: Moon is Earth's Child
Men are from Mars, women are from Venus, but the moon is exclusively a child of Earth, according to a new "paternity test" carried out on Apollo lunar rock samples.
Astronomers have long pondered where the moon came from. Analysis of the moon rocks brought back to Earth during the manned Apollo missions led to a radical hypothesis that was finally embraced in the mid-1980s.
The leading theory has been that a Mars-sized protoplanet barreled into Earth 4.4 billion years ago and the two bodies merged -- like a kid kneading yellow and blue Play-Doh together to make a green ball. The moon condensed quickly from the spillover ejecta. This same scenario has been applied to the birth of Pluto's moon Charon.
However, a comparative analysis of titanium from the moon, Earth and meteorites, indicates the moon's material came exclusively from Earth. Imagine, Mother Earth as a single parent!
The research, published by Junjun Zhang at the University of Chicago and co-authors, appears in the March 25 edition of Nature Geoscience.
If the moon was born of such a collision, it follows that the moon should have inherited half of its material from Earth and the rest from the impactor. That is a natural consequence in the world of biological reproduction, at least, but it should also follow from the basic physics of collisions.
It turns out that titanium tends to remain in a solid or molten state rather than being vaporized, so it is unlikely that titanium would have been incorporated into the Earth and the moon in equal amounts during the moon's birth. Titanium isotopes, forged in supernova explosions, also vary slightly in signature and abundance across the solar system. They carry a sort of isotopic postage stamp that tells what part of the solar system they came from.
The surprise, the researchers say, is that the moon's titanium has an isotopic composition identical to Earth's. There is no forensic evidence of a third-party alien interloper from another part of the solar system.
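The logic here is a simple two-endmember mixing argument, which can be sketched in a few lines of Python. All the epsilon-titanium values below are invented for illustration; they are not the study's measured data:

```python
def mixed_epsilon(eps_earth, eps_impactor, f_impactor):
    """Isotopic composition (in epsilon units) of material blended
    from an Earth-derived and an impactor-derived reservoir."""
    return (1 - f_impactor) * eps_earth + f_impactor * eps_impactor

# Hypothetical values: Earth defines the zero point; an impactor
# from elsewhere in the solar system carries a different signature.
eps_earth = 0.0
eps_impactor = 2.0

# If roughly half the moon's material came from the impactor,
# the moon should read halfway between the two:
half_mix = mixed_epsilon(eps_earth, eps_impactor, 0.5)
print(half_mix)  # 1.0

# A measured lunar value indistinguishable from Earth's instead
# implies essentially no impactor contribution (or an impactor
# with Earth-identical titanium):
def implied_impactor_fraction(eps_moon, eps_earth, eps_impactor):
    return (eps_moon - eps_earth) / (eps_impactor - eps_earth)

print(implied_impactor_fraction(0.0, eps_earth, eps_impactor))  # 0.0
```

That second possibility, an impactor isotopically identical to Earth, is exactly the loophole the snowball-planet idea below tries to exploit.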
The titanium analysis is consistent with previous work by other researchers that found identical oxygen isotopes in lunar and Earth samples.
These results threaten to send planetary scientists back to the drawing board in search of a plausible theory for the origin of the moon, but there are no satisfactory alternative scenarios for the moon's formation.
At first glance this would tend to resurrect an old idea that the moon spun off the Earth in a "mother-daughter" hypothesis. But it would take quite a wallop to spin something the mass of Earth fast enough to have a big lump of material fly off into space. We're not a ball of Play-Doh after all!
Another idea is that Earth collided with a "snowball planet" that didn't have any titanium. This would be circumstantial evidence for a lost population of gigantic flying ice cubes that no longer exist, as best as we can tell.
Read more at Discovery News
Apr 10, 2012
What Triggers a Mass Extinction? Habitat Loss and Tropical Cooling Were Once to Blame
The second-largest mass extinction in Earth's history coincided with a short but intense ice age during which enormous glaciers grew and sea levels dropped. Although it has long been agreed that the so-called Late Ordovician mass extinction -- which occurred about 450 million years ago -- was related to climate change, exactly how the climate change produced the extinction has not been known. Now, a team led by scientists at the California Institute of Technology (Caltech) has created a framework for weighing the factors that might have led to mass extinction and has used that framework to determine that the majority of extinctions were caused by habitat loss due to falling sea levels and cooling of the tropical oceans.
The work -- performed by scientists at Caltech and the University of Wisconsin, Madison -- is described in a paper currently online in the early edition of the Proceedings of the National Academy of Sciences.
The researchers combined information from two separate databases to overlay fossil occurrences on the sedimentary rock record of North America around the time of the extinction, an event that wiped out about 75 percent of marine species alive then. At that time, North America was an island continent geologists call Laurentia, located in the tropics.
Comparing the groups of species, or genera, that went extinct during the event with those that survived, the researchers were able to figure out the relative importance of several variables in dictating whether a genus went extinct during a 50-million-year interval around the mass extinction.
"What we did was essentially the same thing you'd do if confronted with a disease epidemic," says Seth Finnegan, postdoctoral scholar at Caltech and lead author of the study. "You ask who is affected and who is unaffected, and that can tell you a lot about what's causing the epidemic."
As it turns out, the strongest predictive factors of extinction on Laurentia were both the percentage of a genus's habitat that was lost when the sea level dropped and a genus's ability to tolerate broader ranges of temperatures. Groups that lost large portions of their habitat as ice sheets grew and sea levels fell, and those that had always been confined to warm tropical waters, were most likely to go extinct as a result of the rapid climate change.
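Finnegan's epidemiology analogy amounts to comparing extinction rates between groups that did and did not share a risk factor. A minimal sketch, using a toy table of made-up genera (the attributes and outcomes are invented for illustration, not the study's data):

```python
# Each row: (genus, fraction of habitat lost, tropical-only?, went extinct?)
genera = [
    ("A", 0.9, True,  True),
    ("B", 0.8, True,  True),
    ("C", 0.7, False, True),
    ("D", 0.2, True,  False),
    ("E", 0.1, False, False),
    ("F", 0.3, False, False),
]

def extinction_rate(rows, predicate):
    """Fraction of genera matching `predicate` that went extinct."""
    matching = [r for r in rows if predicate(r)]
    if not matching:
        return None
    return sum(1 for r in matching if r[3]) / len(matching)

high_loss = extinction_rate(genera, lambda r: r[1] > 0.5)   # lost most habitat
low_loss = extinction_rate(genera, lambda r: r[1] <= 0.5)   # kept most habitat
print(high_loss, low_loss)  # 1.0 0.0
```

The real study weighed several such factors at once in a multivariate model rather than one contingency split at a time, but the "who is affected, who is unaffected" comparison is the same.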
"This is the first really attractive demonstration of how you can use multivariate approaches to try to understand extinctions, which reflect amazingly complex suites of processes," says Woodward Fischer, an assistant professor of geobiology at Caltech and principal investigator on the study. "As earth scientists, we love to debate different environmental and ecological factors in extinctions, but the truth is that all of these factors interact with one another in complicated ways, and you need a way of teasing these interactions apart. I'm sure this framework will be profitably applied to extinction events in other geologic intervals."
The analysis enabled the researchers to largely rule out a hypothesis, known as the record-bias hypothesis, which says that the extinction might be explained by a significant gap in the fossil record, also related to glaciation. After all, if sea levels fell and continents were no longer flooded, sedimentary rocks with fossils would not accumulate. Therefore, the last record of any species that went extinct during the gap would show up immediately before the gap, creating the appearance of a mass extinction.
Finnegan reasoned that this record-bias hypothesis would predict that the duration of a gap in the record should correlate with higher numbers of extinctions -- if a gap persisted longer, more groups should have gone extinct during that time, so it should appear that more species went extinct all at once than for shorter gaps. But in the case of the Late Ordovician, the researchers found that the duration of the gap did not matter, indicating that a mass extinction very likely did occur.
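Finnegan's test boils down to asking whether last-appearance counts track gap duration. A small sketch of that check, with invented numbers, one series built to follow the record-bias prediction and one built to show a genuine pulse:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical gap durations (Myr) and counts of genera last seen
# just before each gap.
gaps = [1, 2, 4, 8, 16]
bias_counts = [2, 4, 9, 15, 33]   # scales with gap length: record bias
real_counts = [20, 3, 4, 2, 3]    # one spike regardless of gap: real event

print(pearson_r(gaps, bias_counts))  # strong positive (~0.998)
print(pearson_r(gaps, real_counts))  # weakly negative (~-0.5)
```

Finding no correlation in the Late Ordovician data is what let the team conclude the spike in last appearances reflects a real extinction pulse, not a gap artifact.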
Read more at Science Daily
Eggs of Enigmatic Dinosaur Discovered in Patagonia
An Argentine-Swedish research team has reported a 70-million-year-old pocket of fossilized bones and unique eggs of an enigmatic birdlike dinosaur in Patagonia.
"What makes the discovery unique are the two eggs preserved near articulated bones of its hind limb. This is the first time the eggs are found in a close proximity to skeletal remains of an alvarezsaurid dinosaur," says Dr. Martin Kundrát, dinosaur expert from the group of Professor Per Erik Ahlberg at Uppsala University.
The first Argentine-Swedish dinosaur expedition, a collaboration between Fernando Novas, F. Agnolin, and J. Powell of the Museo Argentino de Ciencias Naturales and Martin Kundrát, took place in December 2010.
The dinosaur represents the latest survivor of its kind from Gondwana, the southern landmass of the Mesozoic Era. The creature belongs to one of the most mysterious groups of dinosaurs, the Alvarezsauridae, and, at 2.6 m, it is one of the largest members of the family. It was first discovered by Dr. Powell, but has now been described and named Bonapartenykus ultimus in honor of Dr. José Bonaparte, who in 1991 discovered the first alvarezsaurid in Patagonia.
"This shows that basal alvarezsaurids persisted in South America until Latest Cretaceous times," says Martin Kundrát.
The two eggs found together with the bones during the expedition might have been inside the oviducts of the Bonapartenykus female when the animal perished. On the other hand, numerous eggshell fragments found later show considerable calcite resorption of the inner eggshell layer, which suggests that at least some of the eggs were incubated and contained embryos at an advanced stage of development.
Martin Kundrát analyzed the eggshells and found that they did not belong to any known category of eggshell microstructure-based taxonomy. Hence, a new egg-family, the Arraigadoolithidae, was designated, named after Mr. Alberto Arraigada, the owner of the site where the specimen was discovered. Kundrát also made another discovery:
"During inspection of the shell samples using the electron scanning microscopy I observed unusual fossilized objects inside of the pneumatic canal of the eggshells. It turned out to be the first evidence of fungal contamination of dinosaur eggs," he says.
Read more at Science Daily
The Fastest Sprinter Could Run Faster
With his current world record of 9.58 seconds in the 100-meter dash and a top speed of more than 27 miles per hour, Jamaican sprinter Usain Bolt has already defied many expectations of how fast human legs can go.
Yet, without much effort, Bolt could run even faster, according to new calculations. With a few slight but still-legal boosts from tailwinds, altitude and a better reaction time at the start, argues Cambridge University mathematician John Barrow, Bolt could easily clock in at 9.45.
And while elite athletes will likely run even faster than that some day, no one can say for sure how fast people will eventually go -- or if we’ll ever see a sprinter finally reach the limits of the human body.
“There will be an ultimate limit, but just because there’s a limit mathematically, that doesn’t mean you’ll ever reach it,” said Barrow, author of Mathletics: A Scientist Explains 100 Amazing Things About the World of Sports. “You can draw a curve that’s always increasing, but never goes higher than the particular level where it’s bounded.”
Bolt surprised the running world when he broke the 100m record in the spring of 2008, partly because the top times had been stagnant for years. At 6 feet, 5 inches tall, Bolt also seemed too big to be a sprinter. By 2009, he had lowered the record from 9.74 to 9.58 -- a dramatic drop for such a short distance.
As speculation circulated about how fast Bolt might eventually go, Barrow started doing some basic calculations, focusing on three simple factors that are known to affect sprinting speed. He started with Bolt’s notoriously slow reaction time to the starting gun.
Under official rules, runners are called on false starts if they leave the starting blocks less than 0.1 seconds after the signal sounds. The best starters are consistently off and running after about 0.12 seconds. If Bolt could get his sluggish start time of 0.165 -- the second slowest in the final heat at the Beijing Olympics -- down to 0.12 and still run at his top speed, Barrow said, that alone would lower his record to 9.55.
Adding a maximum allowable tailwind of two meters per second (6.6 feet per second) on top of an improved start time, Barrow calculated, using known relationships between wind, drag and running speed, that the sprinter could lower his record to 9.5.
Finally, Barrow considered what would happen if Bolt ran at an altitude of 1,000 m (3,280 feet), the highest allowable elevation for running records to count. At that height, the density of air is low enough to reduce drag and shave off still more time. If he also started well and had a tailwind, altitude would allow Bolt to run a 9.47.
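Barrow's three-step estimate can be tallied as simple subtraction. The sketch below just encodes the article's quoted milestones as per-factor time savings rather than re-deriving the aerodynamics:

```python
# Usain Bolt's current 100 m world record, in seconds.
record = 9.58

# Time saved by each still-legal boost, taken from the article's
# stepwise figures (9.58 -> 9.55 -> 9.50 -> 9.47):
savings = {
    "better reaction time (0.165 s down to 0.12 s)": 9.58 - 9.55,
    "maximum legal tailwind (2 m/s)":                9.55 - 9.50,
    "altitude of 1,000 m (thinner air, less drag)":  9.50 - 9.47,
}

t = record
for factor, dt in savings.items():
    t -= dt
    print(f"after {factor}: {t:.2f} s")
# ends at 9.47 s, roughly a tenth of a second under the record
```

Note the factors are treated as independent and additive here, which is only an approximation; Barrow's point is that all three are within the rules, so none of the resulting times would carry an asterisk.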
As for actual running technique, studies have shown that the most important factor driving sprinting performance is how hard runners can hit the ground in relation to their body weight, said Peter Weyand, a physiologist and biomechanist at Southern Methodist University in Dallas.
Read more at Discovery News
LHC Slams Protons Together at Record Energy
After achieving record beam energies last month, physicists at the Large Hadron Collider (LHC) have successfully carried out the first collisions in this new regime.
In this game of "the bigger the better," the LHC is now colliding protons at 8 tera-electronvolts (TeV) -- a huge step toward pushing back the frontiers of high-energy physics, one that could culminate in the increasingly likely confirmation of the Higgs boson.
In the April 5 announcement, CERN said, "the LHC shift crew declared 'stable beams' as two 4 TeV proton beams were brought into collision at the LHC's four interaction points ... The collision energy of 8 TeV is a new world record, and increases the machine's discovery potential considerably."
For the last year, the LHC has been running at 3.5 TeV per beam, allowing physicists to familiarize themselves with the machine operating in this previously unattainable energy range. By slamming two counter-rotating beams of protons at 3.5 TeV apiece, collision energies of 7 TeV have been possible.
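For two equal beams colliding head-on, the available collision energy is simply twice the beam energy, which is why 4 TeV + 4 TeV gives 8 TeV. A short sketch of that arithmetic, plus the standard high-energy approximation showing why colliders so dramatically outperform firing one beam at a stationary target (the fixed-target formula assumes beam energy much greater than the proton rest energy):

```python
from math import sqrt

M_PROTON = 0.938  # proton rest energy in GeV (approximate)

def collider_cm_energy(e_beam_gev):
    """Center-of-mass energy for two equal, head-on beams (GeV)."""
    return 2.0 * e_beam_gev

def fixed_target_cm_energy(e_beam_gev, m_target=M_PROTON):
    """Approximate CM energy when one beam hits a stationary proton,
    valid when the beam energy far exceeds the target's rest energy."""
    return sqrt(2.0 * e_beam_gev * m_target)

print(collider_cm_energy(4000))             # 8000 GeV = 8 TeV
print(round(fixed_target_cm_energy(4000)))  # ~87 GeV for the same beam
```

The fixed-target energy grows only with the square root of the beam energy, so colliding two beams is the only practical route to these scales.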
At 7 TeV, the LHC has revealed some tantalizing hints of the Higgs -- a much sought-after subatomic particle that is theorized to endow all matter in the Universe with mass.
By boosting collision energies by 1 TeV, physicists hope to generate more Higgs particles (if they do indeed exist), producing a signal strong enough to leave no ambiguity.
But it's not all just about the Higgs. By recreating the conditions of the Big Bang for the briefest of moments as protons collide, it is hoped that higher collision energies will help us glimpse particles hypothesized to exist beyond the Standard Model of physics. The "Standard Model" Higgs is the last piece of this puzzle, but there are many theories that it cannot account for -- such as why gravity doesn't "fit" with the Standard Model.
"(Supersymmetric particles) would be produced much more copiously at the higher energy. Supersymmetry is a theory in particle physics that goes beyond the current Standard Model, and could account for the dark matter of the Universe," the CERN statement continues.
"Standard Model Higgs particles, if they exist, will also be produced more copiously at 8 TeV than at 7 TeV, but background processes that mimic the Higgs signal will also increase. That means that the full year’s running will still be necessary to convert the tantalizing hints seen in 2011 into a discovery, or to rule out the Standard Model Higgs particle altogether."
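The trade-off CERN describes, more signal but also more background, can be illustrated with a common rough figure of merit for particle searches, the significance S/√B (the numbers below are purely illustrative, not actual LHC event counts):

```python
import math

# Toy illustration of why more collisions still help even though background
# grows too: if both signal S and background B scale up with the number of
# collisions N, the rough significance S / sqrt(B) still grows as sqrt(N).

def significance(signal: float, background: float) -> float:
    """Rough search significance in 'sigmas': S / sqrt(B)."""
    return signal / math.sqrt(background)

base = significance(10.0, 100.0)     # illustrative: 1.0 sigma
doubled = significance(20.0, 200.0)  # double both: ~1.41 sigma

print(base, doubled)
```

This is why a full year of running at 8 TeV is still needed: significance accumulates only with the square root of the data collected.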
Read more at Discovery News
Apr 9, 2012
Search for Sun's Sibling Could Find Life's Cousin
The biggest challenge facing astronomers hunting for a bona fide "Earth-like" exoplanet is where to look. In a galaxy stuffed with hundreds of billions of stars, it's difficult to narrow the search.
Naturally, a key motivational factor behind planet-hunting missions (like NASA's Kepler space telescope) is to hunt for small rocky worlds -- not too dissimilar to Earth -- that may play host to life as we know it.
So, Kepler astronomers have focused their search on sun-like stars in the hope of detecting Earth-sized exoplanets orbiting at a distance similar to the one at which our planet orbits the sun -- a "sweet spot" (or "habitable zone") where the temperature may be just right for liquid water to exist on the surface. In our experience, where there's liquid water, there's life.
Still, even though we look toward "sun-like" stars in the hope that they may host a system of worlds not too dissimilar to our solar system, astronomers are taking a proverbial "shot in the dark."
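The "sweet spot" idea can be sketched with a back-of-the-envelope scaling (this is an illustration of the concept, not Kepler's actual selection criterion): a planet's insolation goes as L/d², so the distance at which a planet receives Earth-like flux scales as the square root of the star's luminosity.

```python
import math

# Rough sketch: the orbital distance (in AU) at which a planet receives the
# same stellar flux Earth gets from the sun scales as sqrt(L / L_sun).
# Real habitable-zone estimates also fold in albedo, greenhouse effects,
# and stellar spectrum -- this is the zeroth-order version only.

def earthlike_flux_distance_au(luminosity_solar: float) -> float:
    """Distance (AU) receiving Earth-equivalent insolation."""
    return math.sqrt(luminosity_solar)

print(earthlike_flux_distance_au(1.0))   # a true solar twin: 1 AU
print(earthlike_flux_distance_au(0.25))  # a dimmer star: zone moves inward
```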
Spreading Germs in the Stellar Creche
In an effort to find sun-like stars that are more sun-like than just their outward appearance, researchers from the University of Turku in Finland are carrying out a search for our sun's siblings.
When the sun was a baby, some 4.5 billion years ago, it was nourished in the same stellar nursery as thousands of other baby stars. After a billion years, the cluster of young stars went their own ways, dispersing throughout interstellar space.
But, like any family, these stars have a lot in common. And like any nursery here on Earth, they may have shared some germs and viruses during their formative years when they were in close proximity. But in this case, "cosmic chicken pox" may have formed the building blocks of life that eventually flourished on Earth. If there's life on Earth, might there be life on the planets that formed around our sun's siblings?
This might sound a little outlandish, but there is a hypothesized mechanism that could have spread the building blocks of life through the stellar nursery -- a kind of "cosmic chicken pox" transmission.
Life's Transmission Mechanism
"The idea is if a planet has life, like Earth, and if you hit it with an asteroid, it will create debris, some of which will escape into space," astronomer Mauri Valtonen, of the University of Turku, told SPACE.com. "And if the debris is big enough, like 1 meter across, it can shield life inside from radiation, and that life can survive inside for millions of years until that debris lands somewhere. If it happens to land on a planet with suitable conditions, life can start there."
Valtonen is describing "panspermia" in its most basic form -- life, as bacteria or even just some strands of DNA, hopping from planet-to-planet after meteorite impacts.
Usually astronomers will point out that panspermia could be used to explain how life may be common on Earth and Mars (if it is discovered). Perhaps meteorites made from Earth or Mars rock carried life throughout the solar system and beyond?
But what if this mechanism spread the earliest form of biology billions of years ago through our sun's cosmic creche? Suddenly we have targets for missions hunting for true sun-like stars, bona fide Earth-like planets and (possibly) Earth Brand™ Life.
Read more at Discovery News
In Pre-1492 Amazon, Farmers Managed Without Fires
Farming without fire in tropical regions, like indigenous populations did in Pre-Columbian times, may be the key to both feeding people and managing land more sustainably.
For hundreds of years before Columbus arrived in Central America, indigenous people converted vast swaths of tropical savannas into agricultural fields with raised beds for growing crops -- all without the use of slash-and-burn or other fire-intensive techniques, which are common today and a major contributor to greenhouse gas emissions.
But soon after 1492, there came a sharp surge in uncontrolled burns throughout the Amazon’s coastal landscapes, found a new study that dug into more than 2,000 years’ worth of soil. Those carbon-emitting burning practices, which continue today, contribute significantly to global warming.
The results -- exactly the opposite of what scientists expected to find -- reveal a surprising picture of pre-Columbian agricultural practices in an often overlooked and endangered landscape. In turn, the findings suggest that the secret to a sustainable future may lie in the past.
“In a time of climate change, we need an alternative way of managing these savannas that is fire-free, and this is a lesson we can learn from the past,” said José Iriarte, an archaeobotanist at the University of Exeter in the United Kingdom. “They were managing the savannas in what we can say today was a sustainable way.”
In forested regions of the Amazon, which have received the bulk of scientists’ attention, studies have shown that burning was common in pre-Columbian times. But the massive collapse of indigenous populations after European diseases arrived allowed the forest to take over, and burning quickly subsided.
Scientists have long assumed that the same pattern occurred in the coastal savannas of Central and South America, which make up 20 percent of lowland areas in the region. To find out for sure, Iriarte and colleagues visited coastal wetlands in French Guiana. There, previous archaeological studies have revealed extensive pre-Columbian agricultural systems, including raised fields, canals and ponds that took advantage of seasonal flooding to irrigate crops.
The researchers began by digging up cores of sediment dating back 2,150 years. It was the perfect environment for sampling because wetland soil is oxygen-deprived, allowing pollen and other identifiable plant materials to survive for centuries without threat from bacteria.
Between about 1,000 and 1,200 years ago, the researchers report today in the journal Proceedings of the National Academy of Sciences, plant remains documented the arrival of raised-field farmers in the area. At the same time, to everyone’s surprise, a near absence of charcoal in those early layers showed that there were very few fires during this time.
Read more at Discovery News
Apr 8, 2012
Copper Chains: Earth's Deep-Seated Hold On Copper Revealed
Earth is clingy when it comes to copper. A new Rice University study recently published in the journal Science finds that nature conspires at scales both large and small -- from the realms of tectonic plates down to molecular bonds -- to keep most of Earth's copper buried dozens of miles below ground.
"Everything throughout history shows us that Earth does not want to give up its copper to the continental crust," said Rice geochemist Cin-Ty Lee, the lead author of the study. "Both the building blocks for continents and the continental crust itself, dating back as much as 3 billion years, are highly depleted in copper."
Finding copper is more than an academic exercise. With global demand for electronics growing rapidly, some studies have estimated the world's demand for copper could exceed supply in as little as six years. The new study could help, because it suggests where undiscovered caches of copper might lie.
But the copper clues were just a happy accident.
"We didn't go into this looking for copper," Lee said. "We were originally interested in how continents form and more specifically in the oxidation state of volcanoes."
Earth scientists have long debated whether an oxygen-rich atmosphere might be required for continent formation. The idea stems from the fact that Earth may not have had many continents for at least the first billion years of its existence and that Earth's continents may have begun forming around the time that oxygen became a significant component of the atmosphere.
In their search for answers, Lee and colleagues set out to examine Earth's arc magmas -- the molten building blocks for continents. Arc magmas get their start deep in the planet in areas called subduction zones, where one of Earth's tectonic plates slides beneath another. When plates subduct, two things happen. First, they bring oxidized crust and sediments from Earth's surface into the mantle. Second, the subducting plate drives a return flow of hot mantle upwards from Earth's deep interior. During this return flow, the hot mantle not only melts itself but may also cause melting of the recycled sediments. Arc magmas are thought to form under these conditions, so if oxygen were required for continental crust formation, it would most likely come from these recycled segments.
"If oxidized materials are necessary for generating such melts, we should see evidence of it all the way from where the arc magmas form to the point where the new continent-building material is released from arc volcanoes," Lee said.
Lee and colleagues examined xenoliths, rocks that formed deep inside Earth and were carried up to the surface in volcanic eruptions. Specifically, they studied garnet pyroxenite xenoliths thought to represent the first crystallized products of arc magmas from the deep roots of an arc some 50 kilometers below Earth's surface. Rather than finding evidence of oxidation, they found sulfides -- minerals that contain reduced forms of sulfur bonded to metals like copper, nickel and iron. If conditions were highly oxidizing, Lee said, these sulfide minerals would be destabilized and allow these elements, particularly copper, to bond with oxygen.
Because sulfides are also heavy and dense, they tend to sink and get left behind in the deep parts of arc systems, like a blob of dense material that stays at the bottom of a lava lamp while less dense material rises to the top.
"This explains why copper deposits, in general, are so rare," Lee said. "The Earth wants to hold it deep and not give it up."
Lee said deciding where to look for undiscovered copper deposits requires an understanding of the conditions needed to overcome the forces that conspire to keep it deep inside the planet.
"As a continental arc matures, the copper-rich sulfides are trapped deep and accumulate," he said. "But if the continental arc grows thicker over time, the accumulated copper-bearing sulfides are driven to deeper depths where the higher temperatures can re-melt these copper-rich dregs, releasing them to rejoin arc magmas."
These conditions were met in the Andes Mountains and in western North America. He said other potential sources of undiscovered copper include Siberia, northern China, Mongolia and parts of Australia.
Lee noted that a high school intern played a role in the research paper. Daphne Jin, now a freshman at the University of Chicago, made her contribution to the research as a high school intern from Clements High School in the Houston suburb of Sugar Land.
"The paper really wouldn't have been as broad without Daphne's contribution," Lee said. "I originally struggled with an assignment for her because I didn't and still don't have large projects where a student can just fit in. I try to make sure every student has a chance to do something new, but often I just run out of ideas."
Lee eventually asked Jin to compile information from published studies about the average concentrations of all the first-row transition elements in the periodic table in various samples of continental crust and mantle collected the world over.
"She came back and showed me the results, and we could see that the average continental crust itself, which has been built over 3 billion years of Earth's history in Africa, Siberia, North America, South America, etc., was all depleted in copper," Lee said. "Up to that point we'd been looking at the building blocks of continents, but this showed us that the continents themselves followed the same pattern. It was all internally consistent."
Read more at Science Daily
Most Precise Measurement of Scale of the Universe
Physicists on the Baryon Oscillation Spectroscopic Survey (BOSS) have announced the first results from their collaboration, revealing the most precise measurements ever made of the large-scale structure of the universe between five and seven billion years ago. They achieved this by observing the primordial sound waves that propagated through the cosmic medium a mere 30,000 years after the Big Bang.
And so far, the data support the theory that our universe is flat, comprising roughly a quarter cold dark matter and four percent ordinary matter, with the rest made up of a mysterious force dubbed "dark energy."
A hundred years ago scientists believed the universe was steady and unchanging. Einstein invented the cosmological constant to keep the fabric of space-time static after his own equations for general relativity wouldn't allow the cosmos to remain unchanging, as expected in a steady-state universe.
Soon after, astronomer Edwin Hubble discovered the universe was actually expanding, consistent with Einstein's original general relativity theory. Einstein then removed his cosmological constant, describing his failure to predict an expanding universe from theory before it was proven by observation as his biggest blunder.
In 1998, astronomers studying distant exploding stars called Type Ia supernovae discovered that not only was the universe expanding, but the rate of expansion was accelerating due to some unknown force, or dark energy. This bore a striking resemblance to Einstein's cosmological constant. Either that, or our theory of gravity is incomplete. Answering this question is one of the foremost challenges in 21st century cosmology.
This new measurement from BOSS is significant because that time frame -- five to seven billion years ago -- is the era when dark energy "turned on." The BOSS findings will help physicists figure out the exact nature of whatever is causing our universe to accelerate in its expansion.
But in order to do that, they must first gain a more complete understanding of the history of that expansion.
The discovery that led to the theory of dark energy relied on studying the red shifts of bright light from supernovae. BOSS, in contrast, looks at something called baryonic acoustic oscillation (BAO).
This phenomenon is the result of pressure waves (sound, or acoustic waves) propagating through the early universe in its earliest hot phase, when everything was just one big primordial soup.
Those sound waves created pockets where the density differed in regular intervals or periods, a "wiggle" pattern indicative of oscillation, or vibration. Then the universe cooled sufficiently for ordinary matter and light to go their separate ways, the former condensing into hydrogen atoms. We can still see signs of those variations in temperature in the cosmic microwave background (CMB), thereby giving scientists a basic scale for BAO.
BOSS is designed to measure those oscillations as a means of determining how far away the most distant galaxies really are, by looking at the angles of those peaks where galaxies are most densely clustered. Within the vast network of cosmic structure, those density peaks repeat with a good degree of regularity, making them an excellent "standard ruler" to measure the geometry of the universe.
Measuring the angle between pairs of galaxies will tell scientists how distant they are -- the narrower the angle, the greater the distance. And once you know the distance, you can deduce an object's age, thanks to the telltale redshift of light as it travels across the universe, stretching proportionally in such a way as to give physicists a peek at how the universe expanded since the light left its source.
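The standard-ruler logic above can be sketched numerically (the 150-megaparsec figure is an assumed round number for the comoving BAO scale, and the small-angle formula ignores cosmological subtleties a real analysis must handle):

```python
import math

# Sketch of the "standard ruler": if BAO density peaks repeat on a known
# comoving scale, the angle we observe between a pair of galaxies separated
# by that scale shrinks as their distance grows: theta ~ ruler / distance.

BAO_SCALE_MPC = 150.0  # assumed approximate comoving BAO scale

def bao_angular_size_deg(distance_mpc: float) -> float:
    """Small-angle estimate of the BAO scale's apparent size on the sky."""
    return math.degrees(BAO_SCALE_MPC / distance_mpc)

# The narrower the angle, the greater the distance:
print(bao_angular_size_deg(2000.0))  # nearer structure -> larger angle
print(bao_angular_size_deg(4000.0))  # farther structure -> smaller angle
```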
Redshifts aren't uniform, however. This is where BOSS is most helpful, since it can statistically analyze the redshifts of literally hundreds of thousands of galaxies in its dataset. With that large a sample, the variations in redshifts can be taken into account, while still achieving a precise measurement of distance.
Here's what the University of Portsmouth's Will Percival told BBC News about BOSS's first results:
"Because you can trace this pattern all the way through the Universe, it tells you a lot about its content. If it had a different content - it had more matter, or it had less matter, or it had been expanding at a different rate - then you'd see a different map of the galaxies. So, the fundamental observation is this map.
"What we find is everything is very consistent with Einstein's theory of general relativity, coupled with the cosmological constant that he put into his equations. He put it in originally to make the Universe static, and then took it out. But if we put the constant in with the opposite sign, we can get acceleration. And if we do that, we find equations that are perfectly consistent with what we're seeing."
Eventually BOSS will have cataloged over a million galaxies. These are just the initial results, based on less than a quarter of the expected data that will be amassed by the time the survey ends in 2014, at which point the European Space Agency's Euclid mission will take over, launching in 2019.
Read more at Discovery News
And so far, the data supports the theory that our universe as flat, comprised of roughly a quarter cold dark matter, and four percent ordinary matter, with the rest made up of a mysterious force dubbed "dark energy."
A hundred years ago scientists believed the universe was steady and unchanging. Einstein invented the cosmological constant to expand the fabric of space-time after his own equations for general relativity wouldn't allow for the cosmos to remain static as expected in a steady state universe.
Soon after, astronomer Edwin Hubble discovered the universe was actually expanding, consistent with Einstein's original general relativity theory. Einstein then removed his cosmological constant describing his failure to predict an expanding universe in theory before it was proven by observation, as his biggest blunder.
In 1998, astronomers studying distant exploding stars called a Type 1A supernovae discovered that not only was the universe expanding, but that the rate of expansion was accelerating due to some type of unknown force or dark energy. This bore a striking resemblance to Einstein's cosmological constant. Either that, or our theory of gravity is incomplete. Answering this question is one of the foremost challenges in 21st century cosmology.
This new measurement from BOSS is significant because that time frame -- five to seven billion years ago -- is the era when dark energy "turned on." The BOSS findings will help physicists figure out the exact nature of whatever is causing our universe to accelerate in its expansion.
But in order to do that, they must first gain a more complete understanding of the history of that expansion.
The discovery that led to the theory of dark energy relied on studying the redshifts of bright light from supernovae. BOSS, in contrast, looks at something called baryon acoustic oscillations (BAO).
This phenomenon is the result of pressure waves (sound, or acoustic waves) propagating through the early universe in its earliest hot phase, when everything was just one big primordial soup.
Those sound waves created pockets where the density differed in regular intervals or periods, a "wiggle" pattern indicative of oscillation, or vibration. Then the universe cooled sufficiently for ordinary matter and light to go their separate ways, the former condensing into hydrogen atoms. We can still see signs of those variations in temperature in the cosmic microwave background (CMB), thereby giving scientists a basic scale for BAO.
BOSS is designed to measure those oscillations as a means of determining how far away the most distant galaxies really are, by looking at the angles of those peaks where galaxies are most densely clustered. Within the vast network of cosmic structure, those density peaks repeat with a good degree of regularity, making them an excellent "standard ruler" to measure the geometry of the universe.
Measuring the angle between pairs of galaxies tells scientists how distant they are -- the narrower the angle, the greater the distance. And once you know the distance, you can deduce an object's age, thanks to the telltale redshift of light as it travels across the universe: the light stretches in proportion to the expansion, giving physicists a record of how the universe has expanded since the light left its source.
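The standard-ruler logic can be sketched with a toy calculation (this is only an illustration, not the survey's actual analysis; the ~150 megaparsec figure is the commonly quoted comoving BAO scale):

```python
import math

# The BAO "standard ruler" spans roughly 150 Mpc (comoving). A ruler of
# fixed size seen at comoving distance D subtends, in the small-angle
# approximation, an angle theta ~ r_s / D: narrower angle, greater distance.

R_SOUND_HORIZON_MPC = 150.0  # approximate BAO scale in comoving megaparsecs


def bao_angle_deg(comoving_distance_mpc):
    """Small-angle estimate of the BAO feature's angular size, in degrees."""
    return math.degrees(R_SOUND_HORIZON_MPC / comoving_distance_mpc)


# The farther the galaxy pairs, the smaller the angle between them:
for d in (1500.0, 3000.0, 6000.0):
    print(f"D = {d:6.0f} Mpc  ->  theta ~ {bao_angle_deg(d):.2f} deg")
```

Inverting this relation is the point: observe the angle at which galaxy clustering peaks, and the known ruler length gives you the distance.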
Redshifts aren't uniform, however. This is where BOSS is most helpful, since it can statistically analyze the redshifts of literally hundreds of thousands of galaxies in its dataset. With that large a sample, the variations in redshifts can be taken into account, while still achieving a precise measurement of distance.
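A toy sketch shows why sample size helps (all numbers here are invented for illustration): if each galaxy's measured redshift scatters around the value set by cosmic expansion, the scatter averages out as the sample grows, roughly as one over the square root of the number of galaxies.

```python
import random
import statistics

random.seed(42)

TRUE_REDSHIFT = 0.57  # a typical BOSS-era galaxy redshift, for illustration
SCATTER = 0.002       # assumed per-galaxy noise (e.g. peculiar velocity)


def mean_redshift(n_galaxies):
    """Average of n noisy redshift measurements drawn around the true value."""
    draws = [random.gauss(TRUE_REDSHIFT, SCATTER) for _ in range(n_galaxies)]
    return statistics.mean(draws)


# With more galaxies, the sample mean hugs the true redshift more tightly:
for n in (10, 1_000, 100_000):
    print(f"N = {n:7d}  mean z = {mean_redshift(n):.5f}")
```

The same averaging principle, applied to hundreds of thousands of real galaxies, is what lets BOSS turn individually noisy redshifts into a precise distance measurement.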
Here's what the University of Portsmouth's Will Percival told BBC News about BOSS's first results:
"Because you can trace this pattern all the way through the Universe, it tells you a lot about its content. If it had a different content - it had more matter, or it had less matter, or it had been expanding at a different rate - then you'd see a different map of the galaxies. So, the fundamental observation is this map.
"What we find is everything is very consistent with Einstein's theory of general relativity, coupled with the cosmological constant that he put into his equations. He put it in originally to make the Universe static, and then took it out. But if we put constant in with the opposite sign, we can get acceleration. And if we do that, we find equations that are perfectly consistent with what we're seeing."
Eventually BOSS will have cataloged over a million galaxies. These are just the initial results, based on less than a quarter of the data expected by the time the survey ends in 2014. After that, the European Space Agency's Euclid mission, due to launch in 2019, will take over.
Read more at Discovery News