Dec 30, 2011
The 2011 Sceptical Award
It's that time of the year again: time to look back at what really happened over the past twelve months, and time for a new Sceptical Award. Those of you who have followed this blog might remember that last year's winner was the Swedish sceptical podcast, Skeptikerpodden.
There is only going to be one award this year too, like last year. This year I also want to let Arabic-speaking readers know that there is a new sceptical podcast in Arabic named Arab Atheist Broadcasting (أذاعة الملحدين العرب), which you can also search for on YouTube.
Now on to the actual award. The motivation for this year's award is as follows:
Through the years they have made a name for themselves with their sceptical podcast. Many people have followed in their footsteps and started their own sceptical podcasts. A lot of people look up to them and listen every week to hear what's new in the world of science and scepticism, and what they have to say about pseudoscience. I'm very proud to announce The 2011 A Magical Journey Sceptical Award, and it goes to... The Skeptics' Guide To The Universe!
I, Danny Boston at A Magical Journey, congratulate the winners of this year's Sceptical Award. You deserve it!
Weather Deserves Medal for Clean Air During 2008 Olympics
New research suggests that China's impressive feat of cutting Beijing's pollution by up to 50 percent for the 2008 Summer Olympics had some help from Mother Nature. Rain just before the games began and wind during them likely contributed about half of the effort needed to clean up the skies, scientists found. The results also suggest emission controls need to be more widely implemented than in 2008 if pollution levels are to be reduced permanently.
Reporting their findings December 12 in the journal Atmospheric Chemistry and Physics, co-author and atmospheric chemist Xiaohong Liu of the Department of Energy's Pacific Northwest National Laboratory said, "In addition to the emission controls, the weather was very important in reducing pollution. You can see the rain washing pollution out of the sky and wind transporting it away from the area."
Liu and colleague Chun Zhao at PNNL and at the Chinese Academy of Sciences in Beijing took advantage of the emission controls China put into play before and during the August Olympics to study the relative contributions of both planning and nature. Chinese officials restricted driving, temporarily halted pollution-producing manufacturing and power plants, and even relocated heavy polluting industries in preparation for the games.
To find out if the controls worked as well as people hoped, the researchers modeled the pollution and weather conditions in the area before, during and after the Olympics. They compared the model's results with measured amounts of pollution, which matched well.
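The paper reports its own comparison statistics; as a rough illustration of how a model-versus-measurement match can be quantified, here is a minimal sketch using invented daily pollution values (none of these numbers come from the study):

```python
# Illustrative sketch only: invented daily PM10 values, not data from the study.
import numpy as np

modeled  = np.array([110.0, 95.0, 80.0, 60.0, 55.0, 70.0, 65.0])  # ug/m^3, assumed
measured = np.array([105.0, 90.0, 85.0, 58.0, 60.0, 72.0, 60.0])  # ug/m^3, assumed

rmse = np.sqrt(np.mean((modeled - measured) ** 2))   # typical size of the mismatch
corr = np.corrcoef(modeled, measured)[0, 1]          # do the day-to-day trends track?

print(f"RMSE: {rmse:.1f} ug/m^3, correlation: {corr:.2f}")
```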
Adding up the sources of pollution and the sinks that cleared it out, the team found that emissions dropped by up to half in the week just before and during the Olympics. And while some pollution got washed out by rain or settled out of the sky, most of it was blown away by wind.
"They got very lucky. There were strong storms right before the Olympics," said Liu.
In addition to rain, wind also helped. Beijing is bordered on the south by urban areas and on the north by mountains, so wind blowing north would carry more pollution into the city. Examining the wind direction, the researchers saw that it generally blew south during the Olympic period.
"The area we looked at is about 50 miles south. This suggests that emission controls need to be on a regional scale rather than just a local scale," said Liu.
The importance of regional controls meshes well with previous research on 2008 Olympics air quality that focused on nitrogen-based pollutants.
Next, the researchers will examine the effect of pollution on other weather events and on climate change in China. Pollutants are very small particles, and because there are so many of them in China, some researchers suspect they might be causing fog to form rather than rain, Liu said.
Read more at Science Daily
Glowing Scorpion Exoskeletons May Be Giant Eyes
Scorpion bodies are studded with eyes, sometimes as many as twelve — and scientists may have found one more.
A scorpion’s entire exoskeleton may act as one giant light receptor, a full-body proto-eye that detects shadows cast by moonlight and starlight.
That’s still just a hypothesis, but it would help explain why they glow so brilliantly under ultraviolet light.
“It might be a sort of alarm that’s always going off until the scorpion finds shelter,” said biologist Douglas Gaffin of the University of Oklahoma. “Shade might turn down the alarm on that part of their body, so they preferentially move in that direction.”
No matter their color in daylight, be it jet-black or translucent, ultraviolet light makes pigments embedded in their exoskeletons emit photons.
That property is called fluorescence, and nobody knows quite why scorpions possess it. Suggested explanations include mating signals or evolutionary leftovers of natural sunscreen needed before they became nocturnal. Whatever the case, 430 million-year-old fossils of scorpion relatives called eurypterids suggest their fluorescence has been around for a very long time.
Gaffin, leader of a study published Dec. 19 in Animal Behavior, noticed during scorpion collection expeditions that one desert grassland species, called Paruroctonus utahensis, always seemed to scurry under something, even in total darkness.
“You eventually wonder, ‘How do they find that one blade of grass and stay under it?’” he said.
Gaffin wasn’t the first to wonder if fluorescence played a part, perhaps by converting ultraviolet sunlight and moonlight into a color visible to scorpion eyes, which are attuned to greenish wavelengths.
One researcher bleached fluorescent pigments from scorpions, covered their eyes and showed they could no longer discern shelter from open space. Another study showed nerves in their tails fired when green light (the same color of fluorescence) was shined on the body part.
Taking the research a step further, Gaffin and his colleagues recorded the behavior of more than 100 scorpions under UV, green light and longer wavelengths their eyes couldn’t see. The researchers completely blocked some scorpions’ eyes with foil to determine whether exoskeletons alone could “see” anything.
They found that eyes-blocked scorpions moved just as erratically under UV light as their unblinkered brethren.
“Maybe they’re collecting stray UV light, maybe starlight, and pigments turn it to green, and that’s what their nervous system is picking up on,” Gaffin said. “How do they do this? I don’t know.”
Read more at Wired Science
Where Would Earth-like Planets Find Water?
Throughout 2011 there was a string of breathless news stories about astronomers finding extrasolar planets in the habitable zones surrounding their stars.
This is the "Goldilocks Zone" where temperatures are just right for water to remain in liquid form and presumably nurture life as we know it.
Last week, NASA announced the discovery of the first two Earth-sized planets orbiting a sun-like star. That followed an announcement from the Kepler space observatory two weeks before that scientists had discovered a planet roughly twice the size of Earth orbiting inside its star's habitable zone.
SETI astronomers are firing up their Allen Telescope Array to check these worlds for signs of intelligent life.
But using the term Earth-like is a stretch at best, and misleading at worst.
We don't have a clue about the physical nature or processes on these worlds any more than an air traffic controller's radar blip tells him what meals are being served on a commercial flight.
Saying that liquid water could exist is fine, but implying that it does exist, as the phrase "Earth-like planet" does, is very presumptuous. Even press-release artistic illustrations "lead the witness" by showing idyllic water-drenched worlds.
The bottom line is that we don't know how Earth got tanked-up with its water supply. So how might we begin to guess what's happening on worlds thousands of light-years away?
"If we need exotic mechanisms to get water onto Earth, then maybe it suggests life is not prevalent in these exoplanetary systems," astrobiologist Karen Meech of the University of Hawaii recently told astronomers in a colloquium at the Space Telescope Science Institute.
The oceans account for merely one-quarter of one percent of Earth's mass. Another one-tenth of a percent may be in Earth's mantle. But if we could probe deeper, down into the core, Earth could conceivably have 50 oceans' worth of water locked away from the days of our planet's formation. (This is somewhat bemusing considering that Jules Verne wrote about a great subterranean ocean in his 1864 novel "A Journey to the Center of the Earth.")
With water potentially so locked away, "we may never know how much water Earth really has," says Meech.
This complicates several competing theories for how Earth got its water supply in the first place. We know water is everywhere in the solar system, especially among the planets and moons of the outer solar system. They lie beyond the "frost line" (roughly the distance of the asteroid belt) where water can remain a solid. By comparison the baked rocky planets Mercury and Venus seem bone dry, and Mars looks arid at best.
From the geologic record we do know that oceans were here on Earth just a few hundred million years after our planet's formation 4.6 billion years ago.
The Spitzer and Herschel space telescope observations of the young star TW Hydrae (seen in the Hubble Space Telescope infrared picture below) show that its protoplanetary disk contains enough water molecules to make 6,000 oceans (should we rename it TW Hydro?). The star also has a so-called snow line beyond a range of a few hundred million miles, the same as in our own solar system.
Dust grains in the sun's protostellar nebula were likely porous and could have captured water molecules for the newly forming Earth. But given the violent birth of Earth through accretion and differentiation, could they have remained intact?
Another possibility is that the young Earth manufactured its own water. The early Earth was so hot that it had an ocean of molten magma. The oxygen in that magma could have combined with hydrogen in the protostellar gas envelope before it was dissipated by the glare of the newborn sun.
If water was instead imported by comets and asteroids, Meech estimates it would take 20 million medium-sized comets to fill Earth's oceans, but she thinks that only 1/50th of the oceans' volume came from comets.
Recent computer simulations show that, dynamically, all hell would have broken loose in the solar system if the outer planets migrated in their orbits -- a phenomenon commonly seen in exoplanetary systems. Our young world would have been pelted with water-bearing asteroids thrown into Earth-crossing elliptical orbits.
This would explain the late heavy bombardment at roughly 4 billion years ago as recorded on the moon and other solar system bodies. On the other hand, was water transported to the early Earth by a class of objects that no longer exist? And, did the water appear late, early, or in sporadic episodes in Earth's formative years?
Read more at Discovery News
This is the "Goldilocks Zone" where temperatures are just right for water to remain in liquid form and presumably nurture life as we know it.
Last week, NASA announced the discovery of the first two Earth-sized planets orbiting a sun-like star. That followed an announcement from the Kepler space observatory two weeks before that scientists had discovered a planet roughly twice the size of Earth orbiting inside its star's habitable zone.
SETI astronomers are firing up their Allen Array radio telescope to check these worlds for signs of intelligent life.
But using the term Earth-like is a stretch at best, and misleading at worst.
We don't have a clue about the physical nature or processes on these worlds any more than an air traffic controller's radar blip tells him what meals are being served on a commercial flight.
Saying that liquid water could exist is OK, but to imply it does exist in the phrase "Earth-like planet," is very presumptive. Even press release artistic illustrations "lead the witness" by showing idyllic water-drenched worlds.
The bottom line is that we don't know how Earth got tanked-up with its water supply. So how might we begin to guess what's happening on worlds thousands of light-years away?
"If we need exotic mechanisms to get water onto Earth, then maybe it suggests life is not prevalent in these exoplanetary systems," astrobiologist Karen Meech of the University of Hawaii recently told astronomers in a colloquium at the Space Telescope Science Institute.
The oceans account for merely one-quarter of one percent of Earth's mass. Another one-tenth of a percent may be in Earth's mantle. But if we could probe deeper, down into the core, Earth could conceivably have 50 oceans worth of water locked away from the days of our planet's formation. (This is somewhat bemusing considering that Jules Verne wrote about a great subterranean ocean in the 1864 "A Journey to the Center of the Earth.")
With water potentially so locked away, "we may never know how much water Earth really has," says Meech.
This complicates several competing theories for how Earth got its water supply in the first place. We know water is everywhere in the solar system, especially among the planets and moons of the outer solar system. They lie beyond the "frost line" (roughly the distance of the asteroid belt) where water can remain a solid. By comparison the baked rocky planets Mercury and Venus seem bone dry, and Mars looks arid at best.
From the geologic record we do know that oceans were here on Earth just a few hundred million years after our planet's formation 4.6 billion years ago.
The Spitzer and Herschel space telescope observations of the young star TW Hydrae (seen the Hubble Space Telescope infrared picture below) shows that its protoplanetary disk contains enough water molecules to make 6,000 oceans (should we rename it TW Hydro?). The star also has a so-called snowline beyond a range of a few hundred million miles, the same as with our own solar system.
Dust grains in the sun's protostellar nebula were likely porous and could have captured water molecules for the newly forming Earth. But given the violent birth of Earth though accretion and differentiation, could they have remained intact?
Another possibility is that the young Earth manufactured its own water. The early Earth was so hot it had an ocean of molten magma. The oxygen in that magma could have combined with hydrogen in protostellar gas envelope, before it was dissipated away by the glare of the newborn sun.
If water was instead imported from comets and asteroids, Meech estimates would take 20 million medium sized comets to fill Earth’s oceans. but she thinks that only 1/50th the ocean’s volume came from comets.
Recent computer simulations show that, dynamically, all hell broke loose in the solar system if the outer planets migrated in their orbits -- a phenomenon commonly seen in exoplanetary systems. Our young world would have been pelted with water-bearing asteroids that were thrown into Earth-crossing elliptical orbits.
This would explain the late heavy bombardment at roughly 4 billion years ago as recorded on the moon and other solar system bodies. On the other hand, was water transported to the early Earth by a class of objects that no longer exist? And, did the water appear late, early, or in sporadic episodes in Earth's formative years?
Read more at Discovery News
Dec 29, 2011
The Science of Champagne
As the minutes tick toward midnight on Saturday and you are running out of conversation topics, why not bust out some trivia about the science of champagne to impress your friends?
They may already know about French law, which decrees that grapes must be grown in the region of Champagne in order for sparkling wine to qualify as true champagne. But your companions might not know about Henry’s Law, explains a New Year’s themed video produced by the American Chemical Society.
This law of physics states that the pressure of a gas above a solution is proportional to the concentration of the gas within the solution. For champagne, carbon dioxide is the gas that forms those delightful bubbles. And, in an unopened bottle of champagne, there is equilibrium between the CO2 inside the liquid and the gas in the spaces of the cork.
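In symbols, Henry's Law says the CO2 partial pressure above the wine is proportional to the dissolved CO2 concentration, p = kH·c. Here is a minimal sketch of that relationship, using rough textbook-order numbers; the constant and the concentration are assumptions, not figures from the ACS video:

```python
# Minimal Henry's-Law sketch with rough, assumed numbers (not from the ACS video).
# p = kH * c : the CO2 pressure above the wine is proportional to the dissolved CO2.
kH_co2 = 29.4   # atm per (mol/L), approximate Henry's constant for CO2 in water at 25 C
c_co2  = 0.20   # mol/L of dissolved CO2, an assumed champagne-like value

p_headspace = kH_co2 * c_co2
print(f"Equilibrium CO2 pressure in the sealed bottle: ~{p_headspace:.1f} atm")
# In a sealed bottle this headspace pressure and the dissolved CO2 sit in equilibrium.
```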
Popping the cork disturbs this equilibrium, which is only regained as the CO2 bubbles out. To get the most pleasure out of your drink, make sure to pour at an angle, which preserves up to twice as much CO2 as pouring straight into the middle of the glass, found a 2010 paper in the Journal of Agricultural and Food Chemistry.
“As the bubbles ascend the length of the glass in tiny trains,” the video explains, “they drag along molecules of flavor and aroma which explode out of the surface, tickling the nose and stimulating the senses.”
Making champagne involves two fermentations that must be done just right to ensure the correct concentration of bubbles in the final product. During the first fermentation, just as for any other kind of wine, yeast eats up sugar molecules in grape juice and releases CO2 and ethanol. The second fermentation traps CO2 inside the liquid.
Read more at Discovery News
'Yeti Finger' Mystery Solved
A blackened, curled, oversized finger long claimed to belong to a yeti has been identified as human after all.
Featuring a long nail, the mummified relic -- 3.5 inches long and almost an inch thick at its widest part -- has languished for decades in the Royal College of Surgeons' Hunterian Museum in London.
The specimen caught the interest of scientists in 2008, when curators catalogued a collection bequeathed to the museum by primatologist William Charles Osman Hill. Among Hill's assemblage of items relating to his interest in cryptozoology (the study of animals not proved to exist) was a box labelled simply "Yeti's finger."
The notes in the box revealed that the digit was taken from the hand of a yeti in the Pangboche temple in Nepal by mountain climber Peter Byrne.
"Mr Byrne is now 85, and living in the United States, I discovered," said Matthew Hill, the BBC journalist who last year was granted permission to research and produce a documentary on the mysterious finger.
A member of a 1958 expedition sent to the Himalayas to look for evidence of the legendary creature, Byrne camped at the Pangboche temple and learned of a Yeti hand preserved there for many years.
"It looked like a large human hand. It was covered with crusted black, broken skin. It was very oily from the candles and the oil lamps in the temple. The fingers were hooked and curled," Byrne told the BBC reporter.
A year later, Byrne returned to the monastery, and struck a deal with the monks about removing just one finger.
According to Byrne, the alleged yeti's digit was replaced with a human finger provided by professor Osman Hill, who got it from a severed hand belonging to the Hunterian Museum.
The relic was smuggled out of Nepal with the help of Hollywood movie star James Stewart, who was on holiday in Calcutta with his wife Gloria.
Hidden in Gloria's lingerie case, the finger finally reached professor Hill in London.
The scientist identified it as belonging to an early hominid.
But DNA analysis at the Zoological Society of Scotland in Edinburgh proved that Hill was wrong.
"We found human DNA," the zoo's genetic expert Rob Ogden told the BBC.
Read more at Discovery News
Detecting Light Echoes from Ancient Star Eruption
During the mid-1800s, the well-known star Eta Carinae underwent an enormous eruption, becoming, for a time, the second-brightest star in the sky.
Although 19th-century astronomers did not yet have the technology to study one of the largest eruptions in recent history in depth, astronomers from the Space Telescope Science Institute recently discovered that light echoes from the event are just now reaching us.
This discovery allows astronomers to use modern instruments to study Eta Carinae as it was between 1838 and 1858 when it underwent its Great Eruption.
Light echoes have been made famous in recent years by the dramatic example of V838 Monocerotis. While V838 Mon looks like an expanding shell of gas, what is actually depicted is light reflecting off shells of gas and dust that were thrown off earlier in the star's life.
The extra distance the light had to travel to strike the shell, before being reflected towards observers on Earth, means that the light arrives later. In the case of Eta Carinae, nearly 170 years later!
The reflected light has its properties changed by the motion of the material off which it reflects. In particular, the light shows a notable blueshift, telling astronomers that the material itself is traveling 210 km/sec.
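That speed follows from the ordinary non-relativistic Doppler relation, v ≈ c·Δλ/λ. The sketch below shows the size of wavelength shift a 210 km/sec approach speed implies; the rest wavelength is an arbitrary example line, not the one the astronomers actually measured:

```python
# Doppler sketch: what wavelength shift corresponds to material approaching at 210 km/sec?
c_kms = 299_792.458      # speed of light in km/s
lambda_rest = 656.3      # nm; H-alpha, chosen purely as an example rest wavelength

v = 210.0                                # km/s, the reported speed of the ejecta
delta_lambda = lambda_rest * v / c_kms   # non-relativistic Doppler shift
print(f"Blueshift of about {delta_lambda:.2f} nm on a {lambda_rest} nm line")
```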
This observation fits with theoretical predictions of eruptions similar to the type Eta Carinae is thought to have undergone. However, the light echo has also highlighted some discrepancies between expectation and observation.
Typically, Eta Carinae's eruption is classified as a "supernova impostor." This title is fitting since the eruptions create a large change in the overall brightness.
However, although these events may release 10 percent of the total energy of a typical supernova or more, the star remains intact.
The main model to explain such eruptions is that a sudden increase in the star's energy output causes some of the outer layers to be blown off in an opaque wind. This shell of material is so thick that it gives a large increase in the effective surface area from which light is emitted, thereby increasing the overall brightness.
For this to happen, models predict that the temperature of the star prior to the eruption needs to be at least 7,000 K. Analyzing the reflected light from the eruption places the temperature of Eta Carinae at the time of the eruption at a much lower 5,000 K.
This would suggest that the favored model for such events is incorrect and that another model, involving an energetic blast wave (a mini-supernova), may be the true culprit, at least in Eta Carinae's case.
Yet this observation is somewhat at odds with observations made in the years following the eruption.
As spectrography came into use, astronomers in 1870 visually noted emission lines in the star's spectrum, which are more typical of hotter stars.
Read more at Discovery News
Probes May Find Remnants of Moon's Lost Sibling
Two identical NASA space probes are due to arrive at the moon this weekend to learn what is inside Earth's companion and how it formed.
Among the most interesting questions scientists will attempt to answer is whether our moon holds the wrecked remains of a lost sibling.
Evidence of the crash, if it occurred, should be buried inside the moon, in the form of remnant radioactive materials, like uranium and thorium, which would have been heated in the smash-up.
According to a recently published paper, scientists suspect a second moon once circled Earth in the same orbit and at roughly the same speed as our moon. It eventually bumped into its companion, but instead of causing an impact crater, the second moon stuck and made a mountain. That feature today would be the lunar highlands located on the side of the moon that permanently faces away from Earth.
"One prediction of this model is that the whole exterior of the moon was once molten, and it started to cool off -- actually cooled from the outside in -- so you were left with a molten channel in the base of the moon's crust," said planetary scientist Maria Zuber, with the Massachusetts Institute of Technology.
Simulations show that when the second moon hit our moon the molten material was pushed around to the near side, traces of which should remain today.
"We're looking for layering in the lower crust," said Zuber, who is the lead researcher on NASA's Gravity Gravity Recovery and Interior Laboratory, or GRAIL, mission.
Flying in formation 34 miles above the lunar surface, the two GRAIL spacecraft will map the moon's gravity by tracking changes in the distance between them down to fractions of a micron -- a millionth of a meter, roughly a hundredth the width of a human hair.
Scientists can use the precise measurements to model the moon's interior, a key piece of data missing despite more than 100 previous missions to the moon, including six human excursions by NASA astronauts under the 1969-1972 Apollo program.
"We believe the moon formed from the impact of a Mars-sized object with Earth, but we understand little really of how this formation happened and how it cooled off after the violent event," Zuber said.
"One fundamental thing that we don't know about the moon -- shockingly after all these missions that have gone to the moon -- is why the near side of the moon is different than the far side," she added.
Read more at Discovery News
Dec 28, 2011
Time for a Change? Overhauling the Calendar
Researchers at The Johns Hopkins University have discovered a way to make time stand still -- at least when it comes to the yearly calendar.
Using computer programs and mathematical formulas, Richard Conn Henry, an astrophysicist in the Krieger School of Arts and Sciences, and Steve H. Hanke, an applied economist in the Whiting School of Engineering, have created a new calendar in which each new 12-month period is identical to the one that came before, and remains that way from one year to the next in perpetuity.
Under the Hanke-Henry Permanent Calendar, for instance, if Christmas fell on a Sunday in 2012 (and it would), it would also fall on a Sunday in 2013, 2014 and beyond. In addition, under the new calendar, the rhyme "30 days hath September, April, June and November" would no longer apply, because September would have 31 days, as would June, March and December. All the rest would have 30. (Try creating a rhyme using that.)
"Our plan offers a stable calendar that is absolutely identical from year to year and which allows the permanent, rational planning of annual activities, from school to work holidays," says Henry, who is also director of the Maryland Space Grant Consortium. "Think about how much time and effort are expended each year in redesigning the calendar of every single organization in the world and it becomes obvious that our calendar would make life much simpler and would have noteworthy benefits."
Among the practical advantages would be the convenience afforded by birthdays and holidays (as well as work holidays) falling on the same day of the week every year. But the economic benefits are even more profound, according to Hanke, an expert in international economics, including monetary policy.
"Our calendar would simplify financial calculations and eliminate what we call the 'rip off' factor," explains Hanke. "Determining how much interest accrues on mortgages, bonds, forward rate agreements, swaps and others, day counts are required. Our current calendar is full of anomalies that have led to the establishment of a wide range of conventions that attempt to simplify interest calculations. Our proposed permanent calendar has a predictable 91-day quarterly pattern of two months of 30 days and a third month of 31 days, which does away with the need for artificial day count conventions."
According to Hanke and Henry, their calendar is an improvement on the dozens of rival reform calendars proffered by individuals and institutions over the last century.
"Attempts at reform have failed in the past because all of the major ones have involved breaking the seven-day cycle of the week, which is not acceptable to many people because it violates the Fourth Commandment about keeping the Sabbath Day," Henry explains. "Our version never breaks that cycle."
Henry posits that his team's version is far more convenient, sensible and easier to use than the current Gregorian calendar, which has been in place for four centuries -- ever since 1582, when Pope Gregory XIII altered a calendar that had been instituted in 46 BC by Julius Caesar.
In an effort to bring Caesar's calendar back in sync with the seasons, the pope's team removed 10 days from the calendar in October, so that Oct. 4 was followed immediately by Oct. 15. This adjustment was necessary in order to deal with the same knotty problem that makes designing an effective and practical new calendar such a challenge: the fact that each Earth year is 365.2422 days long.
Hanke and Henry deal with those extra "pieces" of days by dropping leap years entirely in favor of an extra week added at the end of December every five or six years. This brings the calendar in sync with the seasonal changes as the Earth circles the sun.
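The arithmetic behind that extra week is easy to sketch: a 52-week year of 364 days runs about 1.24 days short of the tropical year, and the shortfall adds up to a full week roughly every five or six years. The toy loop below just tracks that drift; it is not the actual leap-week rule Hanke and Henry specify:

```python
# Drift accounting only; the real Hanke-Henry leap-week rule is specified differently.
TROPICAL_YEAR = 365.2422
drift = 0.0              # how far the 364-day calendar has fallen behind the seasons, in days
leap_week_years = []

for year in range(1, 51):            # simulate 50 calendar years
    drift += TROPICAL_YEAR - 364     # each ordinary year falls ~1.2422 days further behind
    if drift >= 7:                   # a full week behind: insert the extra week
        leap_week_years.append(year)
        drift -= 7                   # the 7-day leap week absorbs the accumulated drift

print(leap_week_years)               # extra weeks land every 5 or 6 years
```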
In addition to advocating the adoption of this new calendar, Hanke and Henry encourage the abolition of world time zones and the adoption of "Universal Time" (formerly known as Greenwich Mean Time) in order to synchronize dates and times worldwide, streamlining international business.
Read more at Science Daily
Over 65 Million Years, North American Mammal Evolution Has Tracked With Climate Change
Climate changes profoundly influenced the rise and fall of six distinct, successive waves of mammal species diversity in North America over the last 65 million years, shows a novel statistical analysis led by Brown University evolutionary biologists. Warming and cooling periods, in two cases confounded by species migrations, marked the transition from one dominant grouping to the next.
History often seems to happen in waves -- fashion and musical tastes turn over every decade and empires give way to new ones over centuries. A similar pattern characterizes the last 65 million years of natural history in North America, where a novel quantitative analysis has identified six distinct, consecutive waves of mammal species diversity or "evolutionary faunas." What force of history determined the destiny of these groupings? The numbers say it was typically climate change.
"Although we've always known in a general way that mammals respond to climatic change over time, there has been controversy as to whether this can be demonstrated in a quantitative fashion," said Christine Janis, professor of evolutionary biology at Brown University. "We show that the rise and fall of these faunas is indeed correlated with climatic change -- the rise or fall of global paleotemperatures -- and also influenced by other more local perturbations such as immigration events."
Specifically, of the six waves of species diversity that Janis and her Spanish collaborators recently describe online in Proceedings of the National Academy of Sciences, four show statistically significant correlations with major changes in temperature. The two transitions that show a weaker but still apparent correlation with the pattern correspond to periods when mammals from other continents happened to invade in large numbers, said Janis, who is the paper's senior and second author.
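A claim of "statistically significant correlation" boils down to a test like the one sketched below; the diversity and temperature series here are invented placeholders meant only to show the mechanics, not the data analyzed in the PNAS paper:

```python
# Sketch of a correlation test; both series are invented placeholders,
# not the diversity or paleotemperature data analyzed in the PNAS paper.
from scipy.stats import pearsonr

diversity   = [120, 150, 180, 170, 140, 110, 95, 90]   # species per time bin (assumed)
temperature = [22, 24, 26, 25, 21, 18, 16, 15]         # temperature proxy, deg C (assumed)

r, p_value = pearsonr(diversity, temperature)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")   # a small p suggests the link is unlikely to be chance
```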
Previous studies of the potential connection between climate change and mammal species evolution have counted total species diversity in the fossil record over similar time periods. But in this analysis, led by postdoctoral scholar Borja Figueirido, the scientists asked whether there were any patterns within the species diversity that might be significant. They were guided by a similar methodology pioneered in a study of "evolutionary faunas" in marine invertebrates by Janis' late husband Jack Sepkoski, who was a paleontologist at the University of Chicago.
What the authors found is six distinct and consecutive groupings of mammal species that shared a common rise, peak, and decline in their numbers. For example, the "Paleocene fauna" had largely given way to the "early-middle Eocene fauna" by about 50 million years ago. Moreover, the authors found that these transfers of dominance correlated with temperature shifts, as reflected in data on past temperatures (determined from oxygen isotopes in the fossilized remains of deep-sea microorganisms).
By the numbers, the research showed correlations between species diversity and temperature change, but qualitatively it also provided a narrative of how the traits of typical species within each wave made sense given the changes in vegetation that followed changes in climate. For example, after a warming episode about 20 million years ago, in the early Miocene epoch, the dominant vegetation transitioned from woodland to a savannah-like grassland. It is no surprise, therefore, that many of the herbivores in the accompanying "Miocene fauna" had high-crowned teeth that allowed them to eat foods from those savannah sources.
The study helps clarify scientists' understanding of evolution amid climate change, but not to the extent that they can make specific predictions about the future, Janis said. Still, it seems all the clearer that climate change has repeatedly had a meaningful effect over millions of years.
Read more at Science Daily
How Beavers Helped to Build America
Beavers, once abundant and widespread in the Northern Hemisphere, helped to forge the ground underneath many Americans’ feet.
The team behind a new study used ground-penetrating radar to detect buried beaver dams.
The study, published in the January 2012 issue of Geology, reveals how beaver activity up to thousands of years ago affected sedimentation and left its lasting mark within North America's ground.
“Ecologists have estimated that, prior to the arrival of Europeans in North America, anywhere from 60 to 400 million beavers inhabited the continent, with a geographic range estimated at 15 million square kilometers,” co-author Ellen Wohl told Discovery News.
“Beavers were present from the arctic tundra to the deserts of northern Mexico, so they would have been in every U.S. state but Hawaii,” added Wohl, a professor in the Warner College of Natural Resources at Colorado State University.
For the study, Wohl and colleagues Natalie Kramer and Dennis Harry used both ground-penetrating radar and near-surface seismic refraction to detect beaver-induced sedimentation.
Both methods put energy -- radar and seismic waves, respectively -- into the ground. The researchers then recorded the times required for the energy’s return, using spatial differences in these travel times to infer contrasts in the subsurface. These, in turn, can reveal buried beaver dams that may have no surface expression.
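Converting a recorded travel time into a depth is the simple part; a minimal sketch with an assumed radar velocity for wet sediment is shown below (real surveys calibrate the velocity on site, and the study's processing is more involved):

```python
# Converting a ground-penetrating-radar two-way travel time into a reflector depth.
# The velocity is an assumed value for wet sediment; real surveys calibrate it on site.
v_radar = 0.06          # meters per nanosecond, rough value for saturated sediment (assumed)
two_way_time = 80.0     # nanoseconds, a hypothetical reflection arrival

depth = v_radar * two_way_time / 2.0   # divide by 2: the pulse travels down and back up
print(f"Reflector (for example, a buried dam crest) at roughly {depth:.1f} m depth")
```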
The scientists focused their work at a site appropriately called Beaver Meadows in Colorado’s Rocky Mountain National Park. As of 1976, no beavers were present in the area, but earlier surveys documented the presence of numerous beaver lodges. The beaver dam sedimentation detected by the scientists dated from 180 to 4,300 years ago, with even older remains possible.
The study determined that beavers contributed 30-50 percent of post-glacial sediments in the target area.
“I think it very likely that our results are not unique to the Beaver Meadows study site, but also apply to other regions with relatively low rates of sediment yield to valley bottoms,” Wohl said.
She explained that beaver dams interrupt the flow of a stream, creating a backwater effect of reduced velocity. Sediment settles in the backwater zone of the beaver pond, and the material remains "in storage" until river erosion mobilizes it and carries it downstream.
The process is beneficial to humans, she continued, because “wet meadows associated with beaver dams have higher habitat and species diversity for plants, insects and other invertebrates, amphibians, reptiles, birds and mammals -- pretty much all forms of life.”
Beaver ponds additionally help to remove carbon and nitrogen from water. When carbon combines with chlorine -- used in many water treatment facilities -- it can result in cancer-causing chemicals, she said, so beavers can help to keep drinking water safe.
Jill Baron, co-director of the John Wesley Powell Center for Earth System Analysis and Synthesis at the U.S. Geological Survey, believes that studies like this latest one, which show the widespread and long-standing effects of beaver activity, are important.
Read more at Discovery News
Feeling the Ripples of Black Hole Collisions
You might call it the universe's ultimate Clash of the Titans.
As if the idea of monster black holes lurking in the hearts of galaxies isn't ominous enough, imagine two of them crashing together like a pair of Sumo wrestlers.
This is an inevitable outcome when galaxies collide. But such a death match has never been directly detected, at least not yet.
Astronomers are eagerly looking forward to the day when gravitational wave detectors are sensitive enough to pick up the fabric of space-time ringing from such a smashup. This would allow theorists to precisely test general relativity under extreme conditions where strong gravity is at work.
These events that happened long ago and far away are a prime target for space-based gravitational wave detectors, like the long-planned Laser Interferometer Space Antenna (LISA) -- a joint mission between NASA and the European Space Agency.
According to theory, a black hole merger will first look like a sinusoidal wave on LISA's detectors. It will wiggle, increase in frequency and then flat-line after the black holes coalesce. The gravitational waves will tell scientists about mass, spin and orbital properties of the merger.
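As a cartoon of that "wiggle, speed up, then flat-line" signature, the sketch below builds a toy chirp whose frequency and amplitude grow until an assumed merger time and then cut off. It is only meant to show the qualitative shape of the signal, not a general-relativistic waveform, and every number in it is made up for illustration.

```python
import numpy as np

# Toy "chirp": an oscillation that grows in frequency and amplitude, then
# flat-lines after an assumed merger time. Purely illustrative.

t = np.linspace(0.0, 10.0, 4000)   # time, arbitrary units
t_merge = 8.0                      # assumed merger time

f0, k = 0.5, 0.35                                   # starting frequency and sweep rate (assumed)
phase = 2 * np.pi * (f0 * t + 0.5 * k * t**2)       # instantaneous frequency is f0 + k*t
amplitude = 1.0 + 0.3 * t                           # amplitude grows as the orbit shrinks

strain = np.where(t < t_merge, amplitude * np.sin(phase), 0.0)  # flat-line after coalescence

print(f"frequency sweeps from {f0:.2f} to {f0 + k * t_merge:.2f} cycles per time unit")
```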
The beauty of gravitational wave astronomy, when it finally comes of age, is that it can look back in time to the birth of galaxies and cover the whole range of galaxy evolution.
Until then, astronomers are hoping that upcoming mammoth all-sky optical surveys, conducted with Pan-STARRS (Panoramic Survey Telescope & Rapid Response System) and the LSST (Large Synoptic Survey Telescope), might serendipitously pick up the flicker of a collision. These will be sensitive to galaxies probably no farther away than 7 billion light-years. More importantly, how would astrophysicists recognize a collision?
Astrophysicists are relying on computer modeling of black hole collisions to predict just what a merger would look like. It would be a brief burst of light from the heart of an interacting galaxy.
When a pair of galaxies collide and merge -- an event that takes about a billion years to unfold -- their black holes should merge too. Spiraling toward the center of the newly forming merged galaxy, the black hole pair will feel each other's gravity at a distance of about 30 light-years, depending on their masses.
Read more at Discovery News
Dec 27, 2011
A New Theory Emerges for Where Some Fish Became Four-Limbed Creatures
A small fish crawling on stumpy limbs from a shrinking desert pond is an icon of can-do spirit, emblematic of a leading theory for the evolutionary transition between fish and amphibians. That theorized image of a drastic adaptation to changing environmental conditions, however, may itself be evolving into a new picture.
University of Oregon scientist Gregory J. Retallack, professor of geological sciences, says that his discoveries at numerous sites in Maryland, New York and Pennsylvania suggest that "such a plucky hypothetical ancestor of ours probably could not have survived the overwhelming odds of perishing in a trek to another shrinking pond."
This scenario dates to the late Devonian, from about 390 million to roughly 360 million years ago. Paleontologist Alfred Romer, who died in 1973 after serving on the faculties of the University of Chicago and Harvard University, saw this time as a period of struggle and escape to ensure survival -- and an important one in the fish-tetrapod transition.
Reporting in the May 2011 issue of the Journal of Geology, Retallack, who also is co-director of paleontological collections at the UO's Museum of Natural and Cultural History, argues for a very different explanation. He examined numerous buried soils in rocks yielding footprints and bones of early transitional fossils between fish and amphibians of Devonian and Carboniferous geological age. What he found raises a major challenge to Romer's theory.
"These transitional fossils were not associated with drying ponds or deserts, but consistently were found with humid woodland soils," he said. "Remains of drying ponds and desert soils also are known and are littered with fossil fish, but none of our distant ancestors. Judging from where their fossils were found, transitional forms between fish and amphibians lived in wooded floodplains. Our distant ancestors were not so much foolhardy, as opportunistic, taking advantage of floodplains and lakes choked with roots and logs for the first time in geological history."
Limbs proved handy for negotiating woody obstacles, and flexible necks allowed for feeding in shallow water, Retallack said. By this new woodland hypothesis, the limbs and necks, which distinguish salamanders from fish, did not arise from reckless adventure in deserts, but rather were nurtured by a newly evolved habitat of humid, wooded floodplains.
The findings, he said, dampen both Romer's desert hypothesis and a newer inter-tidal theory put forth by Grzegorz Niedźwiedzki and colleagues at the University of Warsaw. In 2010, they published their discovery of eight-foot-long, 395-million-year-old tetrapods in ancient lagoonal mud in southeastern Poland, where Retallack also has been studying fossil soils with Polish colleague Marek Narkiewicz.
Read more at Science Daily
Top Scientific Discoveries of 2011
From emotional honeybees to particles flying faster than Einstein's theory of relativity ought to allow, 2011 abounded in findings that posed new questions and expanded frontiers of possibility. Here are Wired Science's favorites.
Faster-Than-Light Neutrinos Detected -- or Not
In September, researchers from the OPERA collaboration in Italy provided fodder for a thousand articles when they announced the measurement of neutrinos flying faster than that killjoy Albert Einstein would permit. Most physicists dismissed the finding, suggesting some error in the measurement or analysis, but that didn't stop millions of people from hoping that they'd witnessed the start of a new scientific revolution.
Extinct Human Ancestors Survive in our Genes
For years, anthropologists suspected that Homo sapiens cross-bred with Neanderthals before our closest ancestor went extinct. That hypothesis proved officially true in 2010, with the first hard genetic evidence of Neanderthal DNA surviving in living humans, and in July further tests found even more evidence of cross-breeding. Moreover, it's not just Neanderthals that live on in us, but long-extinct, recently discovered Neanderthal cousins called Denisovans.
The functional role of formerly non-human gene variants remains to be determined, but their importance to a human sense of self is clearer: Homo sapiens isn't the product of some long, pure lineage, but a bit of a hominid mutt.
Read more at Wired Science
Ancient Texts Part of Earliest Known Documents
A team of scholars has discovered what might be the oldest representation of the Tower of Babel of Biblical fame, they report in a newly published book.
Carved on a black stone, which has already been dubbed the Tower of Babel stele, the inscription dates to 604-562 BCE.
It was found in the collection of Martin Schøyen, a businessman from Norway who owns the largest private manuscript assemblage formed in the 20th century.
Consisting of 13,717 manuscript items spanning more than 5,000 years, the collection includes parts of the Dead Sea Scrolls, ancient Buddhist manuscripts rescued from the Taliban, and even cylcon symbols made by Australia's Aborigines, which can be up to 20,000 years old.
The collection also includes a large number of pictographic and cuneiform tablets -- which are among the earliest known written documents -- as well as seals and royal inscriptions spanning most of the written history of Mesopotamia, an area largely within modern Iraq.
A total of 107 cuneiform texts, dating from the Uruk period about 5,000 years ago to the Persian period about 2,400 years ago, have now been translated by an international group of scholars and published in the book Cuneiform Royal Inscriptions and Related Texts in the Schøyen Collection.
The Tower of Babel stele stands out as one of "the stars in the firmament of the book," wrote Andrew George, a professor of Babylonian at the University of London and editor of the book.
The spectacular stone monument clearly shows the Tower and King Nebuchadnezzar II, who ruled Babylon some 2,500 years ago.
Credited with the destruction of the temple of Solomon in 586 BCE, Nebuchadnezzar II was also responsible for sending the Jews into exile, according to the Bible.
The first Babylonian king to rule Egypt, he is also famous for building the legendary Hanging Gardens, one of the 7 wonders of the ancient world, and many temples all over Babylonia.
Calling himself the "great restorer and builder of holy places," he also reconstructed Etemenanki, a 7-story, almost 300-foot-high temple (also known as a ziggurat) dedicated to the god Marduk.
Biblical scholars believe that this temple may be the Tower of Babel mentioned in the Bible.
In the inscription, the standing figure of Nebuchadnezzar II is portrayed with his royal conical hat, holding a staff in his left hand and a scroll with the rebuilding plans of the Tower (or a foundation nail) in his outstretched right hand.
According to George, the relief yields only the fourth certain representation of Nebuchadnezzar II.
"The others are carved on cliff-faces in Lebanon at Wadi Brisa (which has two reliefs) and at Shir es-Sanam. All these outdoor monuments are in very poor condition," he wrote.
The inscription also depicts the Tower of Babel from a front view, "clearly showing the relative proportions of the 7 steps including the temple on the top," the Schøyen Collection stated.
Read more at Discovery News
Unscratchable Gold Is Harder Than Steel
No doubt, gold is a beautiful and popular precious metal. But it's also soft and tends to scratch easily. Making it more resilient requires mixing it with other metals, but that reduces its quality.
Now a research team from the EPFL in Switzerland, with support from Swiss watchmaker Hublot, has created a very hard, high-quality gold. And recently, they unveiled the shiny result.
“What is radically new is being able to make something that is both extremely hard and 18-karat gold. The challenge was to stick with that boundary,” said Andreas Mortensen, a metallurgy professor at the EPFL in Switzerland who led the work. Metallurgy lecturer Ludger Weber, postdoc Reza Tavangar and materials engineer Senad Hasanovic collaborated with Mortensen to develop the new gold.
Others have made hard gold in the past, but they haven't been able to achieve that hardness while still meeting the 18-karat standard that separates fine gold from less pure alloys. Hublot filed for a patent on the new gold composite, Mortensen said. He called Hublot an adventurous company when it comes to designing with new materials.
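For context, a karat rating is simply a gold mass fraction: 24-karat is pure gold, and 18-karat means 18/24, or 75 percent gold by mass. A quick check with illustrative numbers (not the EPFL recipe):

```python
# Karat rating is a mass fraction: karat = 24 * (mass of gold) / (total mass).
# The 75/25 split below is illustrative, not the actual composite formulation.

def karat(mass_gold_g, mass_other_g):
    return 24.0 * mass_gold_g / (mass_gold_g + mass_other_g)

# e.g. 75 g of gold infiltrated into 25 g of ceramic scaffold:
print(round(karat(75.0, 25.0), 1))   # 18.0 -> meets the 18-karat (75% gold) standard
```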
To make the new gold, the EPFL team used boron carbide, a ceramic that, along with diamond, ranks among the hardest materials in the world. The ceramic has numerous applications, including as a component in bulletproof vests.
First the ceramic was heated in an oven to more than 3,600 degrees Fahrenheit, producing a three-dimensional network almost like a scaffold, with just the right amount of pores. That network was then infiltrated with liquid gold, meaning the scientists pushed gold into the pores. Finally, the combination was solidified to form the composite material.
Mortensen said the new gold looks and feels distinctive. It’s harder to the touch than other gold and has a darker hue. The material is so hard that no coating is needed to make it unscratchable. Although that’s an advantage to watch-wearers looking for durability, there is a trade-off: the composite is slightly more fragile than soft, pure gold.
Since gold doesn’t oxidize, there could be other applications beyond jewelry, although the material’s cost and heft might limit its uses. Luxury goods conglomerate LVMH owns Hublot, so the unscratchable gold may end up being incorporated into other high-end products. Currently, Hublot watches made with the new gold are being readied for a large jewelry show in Basel next year. The material still needs to undergo extensive testing, Mortensen added.
Read more at Discovery News
Dec 26, 2011
HIV Study Named '2011 Breakthrough of the Year' by Science
The journal Science has chosen the HPTN 052 clinical trial, an international HIV prevention trial sponsored by the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health, as the 2011 Breakthrough of the Year. The study found that if HIV-infected heterosexual individuals begin taking antiretroviral medicines when their immune systems are relatively healthy as opposed to delaying therapy until the disease has advanced, they are 96 percent less likely to transmit the virus to their uninfected partners.
Findings from the trial, first announced in May, were published in the New England Journal of Medicine in August. The complete top 10 list of 2011 scientific breakthroughs appears in the Dec. 23, 2011, issue of Science.
"The HPTN 052 study convincingly demonstrated that antiretroviral medications can not only treat but also prevent the transmission of HIV infection among heterosexual individuals," said NIAID Director Anthony S. Fauci, M.D. "We are pleased that Science recognized the extraordinary public health significance of these study results. This recognition also is a credit to the hard work and dedication of the HPTN 052 researchers and the more than 3,000 study participants who selflessly gave their time and energy to make such a significant contribution to the fight against HIV/AIDS."
Led by study chair Myron Cohen, M.D., director of the Institute for Global Health and Infectious Diseases at the University of North Carolina at Chapel Hill, HPTN 052 began in 2005 and enrolled 1,763 heterosexual couples in Botswana, Brazil, India, Kenya, Malawi, South Africa, Thailand, the United States and Zimbabwe. Each couple included one partner with HIV infection. The investigators randomly assigned each couple to one of two study groups. In the first group, the HIV-infected partner immediately began taking a combination of three antiretroviral drugs. The participants infected with HIV were extensively counseled on the need to consistently take the medications as directed. Outstanding compliance resulted in the nearly complete suppression of HIV in the blood (viral load) of the treated study participants in group one.
In the second group (the deferred group), the HIV-infected partners began antiretroviral therapy when their CD4+ T-cell levels -- a key measure of immune system health -- fell below 250 cells per cubic millimeter or an AIDS-related event occurred. The HIV-infected participants also were counseled on the need to strictly adhere to the treatment regimen.
The study was slated to end in 2015, but an interim data review in May by an independent data and safety monitoring board (DSMB) found that of the total 28 cases of HIV infection among the previously uninfected partners, only one case occurred among those couples where the HIV-infected partner began immediate antiretroviral therapy. The DSMB, therefore, called for immediate public release of the study's findings.
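Those interim counts are enough to see roughly where the 96 percent figure comes from: with the 28 infections split 1 versus 27 between the immediate and deferred arms of a roughly 1:1 randomization, the relative risk in the immediate arm is about 1/27, or roughly a 96 percent reduction. The published estimate comes from a full time-to-event analysis, so the back-of-the-envelope version below is only an approximation.

```python
# Back-of-the-envelope check on the "96 percent" figure, using the counts in
# the article: 28 infections among partners, 1 in the immediate-treatment arm
# and 27 in the deferred arm. The equal arm sizes are an assumption (the trial
# randomized 1,763 couples about 1:1); the published result is a hazard ratio
# from a time-to-event analysis, so this is only an approximation.

immediate_cases, deferred_cases = 1, 27
immediate_couples = deferred_couples = 1763 // 2   # ~881 couples per arm (assumed split)

risk_immediate = immediate_cases / immediate_couples
risk_deferred = deferred_cases / deferred_couples
reduction = 1.0 - risk_immediate / risk_deferred

print(f"Approximate risk reduction: {reduction:.0%}")   # ~96%
```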
The magnitude of protection against HIV infection demonstrated in HPTN 052 has made the trial's treatment-as-prevention strategy a key component of public health policies recently discussed by federal officials and others, who say that ending the HIV/AIDS pandemic is now feasible with additional research and implementation efforts.
"On its own, treatment as prevention is not going to solve the global HIV/AIDS problem," said Dr. Fauci. "Yet when used in combination with other HIV prevention methods -- such as knowing one's HIV status through routine testing, proper and consistent condom use, behavioral modification, needle and syringe exchange programs for injection drug users, voluntary, medically supervised adult male circumcision, preventing mother-to-child transmission, and, under some circumstances, antiretroviral use among HIV-negative individuals -- we now have a remarkable collection of public health tools that can make a significant impact on the HIV/AIDS pandemic."
"Scale-up of these proven prevention methods combined with continued research toward a preventive HIV vaccine and female-controlled HIV prevention tools places us on a path to achieving something previously unimaginable: an AIDS-free generation," Dr. Fauci added.
Read more at Science Daily
570-Million-Year-Old Fossils Hint at Origins of Animal Kingdom
New research suggests that fossils thought to represent some of the earliest multicellular life are instead single-celled, amoeba-like organisms. But even if they’re not quite full-blown animals, they may hint at how animals came into being.
The 570-million-year-old Doushantuo formation, first unearthed in South China in 1998, contains tiny clusters of cells that look similar to animal embryos. During the embryo stage of life, cells become organized into tissues and organs, one of the hallmarks of all animal species.
Using a technique called x-ray tomographic microscopy, researchers captured an unprecedented level of detail in the Doushantuo fossils, imaging internal and external features down to a ten-thousandth of an inch. They could even see individual nuclei within the cells, some of which were caught in the act of dividing.
Interestingly, these nuclei had distinctive shapes, quite unlike the cell nuclei of animal embryos, which lose their contours when they divide. Furthermore, while the cells were rapidly dividing, they weren’t differentiating into specialized tissues. The cell clusters also sprouted peanut-shaped protrusions filled with spore-like cells.
“All of that is completely incompatible with an animal-embryo hypothesis,” said paleontologist Philip Donoghue from the University of Bristol in the U.K., a co-author on the new work, which appears Dec. 22 in Science.
When it was originally proposed, the animal-embryo hypothesis "was met with almost palpable relief," wrote Cambridge University paleobiologist Nicholas Butterfield in an accompanying commentary. The Doushantuo formation sat near the base of the Ediacaran, the period that came just before the Cambrian -- a time known colloquially as the Cambrian explosion, during which many new phyla of animals appear suddenly in the fossil record.
According to the theory of gradualistic evolution, which holds that life evolves slowly and steadily rather than in punctuated bursts, "the pre-Phanerozoic oceans must have swarmed with living animals -- despite their conspicuous absence from the early fossil record," wrote Butterfield. If the Doushantuo fossils really were animals, "it was indeed the fossil record that had let us down, not the textbooks, and certainly not the exciting new insights from molecular clocks."
But the new findings “look set to revoke the status of these most celebrated” fossils, Butterfield wrote.
Rather than pointing to animals, the pattern is more consistent with a class of single-celled organisms known as Mesomycetozoea. These creatures, which exist in an amoeba-like adult stage, reproduce by dividing into hundreds of new cells that remain in a tightly packed mass until being released as tiny spore bodies.
While the finding is a blow to those who consider the Doushantuo fossils early animal embryos, the fossils may still shed light on how cells first became organized into multicellular groups, said Donoghue. The Cambrian explosion could have been primed by the sophisticated behavior seen in these fossils, with permanently multicellular arrangements following naturally from temporary communality.
This is, to be sure, speculative, but compelling enough for Butterfield to herald “the arrival of this revolutionary new clade.”
Read more at Wired Science
Camera-carrying insects set to aid search and rescue teams
Minute cameras and microphones mounted on the backs of beetles will help emergency services find victims trapped or buried underneath rubble.
Researchers aim to power a tiny “backpack” of sensors by “scavenging” energy from the insect’s own wing movements to help create a lasting power source.
The bugs can then be released into collapsed buildings or other areas seen as too dangerous for human rescue teams.
Professor Khalil Najafi, who is developing the new technology, said the insect’s own kinetic energy would act as a battery for a variety of communication equipment.
He said: “Through energy scavenging, we could potentially power cameras, microphones and other sensors and communication equipment that an insect could carry aboard a tiny backpack.
“We could send these ‘bugged’ bugs into dangerous or enclosed environments where we would not want humans to go.”
The “hybrid insect” technology is being designed by a team of electrical and computer engineers at the University of Michigan in the United States.
The project is funded by the US Defence Advanced Research Projects Agency (DARPA) and is entitled the Hybrid Insect Micro Electromechanical Systems program.
Researchers have already developed a device able to generate power from the wing motion of a Green June beetle during tethered flight.
By mounting a miniature generator on each wing of an insect, scientists expect to be able to achieve enough power to operate onboard cameras or microphones – allowing the bug to “gather vital information from hazardous environments”.
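To get a feel for why wing motion is an attractive power source, here is a rough order-of-magnitude estimate. Every number in it is an assumption chosen for illustration, not a measurement from the Michigan devices.

```python
import math

# Order-of-magnitude estimate of power scavenged from wing motion.
# All values are assumptions for illustration, not figures from the
# University of Michigan generators.

mass_kg = 2e-5          # ~20 mg proof mass carried on the wing (assumed)
freq_hz = 80.0          # beetle wingbeat frequency (assumed)
amplitude_m = 2e-3      # oscillation amplitude of the generator mass (assumed)
efficiency = 0.1        # fraction of mechanical energy converted per cycle (assumed)

peak_velocity = 2 * math.pi * freq_hz * amplitude_m      # m/s
energy_per_cycle = 0.5 * mass_kg * peak_velocity**2      # joules of kinetic energy per beat
power_w = efficiency * energy_per_cycle * freq_hz        # watts harvested

print(f"~{power_w * 1e6:.0f} microwatts")   # roughly the scale needed by tiny low-power sensors
```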
A recent report by the team states that their “final prototype will be mounted on a live beetle, and tested during its untethered flight” next year.
Read more at The Telegraph
Electricity Sparks New Life Into Indonesia's Corals
Cyanide fishing and rising water temperatures had decimated corals off Bali until a diver inspired by a German scientist's pioneering work on organic architecture helped develop a project now replicated worldwide.
Based on ""Biorock" technology, it is implemented in 20 countries, mainly in Southeast Asia, the Caribbean, Indian Ocean and Pacific.
In the turquoise waters of Pemuteran off the north coast of Bali where the project was launched in 2000, a metal frame known as "the crab" is covered with huge corals in shimmering colors where hundreds of fish have made their homes.
"It's amazing, isn't it ?" Rani Morrow-Wuigk says proudly. The 60-year-old German-born Australian first dived in Pemuteran bay back in 1992, to see its beautiful reefs.
But at the end of the nineties rising water temperatures had led to the near-disappearance of the reef, already badly affected by cyanide and dynamite fishing in the area.
"I was devastated. Basically, all the corals were dead. It was gravel and sand," Rani recalled.
But when German architect and marine scientist Wolf Hilbertz told her about a discovery he had made in the 1970s, the diver's ears pricked up.
Hilbertz had sought to "grow" construction materials in the sea, and had done so by submerging a metallic structure and connecting it to an electric current with a weak and thus harmless voltage.
The ensuing electrolysis had provoked a build-up of limestone, in a kind of spontaneous building work.
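A rough ceiling on how fast such electrolysis can deposit mineral follows from Faraday's law. The sketch below assumes, purely for illustration, that every electron delivered at the cathode produces one hydroxide ion and that each hydroxide ion ends up precipitating one unit of calcium carbonate; real deployments convert far less efficiently, so this is an upper bound, not a Biorock specification.

```python
# Rough upper bound on mineral accretion from low-voltage electrolysis,
# via Faraday's law. Assumptions (mine, not from the Biorock projects):
# each electron at the cathode yields one OH-, and every OH- helps
# precipitate one CaCO3 unit. Real efficiencies are well below this.

FARADAY = 96485.0        # coulombs per mole of electrons
M_CACO3 = 100.09         # g/mol, molar mass of calcium carbonate

def max_caco3_grams(current_amps, hours):
    charge = current_amps * hours * 3600.0     # coulombs delivered
    moles_electrons = charge / FARADAY         # = moles of OH- produced (assumed)
    return moles_electrons * M_CACO3           # ceiling on grams of CaCO3 deposited

print(round(max_caco3_grams(current_amps=1.0, hours=24), 1))  # ~89.6 g per day at 1 A, at best
```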
When he tested out his invention in Louisiana in the United States, Hilbertz saw that after a few months oysters progressively covered the whole structure, and colonized the collected limestone.
More experiments were carried out and the same phenomenon was confirmed for corals.
"Corals grow 2-6 times faster. We are able to grow back reefs in a few years," Thomas J. Goreau, a Jamaican marine biologist and biogeochemist, told AFP.
Goreau began working with Hilbertz in the mid-1980s to develop Biorock technology, and he has continued their work since Hilbertz's death four years ago.
When Rani saw the discovery, it gave her an idea for how she might save "her" bay.
She decided to expand the project to 22 structures using her own money with the help of Taman Sari, the holiday resort in front of the coral restoration project.
Today there are around sixty of these "cages" in Pemuteran bay, across a surface of two hectares, and the reef has not only been saved from near-death, it is flourishing better than ever before.
"Now we've got a better coral garden than we used to have," said Rani.
Biorock not only revives the corals but it makes them more resistant, in particular against bleaching and global warming.
"Biorock is the only method known that protects corals from dying from high temperatures. We get from 16 to 50 times higher survival of corals from severe bleaching," Goreau said.
The evidence of this has been on show in Pemuteran, said Rani.
"We had coral bleaching happening in the last two years. The water temperature was 34 degrees (93 Fahrenheit), instead of 30. Only 10 percent of the corals were affected and two percent died. Whereas, in 1998, they basically all died".
Read more at Discovery News
Based on ""Biorock" technology, it is implemented in 20 countries, mainly in Southeast Asia, the Caribbean, Indian Ocean and Pacific.
In the turquoise waters of Pemuteran off the north coast of Bali where the project was launched in 2000, a metal frame known as "the crab" is covered with huge corals in shimmering colors where hundreds of fish have made their homes.
"It's amazing, isn't it ?" Rani Morrow-Wuigk says proudly. The 60-year-old German-born Australian first dived in Pemuteran bay back in 1992, to see its beautiful reefs.
But at the end of the nineties rising water temperatures had led to the near-disappearance of the reef, already badly affected by cyanide and dynamite fishing in the area.
"I was devastated. Basically, all the corals were dead. It was gravel and sand," Rani recalled.
But when German architect and marine scientist Wolf Hilbertz told her about a discovery he had made in the 1970s, the diver's ears pricked up.
Hilbertz had sought to "grow" construction materials in the sea, and had done so by submerging a metallic structure and connecting it to an electric current with a weak and thus harmless voltage.
The ensuing electrolysis had provoked a build-up of limestone, in a kind of spontaneous building work.
When he tested out his invention in Louisiana in the United States, Hilbertz saw that after a few months oysters progressively covered the whole structure, and colonized the collected limestone.
More experiments were carried out and the same phenomenon was confirmed for corals.
"Corals grow 2-6 times faster. We are able to grow back reefs in a few years," Thomas J. Goreau, a Jamaican marine biologist and biogeochemist, told AFP.
Goreau began working with Hilbertz in the mid-1980s to develop Biorock technology, and he has continued their work since Hilbertz's death four years ago.
When Rani saw the discovery, it gave her an idea for how she might save "her" bay.
She decided to expand the project to 22 structures using her own money with the help of Taman Sari, the holiday resort in front of the coral restoration project.
Today there are around sixty of these "cages" in Pemuteran bay, across a surface of two hectares, and the reef has not only been saved from near-death, it is flourishing better than ever before.
"Now we've got a better coral garden than we used to have," said Rani.
Biorock not only revives the corals but it makes them more resistant, in particular against bleaching and global warming.
"Biorock is the only method known that protects corals from dying from high temperatures. We get from 16 to 50 times higher survival of corals from severe bleaching," Goreau said.
The evidence of this has been on show in Pemuteran, said Rani.
"We had coral bleaching happening in the last two years. The water temperature was 34 degrees (93 Fahrenheit), instead of 30. Only 10 percent of the corals were affected and two percent died. Whereas, in 1998, they basically all died".
Read more at Discovery News
Dec 25, 2011
Merry Christmas
I wanted to take the time to wish all of you a Merry Christmas.
Here's something for you to watch.
Geseënde Kersfees en 'n gelukkige nuwe jaar, Feliz Navidad y Feliz Año Nuevo, Vesele Vanoce, Boas Festas e Feliz Ano Novo, Vesela Koleda i chestita nova godina!, Bon Nadal i un Bon Any Nou!, Sing Dan Fae Lok. Gung Hai Fat Choi, Shen Dan Kuai Le Xin Nian Yu Kuai, Shen tan jie kuai le. Hsin Nien Kuaile, Sretan Bozic, Stastne a vesele Vanoce a stastny novy rok!, Glaedelig Jul og godt nytår
Vrolijk Kerstfeest en een Gelukkig Nieuw Jaar, Prettige kerstdagen en een gelukkig nieuw jaar, Merry Christmas and a Happy New Year, (Inupik) Jutdlime pivdluarit ukiortame pivdluaritlo!, Felican Kristnaskon kaj Bonan Novjaron!, Rõõmusaid jõulupühi ja head uut aastat!, Gledhilig jol og eydnurikt nyggjar!, Maligayang Pasko, Hyvää joulua ja onnellista uutta vuotta!
Zalig Kerstfeest en Gelukkig nieuw jaar, Joyeux Noel et Bonne Année!, Nollaig chridheil agus Bliadhna mhath ùr!, Bo Nadal, Frohe Weihnachten und ein glückliches Neues Jahr!, Hronia polla kai eytyhismenos o kainourios hronos, Barka da Kirsimati kuma Barka da Sabuwar Shekara!, Mele Kalikimaka ame Hauoli Makahiki Hou!, Kellemes karacsonyi uennepeket es boldog ujevet!
Gledhileg jol og farsaelt komandi ar!, Selamat Hari Natal dan Selamat Tahun Baru!, Idah Saidan Wa Sanah Jadidah, Nollaig Shona duit, Buon Natale e Felice Anno Nuovo!, Meri Kurisumasu soshite Akemashite Omedeto!, God Jul Og Godt Nytt Aar, Wesole Boze Narodzenie, Wesolych Swiat i Szczesliwego Nowego Roku, Feliz Natal e um Prospero Ano Novo, Craciun fericit si un an nou fericit
S nastupaiushchim Novym godom i s Rozhdestvom Khristovym!, Hristos se rodi, Feliz Navidad y Próspero Año Nuevo, God Jul Och Ett Gott Nytt Ar, Noeliniz kutlu olsun ve yeni yiliniz kutlu olsun!, S Rozhdestvom Khristovym... Merry Christmas
Remember to always be Sceptical!
Dec 21, 2011
Strange Heads Evolved Before Unusual Bodies
Evolution can be a heads or tails question, with scientists debating which parts of animals diversified first. It turns out that heads win, according to new research, with species evolving in their heads before other bodily changes become evident.
The findings suggest that food availability has been a primary driver of animal evolution, starting with the head and then on down.
"Species evolved to exploit new food sources before shifting into new habitats or evolving new ways to get around," said Lauren Cole Sallan, lead author of the study published in the latest Proceedings of the Royal Society B.
"Strange heads show up first -- crushing jaws, animals with big teeth, with long jaws -- but they're all pretty much attached to the same body," added Sallan, who is a graduate student in the Department of Organismal Biology and Anatomy at the University of Chicago.
She and co-author Matt Friedman of the University of Oxford set out to test models that attempt to explain how adaptive radiations occur within the animal kingdom. For example, after a major disruption, such as an extinction event, surviving species diversify into a myriad of forms.
One popular theory, the "early burst model," holds that there's a flurry of divergence followed by a long period of relative stability. Another argues that habitat-driven changes in body type precede diversification of head types.
To help resolve which is right, Sallan and Friedman analyzed two different adaptive radiations in the fossil record. The first was the explosion of ray-finned fishes after what's known as the Hangenberg extinction, an event 360 million years ago that decimated ocean life on Earth.
The second group was the acanthomorphs, a group of fish that diversified wildly around the time of the end-Cretaceous extinction that killed off non-avian dinosaurs.
For both groups, the researchers quantified differences in features like body depth, fin position, and jaw shape between species. They separated head features from body features in their study, to find out which changes actually occurred first and seemed to dominate.
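As a minimal sketch of that kind of comparison (toy data and a simple variance-based disparity metric, not the authors' dataset or exact method), one can score head traits and body traits separately in each time bin and see which set spreads out first:

```python
import numpy as np

# Minimal sketch of a head-versus-body comparison: for each time bin, measure
# morphological "disparity" as the total variance across trait measurements,
# separately for head traits and body traits. The data here are random toy
# values, not the study's measurements.

rng = np.random.default_rng(0)

def disparity(trait_matrix):
    """Total variance across traits -- one simple disparity metric."""
    return float(np.var(trait_matrix, axis=0, ddof=1).sum())

# Toy setup: 20 species per time bin, 3 head traits and 3 body traits each.
for i, label in enumerate(["bin 1 (early)", "bin 2 (later)"]):
    spread = 1.0 + i                                       # pretend variation grows through time
    head = rng.normal(scale=spread + 1.0, size=(20, 3))    # heads made to diversify sooner
    body = rng.normal(scale=spread, size=(20, 3))
    print(label, "head disparity:", round(disparity(head), 2),
          "body disparity:", round(disparity(body), 2))
```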
Both sets of analyses found that diversification in head features came before diversification in body types. Earlier research on mammals, lungfish, birds and other animals suggests that these too followed the head-first pattern of evolution.
"An animal's first job is to obtain enough energy to live, however a main tenet of natural selection is that there isn't enough of any one food in a habitat to support everyone past a certain population size," explained Sallan. "So there are two choices to get ahead: eat other foods in the same place, or move away and hope to find a habitat with the same food. In many cases, changing to a new food is probably simpler."
More evidence is needed to see if humans also fit into the head-first model of evolution.
Read more at Discovery News
Star-Churning 'Blob' Lurks at Universe's Edge
What lurks at the very edge of the observable Universe? Because it is so distant, and because the light from any primordial galaxies took so long to reach us, observing an object that far away is also to look back in time.
Now astronomers using the Japanese Subaru and Keck II telescopes -- observatories situated atop Mauna Kea, Hawaii -- have managed to look deep into the cosmic past revealing one of the most distant objects ever discovered. In fact, only two other objects have been spotted at a greater distance.
As it's so far away, this particular "blob" existed when the Universe was only a fraction of its current age -- just 750 million years old, or about 5 percent of its current age. (Our Universe is roughly 13.75 billion years old, by the way.)
The blob in question appears to be a very young galaxy -- called GN-108036 -- with a surprisingly high star birth rate. It's birthing around 100 baby stars per year, 30 times the star production rate of our Milky Way.
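As a quick sanity check of the figures quoted above (illustrative arithmetic only; the implied Milky Way rate is simply 100 divided by 30, not a number taken from the article):

universe_age_gyr = 13.75   # current age of the Universe quoted above
galaxy_epoch_gyr = 0.75    # age of the Universe when GN-108036's light set out

print(f"fraction of current age: {galaxy_epoch_gyr / universe_age_gyr:.1%}")   # ~5.5%

gn108036_sfr = 100         # stars born per year, as quoted
ratio_to_milky_way = 30    # "30 times the star production rate of our Milky Way"
print(f"implied Milky Way rate: {gn108036_sfr / ratio_to_milky_way:.1f} stars per year")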
"We're really surprised to know that GN-108036 is quite luminous in ultraviolet and harbors a powerful star formation," said astronomer Yoshiaki Ono of the University of Tokyo, Japan. "We had never seen such a vigorously star-forming galaxy at a comparable distance until the discovery of GN-108036."
The team's work is set to be published in The Astrophysical Journal.
Previous surveys had missed the galaxy, and its high star formation rate came as a surprise: before the discovery of GN-108036, it wasn't thought that the earliest galaxies in the history of the Universe were so active.
"Perhaps those surveys were just too small to find galaxies like GN-108036," said Mark Dickinson of the National Optical Astronomy Observatory in Tucson, Ariz. "It may be a special, rare object that we just happened to catch during an extreme burst of star formation."
This young galaxy is interesting as it emerged soon after the "dark ages" of our Universe. This cosmic epoch would have been especially depressing as our entire Universe was choked with thick clouds of hydrogen. No light could traverse space without getting absorbed -- it was the epitome of a cosmos-wide pea soup fog.
Only when the earliest galaxies began to form, with throbbing black holes in their cores, was this opaque cosmic soup "burned away" by the ionizing radiation generated by quasars.
Read more at Discovery News
Earth On Extreme Tilt Away From Sun
The winter solstice is a vampire's delight. No other night is longer and no day shorter. But sorry blood-suckers, it also means the subsequent days get longer.
This year's winter solstice will occur at 12:30 a.m. Dec. 22. At that moment, the Sun will be directly overhead at 23.5 degrees south latitude, and the Earth's northern hemisphere will be tilted as far away from the Sun as it gets.
When the celestial fireball finally makes its appearance in the northern hemisphere it keeps a low profile. On the winter solstice the sun follows the lowest path of the year in the sky of the northern hemisphere.
The Washington Post provided some details about the solstice sun over the United States' capital:
The sun is above the horizon for approximately 9 hours, 26 minutes
Sunrise occurs at 7:24 a.m. and sunset at 4:50 p.m.
The sun angle at solar noon (12:07 p.m.) reaches its minimum height of 27.7° above the horizon (compared to 74.6° above the horizon on June 21)
The sun rises at its southeasternmost point and sets at its southwesternmost point along the horizon (120° and 240° from due north, respectively)
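The noon altitudes in that list follow from a simple rule of thumb: the sun's height at solar noon is roughly 90 degrees minus the observer's latitude plus the solar declination. A minimal Python sketch, assuming Washington, D.C. sits at about 38.9 degrees north (the latitude is my assumption, not a figure from the article):

LATITUDE_DC = 38.9    # degrees north (assumed, not from the article)
TILT = 23.44          # Earth's axial tilt in degrees, i.e. the solstice declination

def noon_altitude(latitude_deg, declination_deg):
    """Approximate solar altitude at local solar noon, in degrees."""
    return 90.0 - latitude_deg + declination_deg

print(f"winter solstice: {noon_altitude(LATITUDE_DC, -TILT):.1f} degrees")  # ~27.7
print(f"summer solstice: {noon_altitude(LATITUDE_DC, +TILT):.1f} degrees")  # ~74.5, close to the 74.6 quoted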
For modern city-dwellers, the solstice is little more than a curiosity, but in ancient times it was of vital importance.
In the northern hemisphere, knowing when the dark days of winter would start to lengthen could give hope to people trying to make the harvest of the previous year stave off starvation for a few more months.
The day was so important that some of humanity's earliest monumental structures were aligned with the rising or setting of the Sun on that date. Stonehenge in England, for example, is lined up with the winter solstice.
Festivals both ancient and modern marked the winter solstice.
The Romans celebrated Saturnalia around the time of the solstice with revelry and a social switcheroo in which masters served the slaves.
Read more at Discovery News
Planets May Have Survived Star's Death
Scientists have found a system of planets that appears to have survived being engulfed by their dying parent star.
The discovery raises questions about the ultimate fate of our solar system when the sun runs out of hydrogen gas in about 5 billion years and violently transforms into an expanding red giant star.
Scientists believe all the planets from Earth inward will be destroyed when the sun expands, but new research suggests that if planets are large enough, they may outlast their parent star's death, even if they are engulfed.
Astronomers using NASA's Kepler telescope have found two planets orbiting very close to a star that is past the red-giant phase. The pair circle less than 1 percent of the distance that Earth orbits the sun.
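For a sense of how tight such orbits are, Kepler's third law (P² = a³/M, with the period in years, the distance in astronomical units and the stellar mass in solar masses) gives an orbital period of only hours. The Python sketch below assumes a star of roughly half a solar mass, a round number chosen for illustration rather than a value from the article:

import math

a_au = 0.01      # orbital distance: about 1 percent of the Earth-Sun distance
m_star = 0.5     # assumed stellar mass in solar masses (illustrative only)

period_years = math.sqrt(a_au**3 / m_star)
print(f"orbital period: {period_years * 365.25 * 24:.1f} hours")  # on the order of half a day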
From that position, the planets would have been deeply engulfed by the star when it expanded in its senior years. So how did they survive?
Lead researcher Stephane Charpinet, with the University of Toulouse in France, suggests the planets might not have started out where they are now and that they were most likely a lot bigger than Earth, with more material to weather the searing heat from their expanding star.
"The system was probably more like the many stellar systems that have been discovered so far, with giant planets orbiting relatively close to their parent star," Charpinet wrote in an email to Discovery News.
Another possibility is that the two planets found circling the evolved star aren't original members, points out astronomer Eliza Kempton of the University of California, Santa Cruz.
The planets could have formed anew from material that was left behind when the star blew off its outer layers after its red-giant phase, Kempton said.
"The expansion of the sun will surely kill off all life on Earth," she said. "However, the existence of planets orbiting an evolved star points to an interesting possibility that" all close-in planets are not entirely destroyed during stellar evolution."
Read more at Discovery News
Dec 20, 2011
Medieval Knights May Have Had PTSD
In movies, medieval knights are portrayed as courageous and loyal heroes who will fight to the death without fear or regret.
In reality, the lives of knights were filled with a litany of stresses much like those that modern soldiers deal with.
They were often sleep-deprived, exhausted and malnourished. They slept outside on hard ground, fully exposed to whatever weather befell them. And their lives were full of horror and carnage as they regularly killed other men and watched their friends die.
Faced with the trauma inherent in a life of combat, according to a new look at period texts, medieval knights sometimes struggled with despair, fear, powerlessness and delusions. Some may have even suffered from post-traumatic stress or related disorders, argues a Danish researcher, just as their modern-day counterparts do.
The research strives to add a dose of humanity to our understanding of knights, who are often considered cold and heartless killers.
"As a medievalist, it's a bit irritating to hear people say that the Middle Ages were just populated by brutal and mindless thugs who just wallowed in warfare," said Thomas Heebøll-Holm, a medieval historian at the University of Copenhagen. "I'm going for a nuanced picture of humans that lived in the past. They were people just like you and me, as far as we can tell."
Ever since the war in Vietnam, there has been a growing recognition that the terrors of battle, torture, terrorism and other horrific experiences can result in a type of severe psychological distress now known as PTSD. To be diagnosed with the disorder, people must suffer from uncontrollable and intense stress for at least a month after a horrifying event. Symptoms can include flashbacks, nightmares, depression and hyperactivity.
When soldiers go to war in modern times, Heebøll-Holm said, psychologists now recognize that the stresses they encounter can lower their psychological resistance until they finally succumb to anxiety disorders. Since medieval knights faced as many and possibly more hardships than modern soldiers do, he wondered if he might be able to find references to signs of trauma in warriors who fought during the Middle Ages.
In addition to other documents, Heebøll-Holm focused on three texts written by a 14th-century French knight named Geoffroi de Charny, who was also a diplomat and trusted adviser to King John II.
No one knows for sure why Charny wrote the documents, whose translated titles include "The Book of Chivalry" and "Questions Concerning the Joust, Tournaments and War." The most popular theory is that they were part of an effort to create an ideological program for the royal French chivalric order that would rival its English equivalent.
Though many of these texts have been thoroughly analyzed already, Heebøll-Holm was the first to read between the lines through the lens of modern military psychology. And while it's hard to ever completely understand a culture that was so very different from (and far more religious than) our own, Heebøll-Holm found a number of examples that suggest at least the potential for trauma in medieval knights.
Among his writings, for example, Charny wrote:
"In this profession one has to endure heat, hunger and hard work, to sleep little and often to keep watch. And to be exhausted and to sleep uncomfortably on the ground only to be abruptly awakened. And you will be powerless to change the situation. You will often be afraid when you see your enemies coming towards you with lowered lances to run you through and with drawn swords to cut you down. Bolts and arrows come at you and you do not know how best to protect yourself. You see people killing each other, fleeing, dying and being taken prisoner and you see the bodies of your dead friends lying before you. But your horse is not dead, and by its vigorous speed you can escape in dishonour. But if you stay, you will win eternal honour. Is he not a great martyr, who puts himself to such work?"
Read more at Discovery News
Mammals' Tusked Ancestor Roamed Tasmania
Scientists say rare fossils found in Tasmania's south-east prove that an ancient group of prehistoric animals did exist throughout Australia.
The dicynodont was an early ancestor of modern-day mammals and lived about 250 million years ago. Roughly the size of a cow, the plant-eating animal had two tusks and a horny beak.
Queensland Museum palaeontologist Andrew Rozefelds said they lived on every continent, including Antarctica. But until now, the only Australian specimen was found in Queensland almost 30 years ago. He said it is surprising more remains have not been found, given the animal's size.
"There must be more material out there to be found," he said. "Obviously we'd love to find more, because at the moment, [of] this entire group of animals called dicynodonts, there's only about four bones known from Australia. We've got better fossil records from Antarctica for these animals than we have for Australia."
He describes the dicynodont as a bizarre-looking creature.
"They're a strange-looking beast," he said. "They had tusks at the front of their skull, which makes you think maybe they were a carnivore but in fact they were a plant eater. They had slightly splayed legs, so their posture was quite different to say some of the modern mammals you see and they're very, very distantly related to modern mammals."
Bob and Penny Tyson discovered the bone fragments a few years ago when they were walking along the beach on the Tasman Peninsula. Mr Tyson had gone for a walk along the rocky foreshore when he found some rare amphibian skulls. He took a few photos of the fossils and then went back to the place where they were staying to get his wife.
She started looking closer to the waterline and found a fossilized tusk, right on the low tide mark, sitting in seaweed.
"It was sitting on top of the rock surface, so all the surrounding rock had been worn away," said Mrs Tyson. "It was just sitting there waiting to be knocked off."
University of Tasmania sedimentologist Dr Stuart Ball, who dated the fossils, said the remains are from the early Triassic period. They predate the dinosaurs by at least 30 million years. He said it is likely floods washed the animals' remains into billabongs, which is why only fragments have turned up.
Rozefelds said the latest find proves that not only did dicynodonts exist in Australia, but they may have survived here longer than anywhere else in the world. He said scientists from places like China have been contacting him about the Tasmanian find.
Read more at Discovery News
Kents Cavern: inside the cave of stone-age secrets
The entrance to the cave was narrow and no more than 5ft high. Only one person at a time could enter, head stooped, a flickering light held in one hand, pickaxe in the other. They were a group of 12 explorers on that summer’s day in 1825, including local coastguards, a man determined to discover an ancient Roman temple, and a young Roman Catholic priest with an interest in fossils.
Father John MacEnery had recently arrived from Limerick as private chaplain to the Cary family at nearby Torre Abbey. He was the last to enter this strange world of darkness – of vast chambers, narrow fissures and magical stalactites that formed crystalline chandeliers and pillars, glinting in the lantern light.
Breaking off from the rest of the party who were vainly trying to break through the calcified floor, Father MacEnery investigated areas of the cave where the ground had already been disturbed. Beneath the stalagmites, in reddish brown earth, the priest saw something gleam. His candle reflected off the enamel of fossil teeth. He wrote later: “As I laid my hand on these relics of distant races… I shrank back involuntarily… I am not ashamed to own that, in the presence of these remains, I felt more awe than joy.”
The priest continued his search in silence, keeping “my good fortune a secret, fearing that amidst the press and avidity of the party to possess some fossil memorial of the day, my discoveries would be damaged.”
If he had known what he had stumbled upon, he might have held his finds even closer. For the teeth and other remains found in the cave are rewriting human prehistory.
It is now known that this cave, called Kents Cavern, outside Torquay in Devon, had been home to prehistoric hominids and animals extinct for half a million years.
In November, Professor Chris Stringer of the Natural History Museum announced that a human jaw found in the cave in 1927 is 7,000 years older than was thought; at 42,000 years old, it is the oldest known Homo sapiens fossil in northwest Europe.
This is yet more evidence that modern humans must have lived side-by-side with Neanderthals, an extinct cousin species, for tens of thousands of years.
But back in the 1820s, science knew nothing of humanity’s origins – or of what Britain was like millennia ago. Between 1825 and 1829, Father MacEnery made more astonishing discoveries. He unearthed the bones of extinct and exotic creatures, among them elephants, rhinos, sabre-tooth tigers, cave lions, bears and hyenas, from beneath the stalagmite cave floor.
For the early 19th century, this was momentous. It was just four years since the professor of the new science of geology at Oxford, William Buckland, had discovered similar fauna in a cave in Yorkshire. Science – and society as a whole – was only just coming to grips with the idea that animals which now exist only in tropical countries could once have tramped over the Dales. Now it seemed they had also lived on the English Riviera.
But Father MacEnery found something even more astonishing. As he dug, he discovered, on a bed of dirty red colour, “the singular phenomenon of flint instruments intermingled with fossil bones!” They were the unmistakeable tools of Stone Age humans. “This,” he wrote – his intellectual shock palpable – “electrified me”.
The 19th century was a frenzy of the new. Rapid developments in transport, industry and technology were paralleled by radical new philosophies and a revolution in the understanding of the age and nature of the Earth. The belief that our planet was just 6,000 years old, according to calculations based on Biblical texts, was fatally undermined by the geologists who were revealing the great antiquity of our world.
Buckland’s early writings reflect contemporary beliefs that fossil bones of strange extinct animals were, literally, antediluvian – they were the remains of creatures that had either failed to make it on to Noah’s Ark, or had vanished before the Flood.
Indeed, Buckland’s report on his Yorkshire finds is entitled Reliquiae Diluvianae – “Relics of the Flood” – and his ideas are a fascinating combination of cutting-edge science and Biblical belief.
As the fossil and geological evidence accumulated, Buckland concluded that, over time, there had been several floods. The last would have been a tremendous universal inundation – the one detailed in Genesis – thought to have taken place no more than 5,000 years ago. As humans were present only during this last catastrophe, Buckland stated unequivocally that humans and antediluvian creatures discovered in Britain and Europe had not co-existed. These animals had been wiped out before the arrival of man.
That was why Father MacEnery was so enthused by his discovery. This was clear evidence that contradicted Buckland, a man of great influence, who had also visited the Torquay cave. Father MacEnery, bursting with his momentous discovery and his realisation that it implied the co-existence of man and extinct beasts, “immediately communicated my impressions to Dr Buckland with all the earnestness of sincere conviction”.
Alas for the Irishman, Buckland would have none of it. He insisted that the flints had been introduced by later human inhabitants of the caves – by men digging to bury their dead, or digging through the stalagmite floor to make pits for ovens. Indeed, Buckland had found a flint blade there before MacEnery had, but does not appear to have told him.
Despite this, MacEnery had planned to publish his finds with the title Cavern Researches; he even printed a prospectus for it and had plates made by the famous natural history illustrator Georg Scharf.
His manuscript consists of disjointed narrative and essays, and copious notes – some made at the time, but many over the next decade – in which he tries, and fails, to reconcile the truth of what he had observed with Buckland’s views. At times he berates himself for falling into “the error of supposing human remains to be contemporaneous”, and then decides he is right and Buckland wrong, although “it is painful to dissent from so high an authority”.
Read more at The Telegraph
The Turin Shroud is fake. Get over it
First things first. The "authenticity" or otherwise of the Shroud of Turin does not have any implications for whether or not Christ was real, or whether He was divine. If it was a medieval forgery, it doesn't mean the stories aren't true; if it really was made in the first century AD, it doesn't mean they were. Until we find a reliable method of linking the shroud with Christ Himself – a nametag stitched in it by His mum, perhaps – the existence of a 2,000-year-old cloth does not imply that a particular person who died around the time it was made was the Son of God.
I mention this because today, we report that a group of scientists – working, unexpectedly, for the Italian sustainable energy agency ENEA – claim that the marks on the cloth could only have been made by ultraviolet radiation. They say that "When one talks about a flash of light being able to colour a piece of linen in the same way as the shroud, discussion inevitably touches on things like miracles and resurrection," and that they "hope our results can open up a philosophical and theological debate". They do, however, say "as scientists, we were concerned only with verifiable scientific processes."
The implication, of course, is that a divine light shone when Jesus's body was resurrected, and that this emitted a burst of high-frequency photons which burned an image on the cloth around him. This possibility has been discounted in the past by Raymond Rogers, a member of the Shroud of Turin Research Project (Sturp) which examined the fabric in the 1970s, who said: "If any form of radiation degraded the cellulose of the linen fibers to produce the image color, it would have had to penetrate the entire diameter of a fiber in order to color its back surface", but that the centres of the fibres are unmarked. There are many hypotheses about how the images could have been made, and they have each come in and out of favour. Without wanting to be too cocky, when the ENEA scientists say that radiation is the "only" way the image could have been made, I imagine that many of their fellow researchers will say it's the only way that they managed it.
However it was made, if – as many have claimed – the Shroud was made in the 13th century, then it isn't a relic of Christ, for obvious reasons. Radiocarbon dating has repeatedly placed the Shroud as medieval in origin – specifically, between 1260AD and 1390AD. There have been suggestions that the radiocarbon process got it wrong – but this is unlikely, according to Professor Christopher Ramsey of the Oxford Radiocarbon Accelerator Unit, one of three labs which carried out the research. "We're pretty confident in the radiocarbon dates," he told me. "There are various hypotheses as to why the dates might not be correct, but none of them stack up.
"One is that the samples were contaminated. But that doesn't work, because to make an 2,000-year-old object appear just 800 years old, about half the material would have to be contaminant, and that's if it was all modern. If it was older, it would have to be even more. Various tests done at the time of the original measurements also suggested that the material was fairly pure. It's also been hypothesised that the patch we tested was a modern repair, but most of us agree that's implausible, because the weave is very unusual and matches the rest of the shroud perfectly. Then there are more complicated notions, like contamination with carbon monoxide, but tests have shown that carbon monoxide doesn't react with the fabric under the circumstances that you might expect."
Regarding the ENEA findings, he is similarly sceptical. "Just because you can create similar results using an ultraviolet laser, that doesn't mean it's the only way it could have been made in the first place," he says. "There are several possibilities, and it could just be a chance effect due to a number of different phenomena. But in archaeological science, being able to reproduce something, doesn't imply that that's the technique used; it may simply show that you've got a new technique you want to try out." He adds that the confidence in the medieval result is such that, were it not suggested to be a relic, there would be no more discussion over its age.
So there remain questions about how the Shroud of Turin was made, but there seems to be little reason to think that it's anywhere near old enough to have been Christ's. (Interestingly, John Calvin in 1543 already thought it was a fake: he pointed out that according to the Gospel of St John, two cloths were used to shroud Jesus, one on His body and one on His face; he also suggests that it is strange that none of those recording his death in the Gospels mentioned a miracle "so remarkable as the likeness of the body of our Lord remaining on its wrapping sheet".) It's a fascinating and mysterious object, but it says nothing about the questions of whether Christ was a historical figure, whether He was the Son of God, or whether He rose from the dead.
More importantly, I think, the rush to suggest that it does is a bit undignified. The intelligent faithful don't need trinkets like this to justify their belief, surely? We are constantly told that science cannot disprove God; that it is a non-scientific question, that the two fields of science and religion are non-overlapping. But then, when something which goes the other way occurs – something which might suggest that one or other given Bible story is true – suddenly all that goes out of the window. The Turin Shroud is (almost certainly) fake. It makes no difference to anything. Get over it.
From The Telegraph