Lake Vostok in Antarctica isn’t like most of the world’s other lakes. You can’t see it from the surface, because it’s been buried under two miles of ice for the past 15 million years. And unlike other bodies of water, which recycle their contents fairly quickly, Lake Vostok’s water is replaced only about once every 13,000 years, from a source that remains unknown. It’s under too much pressure from all the ice above it to freeze, but it can’t easily escape either, and it’s uncontaminated by human activity or chemicals.
That’s why scientists think it may be a treasure trove of extremophiles — that is, organisms which have evolved to live in the harshest, most inhospitable conditions.
Studies of ice from just above the liquid lake have revealed dozens of types of bacteria and fungi, as well as genetic fragments that might suggest the presence of fish and crustaceans, according to a 2013 Nature news article. A Russian scientist also reported that the ice contained a previously unknown type of bacteria, whose DNA was different from that of other known species. But other scientists disputed those finds, saying that the samples had been tainted by antifreeze chemicals used to keep the drill working.
In late January, researchers managed to drill down to the lake again, and this time, they say they’ve devised a method to obtain an uncontaminated 1-liter sample of the lake’s water, according to RT.com. That may help resolve the mystery of whether life exists in Lake Vostok.
Since the lake was discovered in 1996, scientists have used radar to explore it and the ice above it, which took thousands of years to accumulate.
There are about 200 other lakes under Antarctica, a continent whose ice holds about 70 percent of the world’s fresh water.
From Discovery News
Feb 7, 2015
Will Gravitational Waves Ever Be Found?
It’s official: data from the Planck satellite has revealed no signs of gravitational waves embedded in the cosmic microwave background, the primordial ‘echo’ of the Big Bang that occurred nearly 14 billion years ago.
This landmark result contradicts the now-infamous BICEP2 announcement of the discovery of gravitational waves last March — but this is not the end of gravitational waves, nor the theories behind inflation. In fact, according to cosmologists, we can expect the search to intensify over the coming months and years.
After the details behind the Planck observations were revealed this week, Discovery News was able to speak with cosmologist Kendrick Smith, of the Perimeter Institute for Theoretical Physics in Ontario, Canada, to find out what impact these Planck data will have on our quest to understand what happened when the universe was born.
To recap, in March 2014, researchers of the BICEP2 telescope made a very public announcement that they had discovered the fingerprint of gravitational waves in the most ancient radiation observed in the distant universe — the cosmic microwave background, or simply, the CMB. This radiation is the remnants of the Big Bang and therefore originates from the genesis of our universe.
By studying the CMB, cosmologists are looking into a cosmic time capsule of sorts — the features etched into this radiation were created moments after the Big Bang, so their structure can reveal the conditions (and therefore the physics) of our universe back in the beginning of time.
How the universe began is “one of the biggest open questions cosmologists are trying to answer,” said Smith. “There are several different theories on what happened shortly after the Big Bang … the problem isn’t that we don’t have a successful theory, it’s that we have too many successful theories! We’re trying to narrow down the possibilities.” Although Smith isn’t directly involved in this week’s joint BICEP2/Planck publication, he is a member of the international Planck Collaboration.
One theory is that the universe underwent a rapid expansion immediately after the Big Bang and one possible way to detect whether that inflationary period occurred is to look for gravitational waves etched into the CMB.
The BICEP2 telescope, based near the South Pole, is designed specifically to seek out a type of polarization in the ancient CMB radiation called “B-mode polarization.” Should B-modes be detected, it would be a sign that primordial gravitational waves are present, providing strong evidence for certain inflationary universe theories.
“By searching for these gravitational waves in the cosmic microwave background, we can narrow down the physics of the Big Bang,” said Smith.
And that’s what the BICEP2 team thought they’d found in their observations of a small patch of sky. There appeared to be a very strong B-mode signal that couldn’t (at the time) be explained by any other phenomena.
“In the original BICEP2 result, they saw B-mode polarization in the CMB at a level that roughly corresponded to the largest gravitational wave signal that would be consistent with our observations so far,” Smith added.
But although the signal looked like evidence for gravitational waves, the BICEP2 researchers had underestimated the impact of the magnetized dust that fills our galaxy.
When observing any radiation from beyond the Milky Way, we have to stare through a thin fog of interstellar dust and it just so happens that this dust can also generate B-mode polarized radiation. To compensate for this interference, the European space-based Planck telescope, which is sensitive to frequencies generated by the CMB and galactic dust, was tasked to map the magnetic fingerprint of galactic dust. Planck’s mission was completed in 2013, but its huge database is still being processed and interpreted.
Last March, the BICEP2 team only had access to a preliminary Planck dataset and concluded that, in the BICEP2 field of view, the impact of galactic dust was minimal and the B-mode signal they’d detected originated in the CMB.
“They thought it had to be cosmological gravitational waves rather than dust based on a number of statistical analyses, which were mainly driven by very approximate measurements of dust emission in our galaxy,” said Smith.
Although no one in the cosmological community disputed that BICEP2 had detected B-mode polarization, critics argued that too little was known about the galactic dust, and that the signal was just as likely dusty interference as gravitational waves, urging caution before concluding it had to be the latter. And this week’s paper, a collaborative effort between Planck and BICEP2 scientists, shows that there are no detectable traces of gravitational waves in the BICEP2 data.
“What this recent joint Planck/BICEP paper did is take Planck measurements, that are at a different frequency to BICEP2, combine them with the BICEP measurements, so that now with multiple observing frequencies, one can make a clear statistical separation between the cosmological gravitational wave signal and the dust signal,” said Smith.
In other words, the B-mode polarization that BICEP2 originally detected was caused by dust and not gravitational waves.
“That’s interesting to comment on as often data analysis is very subtle; a paper may have multiple interpretations or loopholes. But this paper is not one of them,” he said. “The conclusion is very clear: when you combine the observing frequencies of BICEP and Planck, all of the B-mode (polarization) in the sky can be accounted for by dust and there’s no evidence that any of it is gravitational waves.”
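To make the multifrequency idea concrete, here is a minimal, purely illustrative sketch in Python. The two bands roughly correspond to BICEP2 (150 GHz) and Planck (353 GHz), but the dust scaling law and the amplitudes are invented for the example, and the real joint analysis is a full statistical likelihood over many sky modes rather than a two-number algebra problem.

```python
import numpy as np

# Toy two-frequency separation of a frequency-independent CMB component
# from a dust component that rises with frequency (illustrative only).

freqs = np.array([150.0, 353.0])          # GHz: BICEP2-like and Planck-like bands
dust_scaling = (freqs / 353.0) ** 1.6     # assumed dust frequency dependence

# Hypothetical observed B-mode amplitudes (arbitrary units) at the two bands.
observed = np.array([3.4, 13.5])

# Solve observed[i] = a_cmb + a_dust * dust_scaling[i] for the two amplitudes.
design = np.column_stack([np.ones_like(freqs), dust_scaling])
a_cmb, a_dust = np.linalg.solve(design, observed)

print(f"CMB (gravitational-wave) amplitude: {a_cmb:.2f}")  # ~0 in this toy case
print(f"Dust amplitude at 353 GHz:          {a_dust:.2f}")
# If the CMB amplitude is consistent with zero once uncertainties are folded in,
# the observed B-modes can be attributed entirely to dust -- the joint
# Planck/BICEP2 conclusion described above.
```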
So what now? Although the March announcement may have been premature, the search for gravitational waves continues. Cosmologists are now armed with a comprehensive map of the obscuring dust in our galaxy. Smith hopes that, over the next five years, extensive multifrequency observations may start to root out the elusive B-mode polarization generated by gravitational waves. But they may not.
“There are working models for how the Big Bang might’ve worked that produce large levels of gravitational waves and there are working models for how the Big Bang might’ve worked that produced gravitational waves at such tiny levels that they’ll never be measured. Either one is a possibility, so it’s hard to speculate.”
As for the massive interest that surrounded the gravitational wave drama that unfolded in a very public arena, Smith isn’t surprised that this particular cosmological study has garnered such public excitement.
Read more at Discovery News
Feb 6, 2015
Barking Norwegian Lemmings Tell Predators to Back Off
If you try to go near a Norwegian lemming, you'll hear about it. The small rodent uses its distinctive bark and unusual coloring as a stern warning to those who would think to mess with it, a Swedish researcher has determined.
Making its living in Finland, Norway, Sweden, and the Kola peninsula in Russia, the Norwegian lemming (Lemmus lemmus) is reddish brown on its back, yellow on its sides, white on its chin and cheeks, and black on its head, neck, and shoulders.
It's rare for small rodents to make aggressive defenses of themselves, but the Norwegian lemming bucks that trend: It will scream loudly, lunge, and even bite aerial predators such as the long-tailed skua.
Malte Andersson, of the University of Göteborg in Sweden, wanted to find out if the creature's audacious coloring and tendency to shriek under pressure served a purpose.
In field observations, Andersson found that Norwegian lemmings were overwhelmingly more likely than brown lemmings to issue a "be-gone!" warning call when a potential predator, such as a human, approached them.
Their coloring, meanwhile, was built to stand out. Observers in field tests found L. lemmus easier to pick out in its natural surroundings than the other most common rodent in the area, the grey-sided vole. Taken together, the coloring and behavior led Andersson to conclude that the lemming is a case of aposematism -- the use of coloration and other tactics to warn potential predators that some rodents are more trouble to mess with than they're worth.
Aposematism is more commonly seen in creatures such as frogs, snakes, and insects than in herbivorous mammals, making the little lemming stand out in a crowd yet again.
Read more at Discovery News
Cannibalism Suspected in Seal Deaths Off Scotland
Grey seals feasting on their own kind, and not ship propellers, is the likely reason behind the mysterious deaths of 86 seals in Scottish waters over the last five years, according to new research.
Between 2009 and 2014, 86 seal carcasses in Scotland each had the same wound: a "corkscrew"-patterned cut that worked in a spiral around the seal's body.
Prior to a new report by Scotland's Sea Mammal Research Unit (SMRU), the deaths were strongly suspected to have been caused by the seals getting caught in the propellers of ships.
But late last year the SMRU researchers, in the midst of trying to further nail down the propeller theory in an Isle of May study, got a surprise when they were able to witness and record an adult male grey seal cannibalizing a young seal.
The seal was recorded attacking and killing a weaned grey seal pup, leaving behind the same corkscrew injury pattern. After being tagged and tracked, the adult male was later implicated in at least eight killings of pups in the same manner.
Six additional carcasses were also documented in the study area, although their deaths were not directly attributed to that specific adult male.
Of the newer carcasses observed during the latter part of 2014, the SMRU wrote in its report: "The wound patterns seen on the grey seal pups at the Isle of May clearly resembled those that have been recorded as corkscrew wounds on previous grey and harbour seal cases in Scotland."
The team's analysis of the 14 pups' injuries found 12 of them to be consistent with the corkscrew patterns seen in the prior cases.
"This clearly suggests that a proportion of the cases previously identified as the result of interactions with propellers were in fact due to grey seal predation," the team wrote.
Grey seals were also implicated last November in the mutilations of more than 1,000 harbor porpoises along the Dutch coastline.
Read more at Discovery News
Oldest Stars in Universe Younger Than Thought
The very first stars in the universe need to reset their birthday clocks: these ancient objects burst into existence more than 100 million years later than scientists previously thought, according to new research.
A few hundred million years after the Big Bang, the light from some of the very first stars and galaxies lit up the universe and ended a period known as the "dark ages." New measurements by the European Space Agency's Planck satellite — which studied the cosmic microwave background, or the light left over from the Big Bang — indicate that this period of light began about 100 million years later than Planck's previous estimate. The new results are based on an additional year of observations recorded by the satellite.
"While these 100 million years may seem negligible compared to the universe's age of almost 14 billion years, they make a significant difference when it comes to the formation of the first stars," Marco Bersanelli of the University of Milan and a member of the Planck Collaboration, said in a statement.
The end of the dark ages
Some of the first stars and galaxies to be born in the early universe helped end what is often referred to as the universe's "dark ages." The stars not only lit up the skies with their light, but also cleared away a fog of hydrogen atoms that had come to fill the cosmos.
This haze of gas that filled the universe blocked most wavelengths of light, which is why this time is referred to as "dark."
The powerful photons created by stars and galaxies ripped the atoms apart, or ionized them, which is why this era is known as reionization. Galaxies called quasars burst into existence around this time; at the center of a quasar is a supermassive black hole that ejects powerful jets of light and matter into the universe.
Observations by NASA's Hubble Space Telescope show that the universe was entirely clear of this fog by about 900 million years after the Big Bang, the ESA statement said. But when did it begin?
Previous observations by Planck put the start of reionization at about 450 million years after the Big Bang. The new results, based on a larger data set taken between 2009 and 2013, push that estimate roughly 100 million years later, to about 550 million years after the Big Bang.
Because stars and galaxies drove the start of reionization, scientists with the Planck collaboration say the new measurement also indicates about when those stars and galaxies started forming.
"These things are basically two sides of the same coin," François Bouchet, of the Paris Institute of Astrophysics and a member of the Planck Collaboration, told Space.com. Bouchet said that Planck can identify the average starting time of star and galaxy formation, but not when specific stars were born. Rare stars have been identified that may have formed before the end of the dark ages.
For now, scientists with Planck think of reionization as an "instantaneous" event — "instantaneous" on a cosmological time scale, that is — but of course, Bouchet explained, it must actually be an event that took place over a period of time.
"As with any physical process, will take some time," Bouchet said. "Later on, we will want to know what is the duration of that period. We want to be able to say when 20 percent of the universe was reionized, and then 30 percent and 50 percent and 100 percent. We want to have a full history of reionization. That's the ultimate goal."
Oldest light in the universe
The Planck telescope can study reionization by looking at the cosmic microwave background: a static haze of light that fills the entire universe. This light was created by the Big Bang and has radiated through the universe ever since. In that time, it has picked up information about events that have taken place over cosmic history.
When the universe began to emerge from the dark ages, hydrogen atoms were ripped apart into protons and electrons. The electrons interacted with the CMB and left an imprint in the light's "polarization" — or the orientation of the light waves, according to the ESA statement. Planck scientists have picked up on that subtle change.
Read more at Discovery News
This Amazing Little Critter Just Might Be Immortal
There’s one creature, though, that doesn’t need to get hung up on retirement or deadlines, and accordingly it could well be immortal. Such is the bizarre existence of the hydra, a half-inch tube of jelly that inhabits fresh water all over the world, where it lives a long, long time under the right conditions—and if you don’t assault it.
Yet even then, it has remarkable powers of regeneration. Cut it in half and you’ll eventually end up with two hydra. Mix a bunch of them up in a test tube, break them all apart into single cells, and still they’ll re-form into a ball and split off as individuals. Yeah, I know, that doesn’t really seem possible. But stick with me here.
This is a supremely simple animal, belonging to the same group as jellyfish, the cnidarians. “I sometimes describe hydras sort of like a little free-living piece of intestine,” said hydrobiologist Rob Steele of the University of California, Irvine. At one end is a sticky disk, which the hydra uses to anchor itself, and at the other is a mouth and tentacles packed with stinging cells, which fire toxic harpoons into prey. Holding the quarry in place, the hydra then ratchets its mouth over the victim—typically a tiny crustacean called a water flea—until it’s entirely enveloped.
Back in the ‘90s, a fella named Daniel Martinez gathered up 60 of these creatures and isolated them in their own tiny tanks. Hydra reproduce asexually, budding off little clones, so Martinez had to pick those young out and toss them. After four years of this, not only were the hydra still alive, but they looked good as new. Four years may not sound like a long time, but the rule in nature is that the smaller you are, the shorter you live. Thus can small insects last only a matter of weeks, while blue whales keep ticking for nearly a century. Something the size of a hydra living for four years is just ridiculous.
So Martinez published his findings, declaring the hydra potentially immortal. Unsurprisingly, this rustled a few people’s jimmies. “So he published that result,” Steele said, “and then the naysayers came along and said, ‘Well maybe the average lifespan of hydra is six years, so you didn’t do the experiment long enough.’ So he went back and restarted the experiment, and I think he’s now at about year eight,” making his hydras the oldest known specimens. “He’s going to do it for 10 years, and then he says never again. If 10 years isn’t enough for them, that’s their problem.”
The hydra’s secret seems to be that it sheds its entire body and starts from scratch every few weeks. It’s essentially just a sack of stem cells, which are kind of like blank slates. These eventually specialize into, say, a cell that makes up the tentacle. But the hydra will shed this cell after just a few days, keeping it from aging and wearing out like our own cells do. “It doesn’t have any cells that hang around long enough to get old and decrepit,” Steele said, “and therefore the individual doesn’t get old and decrepit.”
So there you have it: the secret to potential immortality. (Another minuscule creature that’s worth mentioning, the adorable eight-legged water bear, has its own method of extreme longevity: It’s not immortal, but it can dry out to just a fraction of its normal water content and live for up to 10 years as an unconscious husk, only to reanimate once it again hits water.) Why the hydra would evolve such a unique way of life is still a mystery. And if you were hoping we could apply its trick to ourselves before you yourself keel over, I have some bad news. There really couldn’t be an animal more different from us, so scientists don’t hold out much hope for indulging your egomania by way of the hydra. Sorry, but it’s for your own good.
The Show Must Go On
The Hydra of Greek mythology was a vicious serpent with a bunch of heads, which would grow into two if you cut one off. The hydra of the real world can do the same, only way, way more impressively. “You can poke a hole in them and they seal it up,” Steele said. “You can cut them in half and they regenerate the missing halves. You cut them into 20 pieces and you get 20 hydra.” Its ultimate trick, though, doesn’t make a lick of sense in a world governed by, you know, certain rules and stuff.
“Dissociate” is a verb, and a very good one at that, meaning to break something down into smaller bits, such as individual cells. You do not want to be dissociated. But dissociate a bunch of hydra into a soup of cells, and incredibly they emerge again as individuals.
Hydra reproduce asexually, budding off clones. It’s like Multiplicity, only with less pizza.
But over the course of a few days, the cells that make up the outside of the hydra somehow make their way to the surface of that re-formed ball of cells, while the cells that should line the gut make their way into the center. Then the inside of the sphere forms a cavity, which will become the gut, and you end up with a hollow, two-layered sphere, just as an individual hydra has an outer layer and an inner one with fluid between the two.
Read more at Wired Science
Feb 5, 2015
Tomato, Tomahto? Chimps Can Learn 'Dialects'
Chimpanzees specifically refer to favorite foods with distinct grunts, but the calls differ between populations, according to a new study that found chimps could then learn the "foreign" grunts, making communication easier when two groups merge into one.
The findings, reported in the latest issue of the journal Current Biology, show that, like human words, chimp food calls are not fixed in their structure and that, when exposed to a new group, chimps can change their calls to sound more like their group mates.
A big topic of conversation for chimps is food.
"Chimpanzees produce acoustically distinct grunts in response to foods of low, medium or high quality, and in captivity they give distinct grunts for certain foods, including bananas, bread and mango," co-author Katie Slocombe of the University of York told Discovery News. "It has been shown that listeners are able to distinguish between these different grunts and extract information that helps guide their own search for food appropriately."
For the study, she and her colleagues studied what happened after two separate groups of adult chimps moved in together at the Edinburgh Zoo. Before the merger, the groups produced their particular food grunts. After the merger, the "newcomer" group significantly changed their calls to match those of the long-term residents, lead author Stuart Watson, also from the University of York, said.
The researchers offer a number of possible explanations. One is that the chimps might learn the "foreign" grunts in order to form, maintain or improve their social relationships with members of the different group.
"If one was to make an analogy with human language, it could be the process of dialect change when people move between groups of speakers of the same language and subtly modify their accents to sound like the group into which they are moving, potentially as a way to aid social integration," co-author Simon Townsend of the University of Zurich told Discovery News.
He added that the chimps might also modify their grunts to ensure that the other chimps understand them correctly.
Again referencing human language, Townsend said, "The word for 'apple' in German is only subtly different, 'apfel,' but correctly pronouncing it can avoid comprehension mistakes in receivers."
The researchers, however, do not use the word "language" when referring to chimp calls. While studies continue to find similarities between human language and the communication systems of other animals, the scientists say that language is much more complex than just referring to objects in the environment with vocalizations.
Townsend, for example, mentioned that human language is "syntactic," meaning that we can combine meaningful, referential words together into larger, higher-order structures, i.e. sentences, paragraphs and so on. We can also talk about objects, individuals and events in the absence of them since the words are fully symbolic.
"We can even create a new word if we feel like it, something that chimpanzees are seemingly unable to do," Watson said.
Nevertheless, other studies on animals find that many, from birds to dolphins, have very complex communication systems that involve skills we don't possess, such as the ability to share information via natural sonar. In the case of chimps, they are better than humans at naturally producing calls that carry over long distances, demonstrating that environmental conditions help to shape a particular animal's way of communicating. This holds true for humans as well, since a recent study found that tonal languages arose in humid climates where conditions facilitate the larynx's ability to produce different tones.
Read more at Discovery News
Buffalo-Sized Rodent Used Its Teeth Like Tusks
The largest rodent ever documented, an ancient creature closely related to guinea pigs but around the size of a buffalo, used its front teeth much like an elephant uses its tusks, new research concludes.
Scientists from the University of York and The Hull York Medical School used computer modeling to virtually reconstruct the skull of Josephoartigasia monesi to estimate how powerful the creature's bite could have been.
The team found that Josephoartigasia, which lived in South America about 3 million years ago, had a very powerful bite force, comparable to that of a tiger. But the researchers knew from earlier studies they'd performed that the incisors would actually have been able to withstand nearly triple that force.
This told the scientists that the ancient rodent's teeth were multipurpose, and a bit tusk-like in their functionality.
"We concluded that Josephoartigasia must have used its incisors for activities other than biting, such as digging in the ground for food, or defending itself from predators. This is very similar to how a modern day elephant uses its tusks," said Dr, Philip Cox, of the University of York.
The team's research has recently been published in the Journal of Anatomy.
From Discovery News
'Gospel of the Lots of Mary' ID'd in Ancient Text
A 1,500-year-old book that contains a previously unknown gospel has been deciphered. The ancient manuscript may have been used to provide guidance or encouragement to people seeking help for their problems, according to a researcher who has studied the text.
Written in Coptic, an Egyptian language, the opening reads (in translation):
"The Gospel of the lots of Mary, the mother of the Lord Jesus Christ, she to whom Gabriel the Archangel brought the good news. He who will go forward with his whole heart will obtain what he seeks. Only do not be of two minds."
Anne Marie Luijendijk, a professor of religion at Princeton University, discovered that this newfound gospel is like no other. "When I began deciphering the manuscript and encountered the word 'gospel' in the opening line, I expected to read a narrative about the life and death of Jesus as the canonical gospels present, or a collection of sayings similar to the Gospel of Thomas (a non-canonical text)," she wrote in her book "Forbidden Oracles? The Gospel of the Lots of Mary" (Mohr Siebeck, 2014).
What she found instead was a series of 37 oracles, written vaguely, and with only a few that mention Jesus.
The text would have been used for divination, Luijendijk said. A person seeking an answer to a question could have sought out the owner of this book, asked a question, and gone through a process that would randomly select one of the 37 oracles to help find a solution to the person's problem. The owner of the book could have acted as a diviner, helping to interpret the written oracles, she said.
Alternatively, the text could have been owned by someone who, when confronted with a question, simply opened an oracle at random to seek an answer.
The 37 oracles are all written vaguely; for instance, oracle seven says, "You know, o human, that you did your utmost again. You did not gain anything but loss, dispute, and war. But if you are patient a little, the matter will prosper through the God of Abraham, Isaac, and Jacob."
Another example is oracle 34, which reads, "Go forward immediately. This is a thing from God. You know that, behold, for many days you are suffering greatly. But it is of no concern to you, because you have come to the haven of victory."
Throughout the book "the text refers to hardships, suffering and violence, and occasionally one finds a threat. On the whole, however, a positive outlet prevails," Luijendijk wrote in her book.
Another interesting example, that illustrates the ancient book's positive outlook, is oracle 24, which reads, "Stop being of two minds, o human, whether this thing will happen or not. Yes, it will happen! Be brave and do not be of two minds. Because it will remain with you a long time and you will receive joy and happiness."
A 'gospel' like no other
In the ancient world, a special type of book, sometimes called a "lot book," was used to try to predict a person's future. Luijendijk says that this is the only lot book found so far that calls itself a "gospel" — a word that literally means "good news."
"The fact that this book is called that way is very significant," Luijendijk told Live Science in an interview. "To me, it also really indicated that it had something to do how people would consult it and also about being as good news," she said. "Nobody who wants to know the future wants to hear bad news in a sense."
Although people today associate the word "gospel" with a text that talks about the life of Jesus, people in ancient times may have had a different perspective.
"The fact that this is not a gospel in the traditional sense gives ample reason to inquire about the reception and use of the term 'gospel' in Late Antiquity," Luijendijk wrote.
Where did it come from?
The text is now owned by Harvard University's Sackler Museum. It was given to Harvard in 1984 by Beatrice Kelekian, who donated it in memory of her husband, Charles Dikran Kelekian. Charles' father, Dikran Kelekian (1868-1951), was "an influential trader of Coptic antiquaries, deemed the 'dean of antiquities' among New York art dealers," Luijendijk wrote in her book.
It is not known where the Kelekians got the gospel. Luijendijk searched the Kelekian family archive but found no information about where the text came from or when it was acquired.
It's possible that, in ancient times, the book was used by a diviner at the Shrine of Saint Colluthus in Egypt, a "Christian site of pilgrimage and healing," Luijendijk wrote. At this shrine, archaeologists have found texts with written questions, indicating that the site was used for various forms of divination.
"Among the services offered to visitors of the shrine were dream incubation, ritual bathing, and both book and ticket divination," Luijendijk wrote.
Miniature text
One interesting feature of the book is its small size. The pages measure less than 3 inches in height and 2.7 inches in width. The codex is "only as large as my palm," Luijendijk wrote.
Read more at Discovery News
Termite Mound Oases Help Combat Climate Change
Most people consider termites to be unwanted and useless pests, but a new study finds that these insects help to combat the effects of climate change, to the point that they can even stop the spread of deserts.
Termite mounds often serve as oases of plant life in otherwise dry and desolate regions, according to the study, which is published in the latest issue of the journal Science. This may provide no comfort to homeowners with wood-chomping termites under their houses, but termites turn out to be critical to ecosystems, since the mound oases help to sustain both big and small wildlife.
“I like to think of termites as linchpins of the ecosystem in more than one way,” co-author Robert Pringle, an assistant professor of ecology and evolutionary biology at Princeton University, said in a press release. “They increase the productivity of the system, but they also make it more stable, more resilient.”
The visuals of this are dramatic, showing how lush, green grass often grows atop termite mounds, while the surrounding land remains dry and desolate. Depending on the number of mounds, this can look like green polka dots covering a landscape when it is photographed from above.
Pringle, lead author Juan Bonachela and their colleagues focused their research on fungus-growing termites of the genus Odontotermes, but they say that their findings could apply to all types of termites that increase resource availability on and/or around their nests.
For the study, the researchers adjusted some mathematical models to account for the effects of termite mounds on desert and semi-arid landscapes in Africa, South America and Asia. Bonachela and his team now suspect that termites are far more beneficial than anyone had ever previously believed.
They explained that termite mounds store nutrients and moisture and, via internal tunnels, allow water to penetrate the soil more readily. As a result, vegetation flourishes on and near termite mounds in ecosystems that are otherwise highly vulnerable to “desertification,” meaning the environment’s collapse into desert.
The overall amount of moisture does not change, but the tiny termites still make a big difference.
“The rain is the same everywhere, but because termites allow water to penetrate the soil better, the plants grow on or near the mounds as if there were more rain,” co-author Corina Tarnita explained.
Read more at Discovery News
Biggest Trove of Gold Built by Ancient 'Secret Agents'
The source of Earth’s biggest trove of gold may have been found. One scientist now points to a trio of agents working in concert: volcanic activity, ancient microbes and an oxygen-depleted atmosphere.
The new theory may explain why there’s a string of gold beds in the Witwatersrand basin, near Johannesburg, South Africa, that collectively make up 40 percent of all of the gold that has ever been, or ever will be, dug out of the ground, said study author Christoph Heinrich, a geologist at the ETH Zurich in Switzerland.
“The single biggest gold deposit in that string of deposits is still like three times bigger than the next biggest single gold deposit,” called the Muruntau gold deposit, in the desert of Uzbekistan, Heinrich told Live Science.
Gold in those hills
Gold is a rare element in the universe that forms only in the hearts of violent star explosions called supernovae. The precious metal has been part of Earth since its birth 4.6 billion years ago, and while most of the Earth’s gold is locked deep within the planet’s core, the rest is largely dispersed throughout rocks at incredibly tiny concentrations of about one part gold per billion, Heinrich said.
But occasionally, a physical phenomenon causes the gold to become enriched in certain layers of rock. In the case of the Witwatersrand formation, up to 1 percent of the carbon-rich layers is made up of gold, Heinrich said.
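Taken together, those two figures imply an enormous enrichment factor, and the arithmetic is easy to check using the article's rounded numbers (one part per billion in ordinary rock, one percent in the richest carbon layers):

background = 1e-9   # roughly one part gold per billion in ordinary rock
ore_layer = 0.01    # up to 1 percent gold in the carbon-rich Witwatersrand layers
print(ore_layer / background)   # enrichment factor of about 1e7, i.e. ten million times

In other words, whatever process concentrated the gold had to boost its local abundance by roughly ten million times.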
Exactly how the gold deposit formed has been a mystery. Scientists originally thought that gold particles were mechanically deposited in the gravel of mountain streambeds, as is the case in California’s Sierra Nevada mountains. But without a massive mountain range with lots of gold near Witwatersrand, this mechanism seemed to be an unlikely culprit for such a huge deposit.
As an alternative, Heinrich proposed that a set of circumstances collided to form the deposit sometime between 2.9 billion and 2.7 billion years ago. First, massive lava flows — similar to the Deccan Trap eruptions that coincided with the dinosaurs’ extinction — belched sulfurous gas. The sulfur formed acid rain that ate at gold-containing rocks, sending bits of the rocks (and the gold) into the waterways. Without oxygen in the air, this rainwater sulfur didn’t immediately bind to oxygen and become hydrogen sulfate, but instead formed a compound called hydrogen sulfide, which entered the rivers and streams. Hydrogen sulfide bound the gold and changed the water’s ability to hold large amounts of gold, Heinrich said.
“From those conditions, the gold becomes quite soluble — you can actually dissolve it like salt and sugar in tea,” Heinrich told Live Science.
This gold-laden water then crossed over beds of Archaea or primitive microbes. These microbial mats may have been living or dead at the time, but either way, they formed a thick layer of carbon. The chemical reaction between the carbon and the water solution caused the gold to settle out, creating the thin layers of gold interspersed with the carbon.
Controversial idea
But not everyone agrees with Heinrich’s explanation. One researcher says volcanoes are unlikely to have played a role in the formation of the treasure trove of gold.
“I like the idea that the gold was precipitated, and I like the idea that the atmosphere was reducing,” or depleted of oxygen, said Nic Beukes, a geologist at the University of Johannesburg who was not involved in the study. However, Beukes is less convinced that volcanic activity on land played a role in the gold deposit’s formation, or that gold was carried in ancient rivers and lakes. About 100 million years separates most of the regional volcanic activity and the gold deposition, Beukes said.
In addition, newer evidence suggests the gold was deposited along a waterlogged shoreline, he said. But sulfur-laden rainwater would have been highly diluted if it fell into the ocean, rather than rivers and streams, meaning there wouldn’t have been enough sulfur in the water to make gold soluble, Beukes told Live Science.
However, the gold could have settled out in a seawater lagoon after being carried there by rivers and streams, as long as the river water was not immediately diluted in the open ocean, Heinrich said.
Read more at Discovery News
Feb 4, 2015
Tropical Wasp Guards Nest Using Facial Recognition
If you're a member of a wasp species in the tropical forests of Southeast Asia, you'd better hope your colony-mates can recognize you on sight, or you risk being punched in the mouth.
That's what scientists from Queen Mary, University of London (QMUL) found when they took a look at how the wasp species Liostenogaster flavolineata weighs facial recognition vs. odor cues when trying to tell friend from foe.
It's not uncommon for the wasp to need to figure that out. Hundreds of its nests, each housing families of related wasps, can be clustered together in what can look like a wasp city. Intruders, bent on taking resources, show up often.
Most insects know how to identify fellow colony members by a scent that's specific to that colony. But the new study documents wasps using a delicate balance between visual identification and odor to make the assessment.
The scientists studied 50 colonies of L. flavolineata in Peninsular Malaysia. To test resident wasps for visual-only recognition, they used nest mates (proper colony members) and alien wasps (from outside the colony "neighborhood") that were essentially washed clean of their chemical scent.
To test the resident wasps for their reaction to scent alone, the researchers used filter paper drenched in the extracted scent of either nest mates or rogue alien wasps.
The researchers found that when the resident wasps in a colony had to rely on scent information alone, they were more inclined to mistake an enemy for a friend. Plus one for intruders, but not so good for the colony. On the other hand, when the wasps had only facial recognition to go on, they were more likely to attack friendly colony-mates by accident. Hmm. Not ideal either.
The wasp's solution? Strike a balance between sight and scent, but place a higher priority on visual cues and attack anyone you think doesn't look right.
The upside of the visual, "hit first and ask questions later" approach, the scientists found, is that the attacking wasp will realize it is mistaken and abort the battle before seriously injuring its colony-mate.
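One way to see why an attack-first, check-later rule can pay off is a toy expected-cost comparison. Every number below is an assumption invented for illustration, not data from the study; the point is only that when a mistaken attack on a nest mate is cheap (because it is quickly aborted) while admitting an intruder is costly, a cue that rarely misses intruders wins even if it triggers more false alarms.

p_intruder = 0.3            # assumed chance that an arriving wasp is an intruder
COST_ADMIT_ENEMY = 10.0     # assumed cost of letting an intruder onto the nest
COST_ATTACK_FRIEND = 1.0    # assumed (low) cost of a quickly aborted attack on a nest mate

def expected_cost(p_miss_enemy, p_attack_friend):
    """Expected cost per arriving wasp, for a given pair of error rates."""
    return (p_intruder * p_miss_enemy * COST_ADMIT_ENEMY
            + (1 - p_intruder) * p_attack_friend * COST_ATTACK_FRIEND)

print(expected_cost(0.4, 0.1))  # scent-biased rule: misses more enemies, expected cost 1.27
print(expected_cost(0.1, 0.4))  # vision-biased rule: more false alarms, but they are cheap, cost 0.58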
"These wasps can use both face recognition and scent to determine whether another wasp is friend or foe," explained one of the paper's co-authors, Dr. David Baracchi, research fellow at QMUL. "Unfortunately, neither sight nor smell is infallible so they appear to not take any chances and attack anyone whose face they don't recognize."
While the idea that wasps can be pretty good at recognizing the faces of other wasps was known, the study is the first to show that they use facial recognition as the determining factor when deciding whether or not to attack what they think is an intruder.
Read more at Discovery News
Taj Mahal Gardens Found to Align with the Solstice Sun
If you arrived at the Taj Mahal in India before the sun rises on the day of the summer solstice (which usually occurs June 21), and walked up to the north-central portion of the garden where two pathways intersect with the waterway, and if you could step into that waterway and turn your gaze toward a pavilion to the northeast — you would see the sun rise directly over it.
If you could stay in that spot, in the waterway, for the entire day, the sun would appear to move behind you and then set in alignment with another pavilion, to the northwest. The mausoleum and minarets of the Taj Mahal are located between those two pavilions, and the rising and setting sun would appear to frame them.
Although standing in the waterway is impractical (and not allowed), the dawn and dusk would be sights to behold, and these alignments are just two among several that a physics researcher recently discovered between the solstice sun and the waterways, pavilions and pathways in the gardens of the Taj Mahal.
The Taj Mahal is a mausoleum built by Mughal Dynasty emperor Shah Jahan (who lived from 1592 to 1666) for his favorite wife Mumtaz Mahal (who lived 1592-1631). Her name meant "the Chosen one of the Palace."
The summer solstice has more hours of daylight than any other day of the year, and is when the sun appears at its highest point in the sky. The winter solstice (which usually occurs Dec. 21) is the shortest day of the year, and is when the sun appears at its lowest point in the sky.
Amelia Carolina Sparavigna, a physics professor at the Polytechnic University of Turin in Italy, reported the alignments in an article published recently in the journal Philica.
Gardens of Eden
The Mughal dynasty built the gardens in the "charbagh" style, a system developed in Persia that involves dividing a garden into four sections, Sparavigna noted in her article.
"It is well known that the Mughal gardens were created with the symbolic meaning of Gardens of Eden, with the four main canals flowing from a central spring to the four corners of the world," she wrote. Her research shows that solstice alignments can be found not only in the Taj Mahal gardens, but also in gardens built through time by different Mughal emperors.
Although the alignments at the Taj Mahal likely had symbolic meanings, it's also possible that the architects of the structure used the solstice sun to help build the Taj Mahal, which is precisely oriented along a north-south axis.
"In fact, architects have six main directions: two are joining cardinal points (north-south, east-west) and four are those given by sunrise and sunset on summer and winter solstices,"Sparavignawrote in her paper.
Sparavigna told Live Science in an email that the alignments seen at the Taj Mahal, compared with solar alignments seen at other gardens, are particularly precise. In "the case of Taj Mahal, these gardens, which are huge, are perfect."
New technologies
Sparavigna made the discoveries by using an app called Sun Calc, which uses Google Earth satellite imagery to help calculate the direction at which the sun rises and sets on a given day and location.
Over the past decade the availability of free, high-resolution Google Earth imagery, combined with the development of apps like Sun Calc and Sollumis, has made it easier for researchers to discover and study solar alignments at historical sites.
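For readers who want to check the geometry themselves, the core of what an app like Sun Calc reports can be approximated in a few lines. The sketch below assumes a latitude of about 27.17 degrees north for the Taj Mahal and the standard solstice declination of 23.44 degrees, and it ignores atmospheric refraction and the raised local horizon, so the real sunrise direction will differ by a degree or so.

import math

lat = math.radians(27.17)    # approximate latitude of the Taj Mahal (an assumption)
decl = math.radians(23.44)   # solar declination at the June solstice
# Azimuth of sunrise measured east of true north, for a flat horizon and no refraction:
azimuth = math.degrees(math.acos(math.sin(decl) / math.cos(lat)))
print(round(azimuth, 1))     # about 63 degrees, i.e. well north of due east

Sunset on the same day mirrors that angle to the west of north, which is consistent with the sun rising over the northeastern pavilion and setting behind the northwestern one.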
Read more at Discovery News
2 Billion Years On, This Bacteria Remains Unchanged
Wedged inside rocks in the deep sea off the coast of Western Australia lurks an organism that hasn’t evolved in more than 2 billion years, scientists say.
From this deep-sea location, a team of researchers collected fossilized sulfur bacteria that were 1.8 billion years old and compared them to bacteria that lived in the same region 2.3 billion years ago. Both sets of microbes were indistinguishable from modern sulfur bacteria found off the coast of Chile.
But do the findings contradict Darwin’s theory of evolution?
“It seems astounding that life has not evolved for more than 2 billion years — nearly half the history of the Earth,” the study’s leader, J. William Schopf, a paleobiologist at UCLA, said in a statement. “Given that evolution is a fact, this lack of evolution needs to be explained.”
Darwin’s theory of evolution by natural selection states that all species develop from heritable genetic changes that make an individual better able to survive in its environment and reproduce.
So how can Darwin’s theory account for these apparently nonchanging bacteria? The answer comes in looking at the bacteria’s similarly stable surroundings. True, the deep-sea bacteria in this study haven’t changed for eons, but neither has their environment, Schopf said. Darwin’s theory doesn’t call for organisms to evolve unless their environment changes, so the microbes’ lack of evolution is consistent with the theory, Schopf added.
To compare the fossils, Schopf and his colleagues used a method known as Raman spectroscopy to measure the composition and chemistry of the rocks. Then, using confocal laser scanning microscopy, they produced 3D images of the fossils and compared these visualizations with the modern bacteria. The ancient microbes looked identical to the present-day ones, the team found.
The fossils studied date back to a period known as the Great Oxidation Event, which occurred when oxygen levels surged on Earth between 2.2 billion and 2.4 billion years ago. During this time, there was also a large rise in sulfate and nitrate levels, which provided all the nutrition the sulfur bacteria needed to survive and reproduce. The environment inside these deep-sea rocks hasn’t changed since then, so there has been no need for the organisms to adapt, the researchers said.
The findings were published yesterday (Feb. 2) in the journal Proceedings of the National Academy of Sciences.
Read more at Discovery News
Fantastically Wrong: The Weird, Kinda Perverted History of the Unicorn
This woman is a virgin. How do I know? Because unicorns ain’t got time for no non-virgins.
But a week later, The Guardian ran a second article with a frank admission. “There is only one problem with the story,” they wrote. “It isn’t exactly true.” North Korea’s claim was in fact referring to the kirin, a mythical creature not too dissimilar from the unicorn, with the same hoofed, quadruped look but sometimes having two horns instead of one. But eccentricities of the North Korean regime aside, why is it that the unicorn, or a unicorn-like beast, pervades both Asian and European cultures?
On the Horn
If you’re looking to figure out how an ancient myth started to get out of hand, a good place to start is with the great Roman naturalist Pliny the Elder, whose epic encyclopedia Natural History stood largely as fact for some 1,600 years. Problem was, Pliny wasn’t the most incredulous of writers, and crammed his encyclopedia with pretty much any account he could get his hands on.
A father and son pose with a kirin statue at the Summer Palace in Beijing, China.
The unicorn then shows up in various places in the Bible, at least according to some translations (it’s sometimes instead referred to as the oryx, a kind of antelope whose antlers were indeed sold as unicorn horns in medieval times, or as the aurochs, a massive type of cattle that went extinct in the 17th century). Here, its fierceness is affirmed. In Numbers 24:8, for instance: “God brought him forth out of Egypt; he hath as it were the strength of an unicorn: he shall eat up the nations his enemies, and shall break their bones, and pierce them through with his arrows.”
In the 7th century, the scholar Isidore of Seville chimed in, noting that the unicorn “is very strong and pierces anything it attacks. It fights with elephants and kills them by wounding them in the belly.” He also helped popularize the myth that would serve as a hallmark in European folklore for centuries to come: Catching a unicorn is impossible…unless you have access to a virgin woman. “The unicorn is too strong to be caught by hunters,” he writes, “except by a trick: If a virgin girl is placed in front of a unicorn and she bares her breast to it, all of its fierceness will cease and it will lay its head on her bosom, and thus quieted is easily caught.” It’ll suckle until it’s lulled to sleep. So…yeah.
This is one alarm that the unicorn shan’t be sleeping through.
Thus the unicorn became firmly implanted in European lore. What followed was a full-blown mania for their horns, which were said to detect poison if you stirred them around in your food or drink. They went for tens of thousands of dollars in today’s money, and were particularly popular among paranoid royalty. More industrious users who didn’t want to wait around to have their food poisoned would grind up the horns—usually those of the oryx or narwhal (whose horn is actually a giant tooth)—to gain immunity from toxins.
Over in the East, royalty had a rather more complicated relationship with their version of the unicorn, the aforementioned kirin, or qilin. Its appearance was said to foretell the birth of a royal baby, which is nice of it, but can also predict an imminent death, which is not so nice. In the 15th century, a giraffe was brought to China for the first time and presented to the emperor as a kirin, which was a gutsy move considering its proclivities for letting royalty know they’re going to die soon. The emperor, though, dismissed it as a fraud and went on to live another 10 years.
This is what it looks like when an artist draws a rhino without ever having seen one with his own eyes…while maybe huffing a bit of glue.
The myth of the unicorn may have come from sightings of antelope and similar ungulates with only one horn, having either been born with the defect or lost a horn while scrapping with a predator or one of its own kind. A less likely possibility is a normal antelope seen from afar in profile, an illusion that would only last as long as the animal didn’t move.
A far more likely culprit is the Indian rhinoceros, and clues for this are sprinkled throughout the early accounts—indeed, the unicorn is sometimes referred to as the Indian ass. Pliny, for instance, mentions that the unicorn has “the feet of an elephant,” a rhino’s feet in fact being not hooved like a horse’s, but fleshy like an elephant’s. He also notes that it has “the tail of a boar,” much like a rhino’s, “and a single black horn three feet long in the middle of its forehead.” Writers would only later describe the horn as white.
The ancient Greeks and Romans, you see, had been making forays into India and bringing back tales of the strange beasts there, and the facts tended to get a bit…lost. Cotton, for instance, was said to grow in India as an actual lamb that sprouted from the ground, just hanging there patiently producing cotton. And while Pliny actually did a pretty good job of describing the rhino, his popularization of the “unicorn” picked up more and more improbabilities as the centuries wore on. We also know that the ancient Chinese had contact with rhinos from art made out of their horns, so the animal could well have also inspired the kirin.
Read more at Wired Science
Feb 3, 2015
Migrating Birds Take Turns Leading the Flock
Flying in a V-formation is toughest for the leader, and migrating birds compensate by taking turns so that no one gets exhausted, international researchers said Monday.
The authors of the study in the Proceedings of the National Academy of Sciences, a US peer-reviewed journal, described the discovery as the "first convincing evidence for 'turn taking' reciprocal cooperative behavior in birds."
The research is based on 14 northern bald ibis (Geronticus eremita) that migrate from Salzburg, Austria, to Orbetello, Italy.
Each bird wore a data-logging device that allowed scientists to track how individuals acted within the flying V-formation.
Researchers found that the "birds changed position frequently within the flock."
"Overall, individuals spent an average of 32 percent of their time benefiting by flying in the updraft produced by another bird's flapping wings and a proportional amount of time leading a formation," the study said.
Scientists believe this high level of cooperation evolved as a survival necessity.
Since migration is risky -- some research has found that more than one third of young birds die of exhaustion on their first trip -- those that learn to fly in formation and change positions regularly can save energy, getting a bit of a free ride from flying in the updraft of other birds.
"Our study shows that the building blocks of reciprocal cooperative behavior can be very simple: ibis often travel in pairs, with one bird leading and a 'wingman' benefiting by following in the leader's updraft," said lead author Bernhard Voelkl of Oxford University's Department of Zoology.
Read more at Discovery News
Isaac Newton Notes Explain How Water Rises in Plants
Sir Isaac Newton's interest in botany extended well beyond the fabled apple falling from a tree -- he also appears to have understood how water moves from roots to leaves over 200 years before botanists did.
Newton, who lived from 1643 to 1727, is known for his observations on the properties of light, the invention of calculus, and for his time as head of the Royal Mint and president of the Royal Society, the leading scientific body of the day.
But buried in a notebook Newton used during his undergraduate days is half a page of text on plant function, which has been reviewed in the journal Nature Plants by Professor David Beerling of the University of Sheffield.
In his notes, Newton describes how water is drawn up through a plant's roots and out through its pores. Newton wrote:
"Suppose a b the pore of a Vegitable filled with fluid mater & that the Globule c doth hitt away the particle b, then the rest of subtile matter in the pores riseth from a towards b & by this meanes juices continually arise up from the roots of trees upward leaving dreggs in the pores & then wanting passage stretch the pores to make them as wide as before they were clogged. which makes the plant bigger untill the pores are too narow for the juice to arise through the pores & then the plant ceaseth to grow any more."
Eureka moment
"In modern terms what's apparently being described is the evaporative escape of water from a shoot -- transpiration -- driven by energy from the sun," writes Beerling.
First proposed in 1895, transpiration involves the absorption of water through a plant's roots, its transportation up through the stem, and eventual evaporation into the atmosphere from the plant's surface, usually through leaf pores or stomata.
The system works because the loss of water through evaporation causes an area of low pressure within the plant, which creates tension.
The cohesiveness of water molecules transfers this tension down through the xylem, effectively pulling water from a higher pressure area at the base of the plant, against the opposing force of Newton's gravity, up to the leaf surface.
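For a rough sense of the forces involved, the pressure difference needed just to support a static water column of height h is rho * g * h. The sketch below uses a hypothetical 30-metre tree and ignores friction and the much larger tensions real xylem can sustain.

rho = 1000.0   # density of water, kg per cubic metre
g = 9.81       # gravitational acceleration, m/s^2
h = 30.0       # hypothetical tree height in metres (an assumption for illustration)

delta_p = rho * g * h           # pressure needed to hold up the column, in pascals
print(round(delta_p / 1e5, 1))  # about 2.9 bar, before any resistance to flow is added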
Beerling thinks Newton's pores were probably water-conducting tissues inside stems, rather than stomata.
He bases this on the fact that the earliest known published observations of pore-like structures on leaves appeared in 1675 in Anatome plantarum by Marcello Malpighi, who saw stomata through an early microscope.
However, Beerling writes, "presumably, Newton too could have constructed some sort of hand lens to observe stomata had he been inclined, given that he built the first reflecting telescope a few years later in 1668."
Read more at Discovery News
Accelerated Ice Melt Causing Iceland to Rise
Parts of Iceland are rising, and the culprit may be climate change.
GPS measurements show that land in the central and southern parts of Iceland has been rising at a faster pace every year, beginning at about the same time as the onset of the ever-increasing melt of the island’s eponymous ice due to rising temperatures, a new study finds.
“There have been a lot of studies that have shown that the uplift in Iceland is primarily due to ice loss,” study lead author Kathleen Compton, a PhD student at the University of Arizona, said. But this one, detailed in the journal Geophysical Research Letters, is the first to show that the acceleration of one speeds up the other.
That uplift could in turn affect Iceland’s notorious volcanoes and hasten eruptions, which can disrupt air travel, as was clearly seen in the 2010 eruption of Eyjafjallajökull, which disrupted air traffic for weeks.
When Ice Melts, Land Rises
Any big chunk of ice like a glacier or an ice sheet pushes down on the land below it, like a person lying on a memory-foam mattress. (This is why parts of the bedrock of Antarctica are actually below sea level.) When that ice is removed, the land slowly rebounds, just as the mattress slowly fills in when the person gets up.
Parts of North America are still rebounding after the retreat of the ice sheets that covered the region during the last major ice age thousands of years ago. But under Iceland, the mantle — the semi-solid layer of the Earth below the crust — is a bit goopier, and so responds more quickly to changes in the weight pressing down on it.
What the new study shows is that parts of Iceland are now rising much faster than they did in response to prehistoric ice loss — as much as 30 millimeters (about 1.2 inches) a year. This faster response is because recent ice loss, which is ultimately triggered by atmospheric warming due to the buildup of greenhouse gases, is happening at a faster pace.
“I'm not surprised at the amount of uplift, as the visible signs of ice loss are there for everyone to see,” David McGarvie, a volcanologist with The Open University in Scotland, who has studied Iceland, said.
And not only is the uplift faster, it is accelerating: the rate of rise is increasing by 1 to 2 millimeters per year with each passing year, Compton and her colleagues found.
GPS Signals
The researchers first noticed this unexpected increase in the uplift when they looked at the data coming from one of the GPS stations in a network of 62 such stations across the island. In looking at other stations in that network, they found the same trend, with the fastest accelerating uplifts closest to the biggest glaciers.
In analyzing the records, the team found that the areas with accelerated uplift coincided with the higher rates of glacier melt that glaciologists have separately documented since 1995.
“The upward velocity of the crust as measured by the researchers is almost certainly due to recent loss of ice mass due to melting,” McGarvie said.
As further corroboration, temperature records going back to the 1800s have shown steadily rising air temperatures since 1980. Both trends line up with Compton’s calculations of when Iceland’s recent uplift began.
“It’s always nice to see something come together this well,” she said.
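Separating a steady uplift rate from an acceleration in records like these is essentially a curve-fitting exercise. The sketch below generates a synthetic vertical time series (a 10 mm per year rate and a 1.5 mm per year-squared acceleration, both invented rather than taken from the study) and recovers the two numbers with a quadratic fit.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.1)                                     # 20 years of synthetic position estimates
uplift = 10.0 * t + 0.5 * 1.5 * t**2 + rng.normal(0, 5, t.size)   # vertical position in mm, with noise
a2, a1, a0 = np.polyfit(t, uplift, 2)                             # fit uplift = a2*t^2 + a1*t + a0
print(round(a1, 1), round(2 * a2, 2))                             # recovered rate (mm/yr) and acceleration (mm/yr^2)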
Potential Side Effects
Compton also ran calculations showing that the only way to produce the observed faster and faster uplift would be a speeding up of the ice melt itself.
McGarvie isn’t completely convinced that the uplift is accelerating, because the data record is relatively short (only a few decades), but he says the idea is well worth continued study.
Whether similar uplift could be happening in other places with significant ice loss is difficult to assess, because in many of those places the land responds more slowly to pressure changes and the GPS networks are less extensive than Iceland’s.
Read more at Discovery News
What’s Up With That: Why Do Cats Love Boxes So Much?
Paisley in a box.
So what are we to make of the strange gravitational pull that empty Amazon packaging exerts on Felis catus? As with many of the other really weird things cats do, science hasn’t fully cracked this particular feline mystery. There’s the obvious predation advantage a box affords: Cats are ambush predators, and boxes provide great hiding places to stalk prey from (and retreat to). But there’s clearly more going on here.
Thankfully, behavioral biologists and veterinarians have come up with a few other interesting explanations. In fact, when you look at all the evidence together, it may be that your cat doesn’t just like boxes; he may need them.
The box-and-whisker plot
Understanding the feline mind is notoriously difficult. Cats, after all, tend not to be the easiest test subjects. Still, there’s a sizable amount of behavioral research on cats who are, well, used for other kinds of research (i.e., lab cats). These studies—many of which focused on environmental enrichment—have been taking place for more than 50 years and they make one thing abundantly clear: Your fuzzy companion derives comfort and security from enclosed spaces.
Cotton in a box.
Veterinarian Claudia Vinke of Utrecht University in the Netherlands is one of the latest researchers to study stress levels in shelter cats. Working with domestic cats in a Dutch animal shelter, Vinke provided hiding boxes for a group of newly arrived cats while depriving another group of them entirely. She found a significant difference in stress levels between cats that had the boxes and those that didn’t. In effect, the cats with boxes got used to their new surroundings faster, were far less stressed early on, and were more interested in interacting with humans.
It makes sense when you consider that the first reaction of nearly all cats to a stressful situation is to withdraw and hide. “Hiding is a behavioral strategy of the species to cope with environmental changes and stressors,” Vinke said in an email.
This is as true for cats in the wild as it is for those in your home. Only instead of retreating to tree tops, dens, or caves, yours may find comfort in a shoe box.
Box (anti-)social
It’s also important to note that cats really suck at conflict resolution. To quote from The Domestic Cat: The Biology of its Behaviour, “Cats do not appear to develop conflict resolution strategies to the extent that more gregarious species do, so they may attempt to circumvent agonistic encounters by avoiding others or decreasing their activity.”
So rather than work things out, cats are more inclined to simply run away from their problems or avoid them altogether. A box, in this sense, can often represent a safe zone, a place where sources of anxiety, hostility, and unwanted attention simply disappear.
Of course the problem with these explanations is that they make box attraction seem like a symptom of maladjusted, stressed-out cats. I don’t know about you, but to me, Maru, the famously box-loving cat of YouTube renown, does not appear to be suffering from high levels of stress.
Astute feline observers will note that in addition to boxes, many cats seem to pick other odd places to relax. Some curl up in a bathroom sink. Others prefer shoes, bowls, shopping bags, coffee mugs, empty egg cartons, and other small, confined spaces.
Which brings us to the other reason your cat may like particularly small boxes (and other seemingly uncomfortable places): It’s friggin’ cold out.
According to a 2006 study by the National Research Council, the thermoneutral zone for a domestic cat is 86 to 97 degrees Fahrenheit. That’s the range of temperatures in which cats are “comfortable” and don’t have to generate extra heat to keep warm or expend metabolic energy on cooling. That range also happens to be 20 degrees higher than ours, which explains why it’s not unusual to see your neighbor’s cat sprawled out on the hot asphalt in the middle of a summer day, soaking in the sunlight.
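For metric-minded readers, the usual conversion, C = (F - 32) * 5/9, puts that feline comfort zone at roughly 30 to 36 degrees Celsius:

def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32.0) * 5.0 / 9.0

print(round(f_to_c(86), 1), round(f_to_c(97), 1))   # about 30.0 and 36.1 degrees Celsius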
Read more at Wired Science
Feb 2, 2015
New Owl Species Arises from Case of Mistaken Identity
Once mistaken for another species of owl, the golden-eyed "desert tawny owl" is now finally getting its due.
In a new report, researchers examined the plumage and body shape of owl specimens from museums around the world that had previously been thought to be members of a species called Hume's owl. The researchers also analyzed the owls' mitochondrial DNA, and found it was about 10 percent different from that of the Hume's owl, which is properly known as species Strix butleri.
"We reinvestigated it using all techniques available to us, and realized — especially based on the fact that there were massive genetic differences between Hume's type and specimens from elsewhere — that it was pretty obvious that there were two species involved," said Guy Kirwan, a research associate at the Field Museum of Natural History in Chicago and co-author of the new report.
The researchers named the new owl Strix hadorami, after the project's brainchild, renowned ornithologist Hadoram Shirihai.
The groundwork for the mix-up began in 1878, when Allan Hume, a famous British ornithologist who was working in India, received an owl specimen from an acquaintance named Edward Butler. That owl, which came to be known as Hume's owl (S. butleri), was likely found in present-day Pakistan.
But coincidentally, another ornithologist of the time, named Henry Tristram, had already collected an owl with similar markings.
"Hume had named his bird, and Tristram thought [his owl] was the same thing," Kirwan said.
But then, more than a century later, a new look at the owls shook things up.
The renowned ornithologist Hadoram Shirihai visited the Natural History Museum in Tring, England, while working on a book in 1985. Shirihai noticed some of the Hume's owl specimens in England looked different from each other, and also different from the Hume's owls he had seen in Israeli museums and in the wild.
Shirihai planned to rename the Israeli owls, but other commitments prevented him from ever doing so, Kirwan said. In the new paper, the researchers took a more detailed approach to looking at the owl's differences than Shirihai had taken in making his observations.
Curiously, another group of ornithologists recently named another new species of owl in the Middle East, in a 2013 study in the journal Dutch Birding. Those researchers reported observing an owl that looked similar to S. butleri, but had a different vocalization. They named this owl Strix omanensis after Oman, the country in which they had seen the animal (the researchers chose not to capture a specimen).
In fact, they may have just mistakenly renamed Hume's owl, Kirwan said.
"Everyone now accepts that Hume's owl contains two species," he said. "The Dutch team provided a very important building block in the process of discovering that there were two species, but made a mistake, in our view, in which one needed to be named."
He added, "We believe that we have now named the right new species."
Read more at Discovery News
Lonely Ants Die Young and Hungry
What happens when ants get lonely? They're unable to digest their food properly and walk themselves to an early death, a study has found.
The findings may provide an insight into the negative impact of isolation on a range of social animals, even humans, say scientists.
The study tracked the behaviour of a species of carpenter ant, Camponotus fellah.
Lab colonies of worker ants were studied under four scenarios -- single ants, groups of two, groups of ten, and single workers with three to four medium-sized larvae.
The isolated ants lived just six days, whereas group-living ants lived up to 66 days, the scientists report in the journal Behavioral Ecology and Sociobiology.
The scientists say their results show that ants simply don't know how to behave when alone.
"Isolated ants exhibited a much higher activity after social isolation, continuously walking without any rest," says study co-author Dr Laurent Keller, an entomologist from the University of Lausanne.
This behaviour is a recipe for trouble as the ants don't get enough energy to back it up, explains study co-author Dr Koto Akiko of the University of Tokyo.
"Because of this hyperactivity, isolated ants faced an increased energy demand", says Akiko.
"Isolated ants ingest as much food as their grouped nest mates, but the food is not processed fully by the digestive tract", he adds.
Under normal conditions, carpenter ants forage in the field and store the food they collect in a specialised internal structure called the crop. Food stored there is not consumed immediately, but is instead carried back to the nest for sharing as well as for the ant's own consumption.
"[Isolated ants] retained food in the crop instead of digesting it which resulted in an imbalance of energy income and expenditure", says Keller.
The scientists say the findings indicate that food intake alone is not enough, suggesting that social interactions are a key requirement for the ants' digestive system to function properly, allowing food to move beyond the crop and into the stomach.
But further research is needed to understand how the social environment affects food digestion, says Keller.
One hypothesis proposed by Keller's team is that the sharing of regurgitated food, a process called trophallaxis, may be a way for morsels of food to become more digestible.
Alternatively, it may be that social interaction affects some neural pathways that promote gastrointestinal activity, notes Keller.
But Dr Ken Cheng, a behavioural biologist at Macquarie University, has another explanation in mind, one involving gut microbes.
"It would not surprise me if gut bacteria, which would be passed around with the exchange of food, played a role as well in the adverse effects of isolation," says Cheng, who was not involved in the research.
While the current research focused on a humble little ant, the findings offer valuable insights into how social interaction affects health, and may open the way for future research on other species, says Akiko.
"Evidence is accumulating that social isolation increases the risk of many health problems, including mental disorders like depression and also physiological disorders like cardiovascular or cerebrovascular disease," he says.
Read more at Discovery News
650-Year Drought Triggered Ancient City's Abandonment
A once-thriving Mesoamerican metropolis dried up about 1,000 years ago when below-average rainfall triggered centuries-long droughts that largely prompted people to abandon the city for greener opportunities, a new study finds.
Scientists have long debated whether it was drought or cultural forces that led to the abandonment of Cantona, a once-fortified city located just east of modern-day Mexico City. Few details were known about its past climate, which prompted researchers to take a closer look at the weather conditions that affected the pre-Columbian city in Mesoamerica.
In its heyday, about 90,000 people lived in Cantona, which is located in a dry volcanic basin. The area provided vast amounts of valuable obsidian, volcanic glass used for trade and making sharp tools for hunting and farming. But people deserted the city between A.D. 900 and A.D. 1050, research shows.
To investigate why, geographers assessed the climate before and after Cantona's collapse. They took sediment cores and samples from Aljojuca, a lake about 20 miles (32 kilometers) from the city.
As a closed lake basin, Aljojuca enabled the scientists to track the past climate in the region. The researchers examined the ratio of different oxygen isotopes, or variants of the element, in the water, which reflects how much precipitation and evaporation had taken place at the lake. The isotope ratio was high, indicating the area had experienced drier summers, the scientists said. Analyses of other compounds in the sediment samples yielded similar results.
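For readers curious about the arithmetic behind such measurements, here is a minimal sketch of how a delta-18O value is conventionally computed from a measured 18O-to-16O ratio. The ratios used below are hypothetical placeholders, not values from the study; they simply illustrate that evaporation enriches a lake in the heavier isotope and pushes the delta value upward.

```python
# Illustrative delta-18O calculation (hypothetical numbers, not data from the study).
# Delta values express how far a sample's 18O/16O ratio deviates from a reference
# standard, in parts per thousand (per mil).

VSMOW_RATIO = 0.0020052  # 18O/16O ratio of the VSMOW reference standard

def delta_o18(sample_ratio, standard_ratio=VSMOW_RATIO):
    """Return the delta-18O value (per mil) for a measured 18O/16O ratio."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Evaporation preferentially removes the lighter 16O, so a lake losing more water
# to evaporation than it gains from rain drifts toward higher delta-18O values.
wet_period = delta_o18(0.0019972)  # hypothetical ratio, roughly -4 per mil
dry_period = delta_o18(0.0020092)  # hypothetical ratio, roughly +2 per mil
print(f"wet-period delta-18O: {wet_period:.1f} per mil")
print(f"dry-period delta-18O: {dry_period:.1f} per mil")
```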
Overall, Cantona still had wet summers and dry winters, but its regular monsoon season was disturbed by frequent long-term droughts, which likely harmed the area's crops and water supply, the researchers said. Moreover, the droughts lasted hundreds of years.
A 650-year period of frequent droughts plagued the area from about A.D. 500 to about A.D. 1150, they found. This dry period wasn't isolated, but part of a period of droughts in modern-day Mexico's highlands that lasted from about 200 B.C. until A.D. 1300, just before the Aztec empire took power.
"The decline of Cantonaoccurred during this dry interval, and we conclude that climate change probably played a role, at least towards the end of the city's existence," lead researcher Tripti Bhattacharya, a graduate student of geography at the University of California, Berkeley, said in a statement.
Read more at Discovery News
Bronze Statues May Be Last Remaining by Michelangelo
Two sculptures that languished in obscurity for more than a century may be the only surviving bronze works by Michelangelo, researchers announced in Britain on Monday.
The international research team led by Britain's University of Cambridge and the Fitzwilliam Museum uncovered new evidence linking the two nude works to Michelangelo, whose famed works include the painted ceiling of the Sistine Chapel.
Standing a meter (3.3 feet) tall, the sculptures depict a young man and an older man riding panthers, and, if the attribution is confirmed, the discovery would make them the only surviving Michelangelo bronzes in the world.
"It has been fantastically exciting to have been able to participate in this ground-breaking project," said Victoria Avery, Keeper of the Applied Arts Department of the Fitzwilliam Museum.
"The bronzes are exceptionally powerful and compelling works of art that deserve close-up study we hope the public will come and examine them for themselves, and engage with this ongoing debate."
The pieces were attributed to Michelangelo when they were first recorded in the 19th century, but that attribution was dismissed over the last 120 years because the works were undocumented and unsigned.
However, last autumn University of Cambridge Emeritus Professor of Art History Paul Joannides made a discovery that overturned this thinking.
Joannides found a drawing of a muscular youth riding a panther in a student's copy of lost sketches by Michelangelo, indicating that the artist was planning the unusual design for a sculpture.
Further study of the bronzes found them to be very similar in style and anatomy to Michelangelo's works of 1500 to 1510, the period in which scientific analysis indicates the statues were made.
Read more at Discovery News
Feb 1, 2015
Meteorite may represent 'bulk background' of Mars' battered crust
NWA 7034, a meteorite found a few years ago in the Moroccan desert, is like no other rock ever found on Earth. It's been shown to be a 4.4 billion-year-old chunk of the Martian crust, and according to a new analysis, rocks just like it may cover vast swaths of Mars.
In a new paper, scientists report that spectroscopic measurements of the meteorite are a spot-on match with orbital measurements of the Martian dark plains, areas where the planet's coating of red dust is thin and the rocks beneath are exposed. The findings suggest that the meteorite, nicknamed Black Beauty, is representative of the "bulk background" of rocks on the Martian surface, says Kevin Cannon, a Brown University graduate student and lead author of the new paper.
The research, co-authored by Jack Mustard from Brown and Carl Agee from the University of New Mexico, is in press in the journal Icarus.
When scientists started analyzing Black Beauty in 2011, they knew they had something special. Its chemical makeup confirmed that it was a castaway from Mars, but it was unlike any Martian meteorite ever found. Before Black Beauty, all the Martian rocks found on Earth were classified as SNC meteorites (shergottites, nakhlites, or chassignites). They're mainly igneous rocks made of cooled volcanic material. But Black Beauty is a breccia, a mashup of different rock types welded together in a basaltic matrix. It contains sedimentary components that match the chemical makeup of rocks analyzed by the Mars rovers. Scientists concluded that it is a piece of Martian crust -- the first such sample to make it to Earth.
Cannon and Mustard thought Black Beauty might help to clear up a longstanding enigma: the spectral signals from SNC meteorites never quite match the remotely sensed spectra of the Martian surface. "Most samples from Mars are somewhat similar to spacecraft measurements," Mustard said, "but annoyingly different."
So after acquiring a chip of Black Beauty from Agee, Cannon and Mustard used a variety of spectroscopic techniques to analyze it. The work included the use of a hyperspectral imaging system developed by Headwall Photonics, a Massachusetts-based company. The device enabled detailed spectral imaging of the entire sample.
"Other techniques give us measurements of a dime-sized spot," Cannon said. "What we wanted to do was get an average for the entire sample. That overall measurement was what ended up matching the orbital data."
The researchers say the spectral match helps put a face on the dark plains, suggesting that the regions are dominated by brecciated rocks similar to Black Beauty. Because the dark plains are dust-poor regions, they're thought to be representative of what hides beneath the red dust on much of the rest of the planet.
"This is showing that if you went to Mars and picked up a chunk of crust, you'd expect it to be heavily beat up, battered, broken apart and put back together," Cannon said.
Read more at Science Daily