Species: Latimeria chalumnae
Habitat: Between 100 and 300 metres deep off the coast of south-east Africa, around the Comoros Islands
We thought they died out with the dinosaurs, around 65 million years ago, until one popped up off the coast of South Africa in 1938. Their two unusual front fins show they are closely related to the first fish to clamber onto land around 400 million years ago. Now, coelacanths, dubbed "living fossils" since their unexpected reappearance, have sprung another surprise: they are serial monogamists.
Coelacanths are enormous, bottom-dwelling fish that lurk between 100 and 300 metres beneath the surface, on the rocky sides of volcanic islands in the Indian Ocean. The 1.5-metre-long giants are best known from sightings in the Comoros Islands, but have also been spotted off the eastern and south-eastern coasts of Africa. They spend their days in cavities inside submarine volcanic rocks, and only venture out at night to feed – mostly on squid, octopus, cuttlefish and other fish.
"We know hardly anything about their reproduction," says Kathrin Lampert of Ruhr University Bochum in Germany. What is known is that their gestation period is an impressive three years and they bear live young.
Now, for the first time, Lampert has DNA fingerprinted two dead pregnant females and their offspring. The fish were accidentally caught by fishing boats, one off Mozambique in 1991 and the other off Zanzibar in 2009, and each weighed around 90 kilograms.
"For both, it was very clear there was only one male involved," says Lampert. She and her colleagues were stunned by the discovery: they expected the females would mate with multiple partners to maximise the genetic diversity and therefore the survival prospects of their offspring. "Given the length of gestation, that's a big investment in offspring by the female," says Lampert.
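The single-sire inference can be illustrated with a toy version of the genotype logic. This is a hypothetical Python sketch with invented allele sizes, not Lampert's data; real microsatellite parentage analysis uses many loci and statistical likelihoods.

```python
# Toy single-sire check from microsatellite genotypes (invented data).

def consistent_with_one_sire(mother, clutch):
    """Check whether a clutch's genotypes fit a single father.

    mother: dict mapping locus -> set of the mother's two allele sizes
    clutch: list of such dicts, one per offspring
    Any allele an offspring carries that the mother lacks must be
    paternal; a single diploid father can contribute at most two
    distinct alleles per locus.
    """
    for locus, maternal in mother.items():
        paternal = set()
        for offspring in clutch:
            paternal |= offspring[locus] - maternal  # obligate paternal alleles
        if len(paternal) > 2:
            return False  # three or more paternal alleles imply two or more males
    return True

mother = {"L1": {150, 154}}
one_sire = [{"L1": {150, 158}}, {"L1": {154, 162}}, {"L1": {150, 162}}]
two_sires = one_sire + [{"L1": {154, 166}}]

print(consistent_with_one_sire(mother, one_sire))   # True
print(consistent_with_one_sire(mother, two_sires))  # False
```

Note the test is one-directional: exceeding two paternal alleles at any locus proves multiple sires, while passing it only shows the data are consistent with one.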
Not so many fish in the sea?
Only a few hundred coelacanths are thought to exist, so it could simply be that there are too few males available for multiple mating. But previous research has shown that they live in groups in caves, providing ample opportunities for females to play the field. "So it's unlikely they are restricted by mate choice," Lampert says.
The study shows that a single male sires an entire clutch, says Lampert – but it's unlikely that mates stay faithful to each other throughout their lives. Lampert says it's most likely that they are serial monogamists.
Females don't appear to care for their offspring once they are born. They may even cannibalise them given half a chance. Only one or two juveniles have ever been observed in the wild alongside adults. "Maybe they go to deeper depths to avoid being eaten by their parents," says Lampert.
The pregnant fish give one clue to the puzzle. The offspring – 23 found in one female and 26 in the other, each weighing about 500 grams – are large at birth, about a third the length of adults. This may make them more difficult to catch and eat, and help them to quickly escape to deeper waters. It would also explain why the female undergoes such a long gestation: growing the young to a large size inside her boosts their chances of escaping a hungry neighbour.
Read more at New Scientist
Sep 21, 2013
Skeleton of Ancient Prince Reveals Etruscan Life
The skeletonized body of an Etruscan prince, possibly a relative of Tarquinius Priscus, the legendary fifth king of Rome from 616 to 579 B.C., has been brought to light in an extraordinary find that promises to reveal new insights into one of the ancient world’s most fascinating cultures.
Found in Tarquinia, a hill town about 50 miles northwest of Rome, famous for its Etruscan art treasures, the 2,600-year-old intact burial site came complete with a full array of precious grave goods.
“It’s a unique discovery, as it is extremely rare to find an inviolate Etruscan tomb of an upper-class individual. It opens up huge study opportunities on the Etruscans,” Alessandro Mandolesi, of the University of Turin, told Discovery News. Mandolesi is leading the excavation in collaboration with the Archaeological Superintendency of Southern Etruria.
A fun-loving and eclectic people who, among other things, taught the French how to make wine and the Romans how to build roads, and who introduced the art of writing to Europe, the Etruscans began to flourish around 900 B.C. and dominated much of Italy for five centuries.
Known for their art, agriculture, fine metalworking and commerce, the Etruscans began to decline during the fifth century B.C., as the Romans grew in power. Between 300 and 100 B.C., they were gradually absorbed into the Roman Empire.
Since their puzzling, non-Indo-European language was virtually extinguished (they left no literature to document their society), the Etruscans have long been considered one of antiquity's great enigmas.
Indeed, much of what we know about them comes from their cemeteries. Only the richly decorated tombs they left behind have provided clues to fully reconstruct their history.
Blocked by a perfectly sealed stone slab, the rock-cut tomb in Tarquinia appeared promising even before it was opened.
Indeed, several objects, including jars, vases and even a grater, were found in the soil in front of the stone door, indicating that a funeral rite of an important person took place there.
As the heavy stone slab was removed, Mandolesi and his team were left breathless. In the small vaulted chamber, the complete skeleton of an individual was resting on a stone bed on the left. A spear lay along the body, while fibulae, or brooches, on the chest indicated that the individual, a man, was probably once dressed with a mantle.
At his feet stood a large bronze basin and a dish with food remains, while the stone table on the right might have contained the incinerated remains of another individual.
Decorated with a red strip, the upper part of the wall featured, along with several nails, a small hanging vase, which might have contained some ointment. A number of grave goods, which included large Greek Corinthian vases and precious ornaments, lay on the floor.
“That small vase has been hanging on the wall for 2,600 years. It’s amazing,” Lorenzo Benini, CEO of the company Kostelia, said.
Along with Pietro Del Grosso of the company Tecnozenith, Benini is the private investor who has largely contributed to the excavation.
Although intact, the tomb has suffered a small natural structural collapse, the effects of which are visible in some broken vases.
Mandolesi and his team believe the individual was a member of Tarquinia’s ruling family.
The underground chamber was found beside an imposing mound, the Queen's Tomb, which is almost identical to an equally impressive mound, the King's Tomb, 600 feet away.
About 130 feet in diameter, the Queen's Tomb is the largest among the more than 6,000 rock-cut tombs (200 of them painted) that make up the necropolis in Tarquinia. Mandolesi has been excavating it and its surrounding area for the past six years.
Both mounds date to the 7th century B.C., the Orientalizing period, so called due to the influence on the Etruscans from the Eastern Mediterranean.
According to Roman tradition, Demaratus, a Greek from Corinth, landed in Tarquinia as a refugee in the 7th century BC, bringing with him a team of painters and artisans who taught the local people new artistic techniques.
Demaratus then married an Etruscan noblewoman from Tarquinia, and their son, Lucumo, became the fifth king of Rome in 616 B.C., taking the name of Lucius Tarquinius Priscus.
The story emphasizes the importance of Tarquinia as one of the most powerful cities in the Etruscan league.
Indeed, the two imposing mounds would certainly have proclaimed the power of the princes of Tarquinia to anybody arriving from the sea.
Read more at Discovery News
Sep 20, 2013
Seismologists Puzzle Over Largest Deep Earthquake Ever Recorded
A magnitude 8.3 earthquake that struck deep beneath the Sea of Okhotsk on May 24, 2013, has left seismologists struggling to explain how it happened. At a depth of about 609 kilometers (378 miles), the intense pressure on the fault should inhibit the kind of rupture that took place.
"It's a mystery how these earthquakes happen. How can rock slide against rock so fast while squeezed by the pressure from 610 kilometers of overlying rock?" said Thorne Lay, professor of Earth and planetary sciences at the University of California, Santa Cruz.
Lay is coauthor of a paper, published in the September 20 issue of Science, analyzing the seismic waves from the Sea of Okhotsk earthquake. First author Lingling Ye, a graduate student working with Lay at UC Santa Cruz, led the seismic analysis, which revealed that this was the largest deep earthquake ever recorded, with a seismic moment 30 percent larger than that of the next largest, a 1994 earthquake 637 kilometers beneath Bolivia.
Deep earthquakes occur in the transition zone between the upper mantle and lower mantle, from 400 to 700 kilometers below the surface. They result from stress in a deep subducted slab where one plate of Earth's crust dives beneath another plate. Such deep earthquakes usually don't cause enough shaking on the surface to be hazardous, but scientifically they are of great interest.
The energy released by the Sea of Okhotsk earthquake produced vibrations recorded by several thousand seismic stations around the world. Ye, Lay, and their coauthors determined that it released three times as much energy as the 1994 Bolivia earthquake, comparable to a 35 megaton TNT explosion. The rupture area and rupture velocity were also much larger. The rupture extended about 180 kilometers, by far the longest rupture for any deep earthquake recorded, Lay said. It involved shear faulting with a fast rupture velocity of about 4 kilometers per second (about 9,000 miles per hour), more like a conventional earthquake near the surface than other deep earthquakes. The fault slipped as much as 10 meters, with average slip of about 2 meters.
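The quoted figures can be roughly cross-checked with the standard Gutenberg–Richter relation between moment magnitude and radiated energy, log10(E) = 1.5·Mw + 4.8 (E in joules). The constants below are textbook values, not taken from the Science paper, so this is a back-of-the-envelope sanity check rather than a reproduction of the analysis.

```python
# Rough cross-check of the article's figures using textbook constants.

def radiated_energy_joules(mw):
    # Gutenberg-Richter energy relation: log10(E) = 1.5*Mw + 4.8 (E in joules)
    return 10 ** (1.5 * mw + 4.8)

TNT_JOULES_PER_MEGATON = 4.184e15  # standard TNT equivalence

e_okhotsk = radiated_energy_joules(8.3)
megatons = e_okhotsk / TNT_JOULES_PER_MEGATON
print(f"Mw 8.3: ~{e_okhotsk:.1e} J, roughly {megatons:.0f} Mt of TNT")

# Rupture velocity: 4 km/s converted to miles per hour
v_mph = 4000 / 0.44704  # metres per second divided by m/s per mph
print(f"4 km/s is about {v_mph:.0f} mph")
```

Both numbers land in the same ballpark as the article's: the energy estimate comes out within a factor of about 1.2 of the 35-megaton comparison, and 4 km/s works out to just under 9,000 mph.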
"It looks very similar to a shallow event, whereas the Bolivia earthquake ruptured very slowly and appears to have involved a different type of faulting, with deformation rather than rapid breaking and slippage of the rock," Lay said.
The researchers attributed the dramatic differences between these two deep earthquakes to differences in the age and temperature of the subducted slab. The subducted Pacific plate beneath the Sea of Okhotsk (located between the Kamchatka Peninsula and the Russian mainland) is a lot colder than the subducted slab where the 1994 Bolivia earthquake occurred.
"In the Bolivia event, the warmer slab resulted in a more ductile process with more deformation of the rock," Lay said.
The Sea of Okhotsk earthquake may have involved re-rupture of a fault in the plate produced when the oceanic plate bent down into the Kuril-Kamchatka subduction zone as it began to sink. But the precise mechanism for initiating shear fracture under huge confining pressure remains unclear. The presence of fluid can lubricate the fault, but all of the fluids should have been squeezed out of the slab before it reached that depth.
Read more at Science Daily
Giant Prehistoric Elephant Slaughtered by Early Humans
Research by a University of Southampton archaeologist suggests that early humans, who lived thousands of years before Neanderthals, were able to work together in groups to hunt and slaughter animals as large as the prehistoric elephant.
Dr Francis Wenban-Smith discovered a site containing remains of an extinct straight-tusked elephant (Palaeoloxodon antiquus) in 2003, in an area of land at Ebbsfleet in Kent, during the construction of the High Speed 1 rail link from the Channel Tunnel to London.
Investigation of the area was carried out with independent heritage organisation Oxford Archaeology, with the support of HS1 Ltd.
Excavation revealed a deep sequence of deposits containing the elephant remains, along with numerous flint tools and a range of other species including wild aurochs, extinct forms of rhinoceros and lion, Barbary macaque, beaver, rabbit, various forms of vole and shrew, and a diverse assemblage of snails. These remains confirm that the deposits date to a warm period of climate around 420,000 years ago, the so-called Hoxnian interglacial, when the climate was probably slightly warmer than the present day.
Since the excavation, which took place in 2004, Francis has been carrying out a detailed analysis of evidence recovered from the site, including 80 undisturbed flint artefacts found scattered around the elephant carcass and used to butcher it. The prehistoric elephant was twice the size of today's African variety and up to four times the weight of a family car.
Dr Wenban-Smith comments: "Although there is no direct evidence of how this particular animal met its end, the discovery of flint tools close to the carcass confirm butchery for its meat, probably by a group of at least four individuals.
"Early hominins of this period would have depended on nutrition from large herbivores. The key evidence for elephant hunting is that, of the few prehistoric butchered elephant carcasses that have been found across Europe, they are almost all large males in their prime, a pattern that does not suggest natural death and scavenging. Although it seems incredible that they could have killed such an animal, it must have been possible with wooden spears. We know hominins of this period had these, and an elephant skeleton with a wooden spear through its ribs was found at the site of Lehringen in Germany in 1948."
These early humans suffered local extinction in Northern Europe during the great ice age known as the Anglian glaciation 450,000 years ago, but re-established themselves as the climate grew warmer again in the following Hoxnian interglacial.
An ability to hunt large mammals, and in particular elephants, as suggested by the Ebbsfleet find, would go some way to explaining how these people then managed to push northwards again into what is now Britain. The flint artefacts of these pioneer settlers are of a characteristic type known as Clactonian, mostly comprising simple razor-sharp flakes that would have been ideal for cutting meat, sometimes with notches on them that would have helped cut through the tougher animal hide.
The discovery of this previously undisturbed elephant grave site is unique in Britain, where only a handful of other elephant skeletons have been found, none of which has produced similar evidence of human exploitation.
Dr Wenban-Smith explains the Ebbsfleet area would have been very different from today: "Rich fossilised remains surrounding the elephant skeleton, including pollen, snails and a wide variety of vertebrates, provide a remarkable record of the climate and environment the early humans inhabited.
"Analysis of these deposits show they lived at a time of peak interglacial warmth, when the Ebbsfleet Valley was a lush, densely wooded tributary of the Thames, containing a quiet, almost stagnant swamp."
Read more at Science Daily
Aye-Aye Gives World the Highly Elongated Finger
My mother used to tell me I’m a unique snowflake, and also that this is my last warning to stop monkeying with the damn thermostat. But let’s face it, I’m not unique. You’re not either. We’re all born largely the same animal. And while we have these pretty sweet brains, even outside of our species we’re quite closely related to other primates — sharing, for example, 96 percent of our genetic material with chimpanzees.
But in the forests of Madagascar, aye-aye mothers are also telling their children that they’re unique snowflakes, and other than their kids not literally being snowflakes, they’re absolutely right. No other creature on this planet comes close to the extraordinary aye-aye. It has the bushy tail of a squirrel, the ears and teeth of a rat, and the extremely elongated, super-thin, swiveling middle finger of … well, just the aye-aye.
Indeed, scientists initially thought it was a rodent. We now recognize the aye-aye as a primate — specifically, a kind of lemur — and have granted the species not only its own genus, but its own entire taxonomic family. This might sound lonely, but consider that we humans are forced to share a family with gorillas, chimps, and orangutans, which are all kinda like that embarrassing cousin you have who doesn’t shower and who sometimes climbs the Empire State Building and yells at biplanes. So maybe the aye-aye is alright with being alone.
While the aye-aye is no King Kong-esque menace, Malagasy superstition has painted it as a Grim Reaper of sorts. Legend goes that if an aye-aye points at you with its elongated middle finger, you’re marked for impending death, and the only path to salvation is to slaughter the defenseless animal. But it’s this finger that stands as one of the more remarkable adaptations in the animal kingdom.
As the nocturnal aye-aye slinks through the forest canopy, it rapidly taps its elongated digit on hollow branches and bamboo stalks, with the idea of agitating hidden grubs and listening for their movement, according to conservation geneticist Ed Louis of the Madagascar Biodiversity and Biogeography Project at Omaha’s Henry Doorly Zoo. When it likes what it hears, it uses incredibly tough teeth to tear into the larva’s hiding place.
“They have these huge lower incisors that are fused together, that actually grow continuously throughout their lifetime, just like in rodents,” Louis said. “And they use these teeth to chew through bamboo or wood, things like that, and in captivity they’ve been known to chew through concrete cinder blocks.” So it’s a bit like Clint Eastwood in Escape From Alcatraz, only with more biting.
Once the hole in the bamboo is opened up, the aye-aye uses its middle digit to feel around for the grub, hooking it with a long nail. “Our fingers, we have these hinge joints, so we can go up and down, but that’s pretty much it,” said Louis. “But the [aye-aye’s] middle finger is actually a ball and socket, so it actually can sort of swivel like our shoulder,” granting it far more dexterity to reach its prey.
In and of itself this is an amazing technique, but it’s all the more fascinating to consider that chimps and orangutans arrived at a similar solution for gathering termites and ants — only they’re using tools in the form of sticks, jamming them into mounds to extract the insects. Aye-ayes just evolved with a tool in hand, perhaps finding the use of sticks undignified. And with such an asset they have assumed the niche that a grub-hunting woodpecker would fill elsewhere around the world.
But the aye-aye is not entirely alone in its brilliant adaptations. Thousands of miles away in Australia and New Guinea, the striped possum employs the same mode of hunting, called percussive foraging. It too has an elongated probing digit, though instead of using its middle finger the striped possum uses its ring finger (which the fashion-challenged creature would probably just call a fourth finger). It also has similar chompers for gnawing through wood. The two creatures arriving at almost identical adaptations is a great example of what’s known as convergent evolution: Where there’s a problem, unrelated species can independently develop the same fix.
For all its evolutionary triumphs, though, the aye-aye is now endangered due to, if you can believe it, human meddling. Beyond loss of habitat and getting killed just for pointing at the wrong guy, aye-ayes are often attacked by dogs, because they are unafraid to descend from the trees and trot through the human settlements in their massive territories, which are around 6 square kilometers, according to Louis. He and his team even tracked one animal traveling 25 kilometers in just four days.
Read more at Wired Science
Invasion of the High-Altitude Alien Algae! Or Not
Alien discovery stories in the tabloid press are a dime a dozen. Claims of flying saucers, strange lights in the sky and close encounters of the third kind are a staple for conspiracy theorists. But when a scientist goes on the record to say that he and his team have discovered basic alien lifeforms floating high in the atmosphere over the UK, suddenly there seems to be an air of legitimacy.
Unfortunately, despite all the PhDs and reputable host universities, a recent claim of airborne alien microbes fails on its first challenge — where’s the extraordinary evidence that proves these high-altitude samples came from outer space? Well, there isn’t any.
The online press is currently getting excited about this claim made by Milton Wainwright, a professor at the Department of Molecular Biology and Biotechnology at the University of Sheffield, and his team, who flew a high-altitude balloon to an altitude of 17 miles (about 27 kilometers) over Chester in northwest England during the Perseid meteor shower on July 31. The balloon was carrying a sample capture system that opened for a few minutes and grabbed any aerosols floating around in the stratosphere.
On returning to Earth, Wainwright’s team (including scientists from Buckingham University) analyzed what was stuck to the sampling apparatus. What they discovered was, according to Wainwright, “revolutionary.”
In a series of papers published in the Journal of Cosmology (yes, the Journal of Cosmology. Alarm bells ringing much?), details of these high-altitude “diatoms” are discussed.
Diatoms are basic forms of algae when found on Earth, but should these basic forms of biology be found hitching a ride on, say, a meteorite, it could signify that life exists beyond Earth and that the hypothetical mechanism of panspermia is real. Earlier this year, another group of researchers published their findings (of course, in the Journal of Cosmology) of diatoms hitching a ride inside a freshly fallen meteorite. But, as with the atmospheric diatoms described in this research, the claims about those meteoric diatoms lacked any skeptical scrutiny.
However, as Wainwright rightfully says, should diatoms be found to come from outer space, the discovery would “completely change our view of biology and evolution.” And the team of researchers think they have discovered something quite profound.
“Most people will assume that these biological particles must have just drifted up to the stratosphere from Earth,” said Wainwright. “But it is generally accepted that a particle of the size found cannot be lifted from Earth to heights of, for example, 27km. The only known exception is by a violent volcanic eruption, none of which occurred within three years of the sampling trip.
“In the absence of a mechanism by which large particles like these can be transported to the stratosphere we can only conclude that the biological entities originated from space.” (Emphasis added.)
So the logic is as follows: We can’t explain it, therefore… aliens.
Funnily enough, in the conclusions of their paper, the team seems to work through this logical knot, saying: “Of course the standard mode of rebuttal to a space origin for the fragment is to assert that Occam’s razor informs us that there must be a mechanism for lofting particles of this size from Earth to the stratosphere and that our findings are proof of the existence of such an unknown mechanism, the search for which must now begin.”
This sounds reasonable enough. Barring any contamination of the high-altitude sampling mechanism (which, the researchers claim, isn’t a possibility), these diatoms had to have come from somewhere, so perhaps this biological evidence traveled by some previously unknown mechanism from Earth’s biosphere into a region of the atmosphere that should be lifeless. Unfortunately, this is the only portion of the paper that shows an ounce of skepticism.
As there is no known mechanism that could carry these diatoms to such a high altitude (such as a volcanic eruption, for example), “the diatom fragment … must most plausibly have come from space.” In other words: we don’t know where they came from… therefore, you know… aliens!
Taking a lead from Wainwright’s own advice, Occam’s Razor urges us to look for the simplest explanation (as it’s usually the correct one). The simplest explanation is that life from Earth (as Earth is KNOWN to be abundant in life) somehow found a way into Earth’s stratosphere. The simplest explanation isn’t that life came from deep space — a place that is, according to our current state of knowledge, lifeless.
We know that microscopic lifeforms exist in Earth’s atmosphere at lower altitudes, and some forms of airborne fungi are thought to play a role in cloud formation. But at the stratospheric altitudes sampled by Wainwright’s team, the atmosphere isn’t thought to sustain any kind of life. The material collected by the researchers appears to be fragments of diatoms, so the idea that these diatom traces hitched a ride on the tiny particles of the Perseid meteor shower seems very attractive.
Unfortunately, it’s research like this that will always grab the headlines, despite the fact that it is published in the Journal of Cosmology, a publication with a questionable publishing record. Wainwright’s team may have found evidence of alien biology, but coming to such grand, extraordinary conclusions without supporting extraordinary evidence and repeated, verified experiments is an affront to the scientific method. This is, apparently, one single flight of a high-altitude balloon over one single location on Earth. Surely global, repeated tests are needed before arriving at the “alien conclusion”? And if there were any basis for the claim of a diatom alien invader, why didn’t a reputable journal pick it up?
Read more at Discovery News
Sep 19, 2013
New Species of Legless Lizard Found at LAX
A bustling airport would hardly seem the place to find a new species of reclusive animal, but a team of California biologists recently found a shy new species of legless lizard living at the end of a runway at Los Angeles International Airport.
What’s more, the same team discovered three additional new species of these distinctive, snake-like lizards that are also living in some inhospitable-sounding places for wildlife: at a vacant lot in downtown Bakersfield, among oil derricks in the lower San Joaquin Valley and on the margins of the Mojave desert.
All are described in the latest issue of Breviora, a publication of the Museum of Comparative Zoology at Harvard University.
“This shows that there is a lot of undocumented biodiversity within California,” Theodore Papenfuss, one of the scientists, was quoted as saying in a press release.
Papenfuss, an amphibian and reptile expert at Berkeley’s Museum of Vertebrate Zoology, made the discoveries with James Parham of California State University, Fullerton.
“These are animals that have existed in the San Joaquin Valley, separate from any other species, for millions of years, completely unknown,” Parham said.
Legless lizards look a lot like snakes, but they’re different reptiles. The lizards are distinguishable from their slithery relatives based on one or more of the following: eyelids, external ear openings, lack of broad belly scales and/or a very long tail. Snakes, conversely, have a long body and a short tail.
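The checklist above amounts to a simple rule of thumb, which can be encoded as a toy predicate (purely illustrative, not a real field key; under the article’s rule, any single lizard trait is enough):

```python
def is_legless_lizard(has_eyelids, has_ear_openings,
                      has_broad_belly_scales, tail_longer_than_body):
    """Heuristic from the article: any one of these traits marks a
    legless lizard rather than a snake."""
    return (has_eyelids or has_ear_openings
            or not has_broad_belly_scales or tail_longer_than_body)

# A typical snake: no eyelids, no ear openings, broad belly scales, short tail.
print(is_legless_lizard(False, False, True, False))  # → False
# A legless lizard like those described here: eyelids and a very long tail.
print(is_legless_lizard(True, False, True, True))    # → True
```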
Legless lizards, represented by more than 200 species worldwide, are well adapted to life in loose soil, Papenfuss said. Millions of years ago, lizards on five continents independently lost their limbs in order to burrow more quickly into sand or soil, wriggling like snakes. Some still have vestigial legs.
Though up to 8 inches in length, the creatures are seldom seen because they live mostly underground, eating insects and larvae, and may spend their lives within an area the size of a dining table. Most are discovered in moist areas when people overturn logs or rocks. It’s intriguing to imagine the LAX-based lizard’s life, with all of that airplane rumbling overhead!
The researchers are now working with the California Department of Fish and Wildlife (CDFW) to determine whether the lizards need protected status. Currently, the common legless lizard is listed by the state as a species of special concern.
“These species definitely warrant attention, but we need to do a lot more surveys in California before we can know whether they need higher listing,” Parham said.
Read more at Discovery News
Raiders of the Lost Lake
A lake larger than Lake Superior once brought life to an area of south-central Africa that now hosts only a salty desert. The long-gone Lake Makgadikgadi filled a huge expanse of northern Botswana near the wildlife paradise of the Okavango Delta, the world’s largest inland river delta.
In its prime, the 90,000 square kilometer (35,000 sq. mi.) lake would have boasted the second largest surface area of any inland body of water after the Caspian Sea.
Prehistoric Africans must have been perplexed when the lake drained away thousands of years ago. Rich fishing and hunting grounds shriveled as the water evaporated or escaped into the Zambezi River.
Nineteenth century European explorers noted the ancient shorelines etched on surrounding hillsides and realized that a massive lake once filled the area. But they didn’t know exactly how large the lake was or what had caused its demise.
More recent geological sleuthing revealed the rise and fall of Lake Makgadikgadi.
The mega-lake fluctuated in size from approximately 1.8 million years ago until possibly as recently as 8,500 years ago, when it finally disappeared, Joel Podgorski of the Institute of Geophysics in Zurich, Switzerland, told Discovery News. Podgorski’s study of the lake’s ancient boundaries will be published in the October issue of Geology.
Podgorski used magnetic imaging of the lost lake region to determine its ancient shorelines. His study also found a massive lost inland delta, or mega-fan, beneath the vanished lake. A biological paradise, similar to the modern Okavango Delta, may have first been drowned by the mega-lake, then desiccated when the water drained.
Populations of animals in the two small lakes remaining in the Makgadikgadi region may be remnants of the ancient lake’s ecosystem.
“Molecular dating of catfish and crocodiles points to paleo-Lake Makgadikgadi having existed as a connection between now separate populations of these species,” said Podgorski. “One could presume that the lake hosted a sizable wildlife population much as the nearby Okavango Delta does today,” he added.
Shifts in the Earth’s crust eventually caused the rivers that fed Lake Makgadikgadi to change course. The lower Zambezi River captured the water that once flowed into the mega-lake.
Earthquakes have altered the flow of rivers in the region as recently as 1952, when a magnitude 6.7 quake changed the water flow in part of the Okavango Delta.
Geologic activity also flattened out the ground beneath the lost lake, making it shallower and hastening its evaporation.
Read more at Discovery News
How Much Longer Can Earth Support Life?
Earth could continue to host life for at least another 1.75 billion years, as long as nuclear holocaust, an errant asteroid or some other disaster doesn't intervene, a new study calculates.
But even without such dramatic doomsday scenarios, astronomical forces will eventually render the planet uninhabitable. Somewhere between 1.75 billion and 3.25 billion years from now, Earth will travel out of the solar system's habitable zone and into the "hot zone," new research indicates.
These zones are defined by water. In the habitable zone, a planet (whether in this solar system or an alien one) is just the right distance from its star to have liquid water. Closer to the sun, in the "hot zone," the Earth's oceans would evaporate. Of course, conditions for complex life, including humans, would become untenable before the planet entered the hot zone.
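The luminosity-distance idea behind these zones can be sketched with a common first-order scaling (an illustrative simplification, not the model used in the study): boundary distances grow with the square root of the star's luminosity, and the roughly 0.95 and 1.37 AU solar-system bounds used below are one frequently cited conservative estimate.

```python
import math

def habitable_zone_au(luminosity_solar):
    """Rough inner/outer habitable-zone radii in AU for a star of the
    given luminosity (in solar units). Scales conservative solar-system
    bounds of ~0.95 AU and ~1.37 AU by sqrt(L), since received stellar
    flux falls off with the square of distance."""
    scale = math.sqrt(luminosity_solar)
    return 0.95 * scale, 1.37 * scale

# A Sun-like star (L = 1) recovers the solar-system bounds:
inner, outer = habitable_zone_au(1.0)
print(f"inner ≈ {inner:.2f} AU, outer ≈ {outer:.2f} AU")  # → inner ≈ 0.95 AU, outer ≈ 1.37 AU
```

As the Sun slowly brightens over billions of years, this scaling pushes both boundaries outward, which is why Earth eventually ends up inside the inner edge.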
But the researchers' main concern was the search for life on other planets, not predicting a timeline for the end of life on this one.
The evolution of complex life on Earth suggests the process requires a lot of time.
Simple cells first appeared on Earth nearly 4 billion years ago. "We had insects 400 million years ago, dinosaurs 300 million years ago and flowering plants 130 million years ago," lead researcher Andrew Rushby, of the University of East Anglia in the United Kingdom, said in a statement. "Anatomically modern humans have only been around for the last 200,000 years — so you can see it takes a really long time for intelligent life to develop."
Rushby and his colleagues developed a new tool to help evaluate the amount of time available for the evolution of life on other planets: a model that predicts the time a planet would spend in its habitable zone. In the research, published today (Sept. 18) in the journal Astrobiology, they applied the model to Earth and eight other planets currently in the habitable zone, including Mars.
They calculated that Earth's habitable-zone lifetime is as long as 7.79 billion years. (Earth is estimated to be about 4.5 billion years old.) Meanwhile, the other planets had habitable-zone lifetimes ranging from 1 billion years to 54.72 billion years.
Read more at Discovery News
From the Deepest Coma, New Brain Activity Found
When a patient's brain falls completely silent and electrical recording devices show a flat line, reflecting a lack of brain activity, doctors consider the patient to have reached the deepest stage of a coma. However, new findings suggest there can be a coma stage even deeper than this flat line -- and that brain activity can ramp up again from this state.
In the case of one patient in a drug-induced coma, and in subsequent experiments on cats, the researchers found that after deepening the coma by administering a higher dose of drugs, the silent brain started showing minimal but widespread neural activity across the brain, according to the study published today (Sept. 18) in the journal PLOS ONE.
The findings were based on measures of the brain's electrical activity, detected by electroencephalography (EEG), which shows various waveforms. In comatose patients, depending on the stage of their coma, the waveforms are altered. As the coma deepens, the EEG device will eventually show a flat line instead of a wave -- this stage is considered to be the turning point between a living brain and a deceased brain.
"Flat line was the deepest known form of coma," said study researcher Florin Amzica, neurophysiologist at Université de Montréal.
The new study shows "there's a deeper form of coma that goes beyond the flat line, and during this state of very deep coma, cortical activity revives," Amzica said. He noted the findings apply to patients in a medically induced coma with healthy brains that are receiving blood and oxygen. The conclusions may not extend to cases of comatose patients who have suffered major brain damage, he said.
The newly discovered coma state is characterized by electrical waves called Nu-complexes that are unlike other waveforms generated by the brain during known coma states, sleep or wakefulness. These waves originate in a deep brain region called the hippocampus, and then spread across the cortex (the brain’s outermost layer), according to the study.
The new findings came from a serendipitous observation in a patient who was in a deep coma and receiving powerful epilepsy medication required to control his convulsions. EEG recordings of his brain's electrical activity showed peculiar and unexplainable waveforms, the researchers said.
Using anesthetic drugs, the researchers recreated the patient's state in cats. When the cats reached the flat-line coma stage, the researchers increased the anesthetic's dose, and observed brain activity re-emerging in cats.
It is still unclear how the activity of neurons in the hippocampus can spread throughout the brain, the researchers said. One possible scenario is that silencing the brain even more may ease the control that other brain areas normally maintain over neurons in the hippocampus.
"The more the brain is unconscious, the less this activity is disturbed," Amzica said. The activity in the hippocampus then has more potential to become strong enough to spread into other areas, he said.
The findings may have therapeutic potential, the researchers said. Sometimes a coma is induced in patients who are at high risk of brain injury from incidents such as physical trauma, drug overdose or life-threatening seizures. By reducing the activity in the brain and slowing its metabolism, an induced coma can help protect the neural tissue.
Read more at Discovery News
In the case of one patient in a drug-induced coma, and in subsequent experiments on cats, the researchers found that after deepening the coma by administering a higher dose of drugs, the silent brain started showing minimum but widespread neural activity across the brain, according to the study published today (Sept. 18) in the journal PLOS ONE.
The findings were based on measures of the brain's electrical activity, detected by electroencephalography (EEG), which shows various waveforms. In comatose patients, depending on the stage of their coma, the waveforms are altered. As the coma deepens, the EEG device will eventually show a flat line instead of a wave -- this stage is considered to be the turning point between a living brain and a deceased brain.
"Flat line was the deepest known form of coma," said study researcher Florin Amzica, neurophysiologist at Université de Montréal.
The new study shows "there's a deeper form of coma that goes beyond the flat line, and during this state of very deep coma, cortical activity revives," Amzica said. He noted the findings apply to patients in a medically induced coma with healthy brains that are receiving blood and oxygen. The conclusions may not extend to cases of comatose patients who have suffered major brain damage, he said.
The newly discovered coma state is characterized by electrical waves called Nu-complexes that are unlike other waveforms generated by the brain during known coma states, sleep or wakefulness. These waves originate in a deep brain region called the hippocampus, and then spread across the cortex (the brain’s outermost layer), according to the study.
The new findings came from a serendipitous observation in a patient who was in a deep coma and receiving powerful epilepsy medication required to control his convulsions. EEG recordings of his brain's electrical activity showed peculiar and unexplainable waveforms, the researchers said.
Using anesthetic drugs, the researchers recreated the patient's state in cats. When the cats reached the flat-line coma stage, the researchers increased the anesthetic dose and observed brain activity re-emerging in the cats.
It is still unclear how the activity of neurons in the hippocampus can spread throughout the brain, the researchers said. One possible scenario is that silencing the brain even further eases the control that other brain areas normally exert over neurons in the hippocampus.
"The more the brain is unconscious, the less this activity is disturbed," Amzica said. The activity in the hippocampus then has more potential to become strong enough to spread into other areas, he said.
The findings may have therapeutic potential, the researchers said. Sometimes a coma is induced in patients who are at high risk of brain injury from incidents such as physical trauma, drug overdose or life-threatening seizures. By reducing the activity in the brain and slowing its metabolism, an induced coma can help protect the neural tissue.
Read more at Discovery News
Sep 18, 2013
Southern Ocean Sampling Reveals Travels of Marine Microbes
By collecting water samples up to six kilometres below the surface of the Southern Ocean, UNSW researchers have shown for the first time the impact of ocean currents on the distribution and abundance of marine micro-organisms.
The sampling was the deepest ever undertaken from the Australian icebreaker, RSV Aurora Australis.
Microbes are so tiny they are invisible to the naked eye, but they are vital to sustaining life on Earth, producing most of the oxygen we breathe, soaking up carbon dioxide from the atmosphere and recycling nutrients.
"Microbes form the bulk of the biomass in oceans. All the fish, dolphins, whales, sponges and other creatures account for less than 5 per cent of the biomass," says Professor Rick Cavicchioli, of the UNSW School of Biotechnology and Biomolecular Sciences, and leader of the team.
"Microbes perform roles that nothing else can carry out. And if one critical group of microbes was destroyed, life on the planet would cease to exist."
The influence of environmental conditions on the make-up of microbial communities in different regions of the ocean has been studied, as has the role of physical barriers in preventing their dispersal.
"Collecting samples in the Southern Ocean was an enormous challenge. But it has meant we were able to carry out the first study showing how physical transport in the ocean on currents can also shape microbial communities," says Professor Cavicchioli.
The results are published in the journal Nature Communications.
Twenty-five samples were collected across a 3000-kilometre stretch of ocean between Antarctica and the southern tip of Western Australia. Sampling depths were determined by temperature, salinity and dissolved-oxygen measurements, to ensure microbes were collected from all the distinct water masses of the Southern Ocean.
These water masses include the circumpolar deep water, which flows toward the South Pole from the Indian, Pacific and Atlantic oceans; the surface water near the Antarctic coastline; and the cold, dense Antarctic bottom water, which flows north, away from the pole, at depths of more than 4 kilometres.
Genetic sequencing of the microbial DNA in each sample was carried out to characterise the microbial communities in the different water masses. The research shows that communities connected by ocean currents are more similar to each other.
"So a microbial community could be very different to one only a few hundred metres away, but closely related to one that is thousands of kilometres away because they are connected by a current," says Professor Cavicchioli.
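The kind of comparison the study describes can be sketched with a toy calculation in Python. The abundance profiles below are invented purely for illustration (the study's own analysis used real sequencing data); the sketch uses Bray-Curtis similarity, a standard way to compare two community abundance vectors:

```python
# Toy sketch of community comparison using Bray-Curtis similarity.
# The abundance vectors below are invented for illustration only.
def bray_curtis_similarity(a, b):
    """Return 1 minus the Bray-Curtis dissimilarity of two abundance vectors."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 2 * shared / (sum(a) + sum(b))

surface = [40, 30, 20, 10, 0]            # a surface-water community
distant_connected = [38, 28, 22, 12, 0]  # far away, but on the same current
nearby_deep = [5, 0, 10, 30, 55]         # a few hundred metres below

print(bray_curtis_similarity(surface, distant_connected))  # high (~0.96)
print(bray_curtis_similarity(surface, nearby_deep))        # low (~0.25)
```

In this invented example, the current-connected community thousands of kilometres away scores far more similar to the surface community than the disconnected one a few hundred metres below it, mirroring the pattern the researchers report.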
"Researchers need to take this into account when they are studying these important micro-organisms."
Read more at Science Daily
A Slave's Life in Ancient Pompeii
Her name was Amica, and her name and footprint are embedded in a terra cotta tile belonging to an ancient Roman temple. The signed tile is a rare find because Amica was a Roman slave. For the most part, the slaves of the well-preserved city of Pompeii remain largely "invisible" in history, according to the University of Delaware's Lauren Hackworth Petersen.
Petersen, an associate professor of art history at UD, is exploring new approaches, drawing on literature, law, art and other material evidence, to bring the lives of Pompeii's slaves out of the shadows. The research is part of a forthcoming book she is co-authoring with Sandra Joshel, professor of history at the University of Washington.
During the inaugural lecture of the UD Department of History's Graduate Student Lecture Series on Sept. 11, Petersen spoke of countless hours spent in Pompeii walking on the stone streets and narrow sidewalks "in the scorching sun of summer, in the rain and howling wind of winter," imagining where the city's slaves may have traveled as they carried out their daily work.
Who were these slaves? Roman slaveholders got them from many places. Some were Greeks, some were Africans, some were bred in the country specifically for the role, according to Petersen.
Mt. Vesuvius buried Pompeii in 79 A.D. in a searing avalanche of hot air, volcanic ash and rock. The city's population has been estimated at 20,000 people near the time of its destruction. Although no one knows exactly how many slaves were in the city, the typical Roman household may have had five to seven slaves, Petersen said, with larger houses such as the impressive House of the Menander, nearly the size of a city block, having many more.
Using a map of Pompeii showing detailed plots of the ancient streets and structures, Petersen pointed out the main doors to houses, which would have been the focus of doorwatchers inside, and the side doors and other "spaces of backdoor culture" through which a household's slaves most likely passed.
Slaves might snatch precious time out of their owner's (and various slave supervisors') sight fetching water at a public fountain, slipping into a tavern, bakery or cookshop, resting on a masonry bench in the shade of a house a few streets away, lingering in a garden on the south side of the city. In doing so, "a slave could become more anonymous and invisible on highly frequented streets," Petersen said.
Those narrow, two-way stone streets would have been noisy and odoriferous, filled with donkey carts, human sewage and animal feces, with slaves carrying the wealthy elite above the mob on litters.
Surprisingly, Petersen said, slaves were not immediately identifiable by their dress. The simple tunic was the clothing of choice worn by slaves and their owners alike. Only the toga was reserved for Roman citizens; however, many did not wear it, Petersen said, because the long length of material was cumbersome and difficult to keep clean.
Urine, used as a cleansing agent due to its high ammonia content, was collected in jars and taken to the fulleries where clothing was laundered. Slaves working in the fulleries would stand in small tubs filled with urine, water and dirty clothes stomping on them to clean the cloth.
Where slaves are more visible in Roman history is in literature and the law, Petersen said, because slaves were viewed as property, and if they were damaged by an erratic donkey cart or a falling pot flung from an upstairs window, for example, financial retribution would need to be made by the perpetrator.
Although some slaves escaped, extensive means of recovering fugitives led to the recapture of many. Petersen said the gruesome remains of a slave shackled in irons, unable to flee Mt. Vesuvius' eruption, were found in a slave prison when the city was excavated centuries later.
Petersen calls the reconstructive work at Pompeii a starting point for thinking of places in context.
Read more at Science Daily
Termites Create Their Own Antibiotics
Termites cause $40 billion in damage every year, worldwide, and researchers say the insects have developed an ingenious defense against pesticide: They make antibacterial nests out of their own poo.
Termites have evolved to use their feces as a source of natural antibiotics, according to a report in the latest Proceedings of the Royal Society B. By integrating their poo into building materials, termite nests prevent the spread of disease and counter certain insecticides.
An average termite is just 3/8 of an inch long, and yet these tiny subterranean insects have foiled humans for centuries.
“Killing a single termite is not a problem,” lead author Thomas Chouvenc told Discovery News. “Killing a whole colony is a challenge.”
“With the Formosan subterranean termites, the nest can be spread in the ground over 150 meters (492 feet) through a complex system of tunnels,” added Chouvenc, a researcher at the University of Florida. “They are therefore difficult to detect, and usually people notice them in their house after extensive damage becomes visible.”
There are about 3,000 described species of termites, but only 80 are considered structural pests. The Formosan termite is now under countless homes in subtropical and temperate areas, he said. The Eastern subterranean termite, native to the United States, is also prevalent.
Chouvenc and his team collected five Formosan termite colonies in Broward County, Fla. The researchers analyzed the nests, including performing tests to determine the antimicrobial activity.
The scientists determined that the poo-containing nest material promoted the growth of Streptomyces, a genus of beneficial bacteria that, in turn, prevented infection by other microbes.
This mode of defense adds to the termites’ already powerful disease-resistant arsenal. It’s a three-part punch that helps them win battles with homeowners.
First, termites possess an innate immunity, due to their biochemistry, which wards off bodily intruders, such as bacteria and pesticides.
“Second,” Chouvenc said, “termites have what is now accepted as ‘social immunity,’ as they can increase their disease resistance as a group with the help of prophylactic behaviors (such as grooming, cadaver removal and cannibalism).”
This third, most recently discovered, defense likely evolved because “subterranean termites have survived in the soil in constant contact with a variety of pathogens.” They also produce a lot of waste that accumulates in their confined environment. It’s a win-win for them to recycle the feces into a building material.
Humans aren’t so lucky, in terms of this form of recycling. Due to the termites’ diet, their fecal material consists mostly of partially digested wood, a resource that is poor in nitrogen and therefore limits the growth of many organisms.
“The big difference here is that, while we have beneficial bacteria inside ourselves, termites were able to partially export it outside to maintain a clean environment,” Chouvenc said.
Rebeca Rosengaus also studies termites and is an associate professor in the Department of Marine and Environmental Sciences at Northeastern University.
Read more at Discovery News
Lens Changes Focus Like a Human Eye
The human eye is an ideal lens. It can easily shift focus between several objects in a given scene, even if those objects are located at different distances. Achieving a similar ability with a camera may require the photographer to change lenses.
Ohio State University engineers took a crack at giving a camera lens some of the versatility of the human eye. They made a fluid-filled lens that can change its shape and focus, as well as alter the direction in which it focuses. The work was described in the Technical Digest of the 25th IEEE International Conference on Micro Electro Mechanical Systems. The technology could improve the capabilities of digital phone cameras and make cameras more reliable overall by eliminating the need for certain moving parts.
The Ohio State University lens is made from a flexible polymer. The design is like an insect’s compound eye, with a single large lens made up of several small dome-shaped pockets, each filled with fluid. Tiny channels supply the fluid to each of the pockets.
By pumping fluid in and out of the pockets, the engineers were able to alter the lens’ shape and focus. The point where the image is focused can also be moved off-center. In a lens made of glass or plastic the only way to change where the image is centered is to point the lens in a different direction.
This method of focusing is a lot like what human eyes do. In humans, the muscles in the eye squeeze the lens or stretch it a bit to change the focal point of the image. When you look at something far off, for instance, the lens in your eye becomes slightly flatter.
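The relationship between lens shape and focus can be illustrated with the thin-lens lensmaker's equation, a simplified model (the refractive index and curvature radii below are arbitrary, not taken from the study): flattening a lens, i.e. increasing its radii of curvature, lengthens its focal length, which is what brings distant objects into focus.

```python
# Thin-lens lensmaker's equation (simplified model; numbers are arbitrary):
#   1/f = (n - 1) * (1/R1 - 1/R2)
# for a lens of refractive index n with surface curvature radii R1 and R2
# (R2 is negative for a biconvex lens).
def focal_length(n, r1_mm, r2_mm):
    return 1.0 / ((n - 1) * (1.0 / r1_mm - 1.0 / r2_mm))

n = 1.5  # arbitrary refractive index
round_lens = focal_length(n, 10, -10)  # strongly curved: short focal length
flat_lens = focal_length(n, 40, -40)   # flatter: longer focal length

print(round_lens)  # about 10 mm
print(flat_lens)   # about 40 mm
```

The fluid-filled design exploits exactly this dependence: pumping fluid in or out changes the curvature, and with it the focal length, with no lens swap required.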
Another advantage of the design is a wide angle of view. This is where the designers took a cue from insects' compound eyes. The reason flies can see behind them is that their eyes are made of thousands of tiny facets, each pointed in a different direction. The downside (for the fly) is that each of those tiny facets can't focus very well. The artificial lens solves that problem by adjusting the fluid-filled lenses.
Read more at Discovery News
Sep 17, 2013
Mt. Zion Dig Reveals Possible Second Temple Period Priestly Mansion
In excavating sites in a long-inhabited urban area like Jerusalem, archaeologists are accustomed to noting complexity in their finds -- how various occupying civilizations layer over one another during the site's continuous use over millennia. But when an area has also been abandoned for intermittent periods, paradoxically there may be even richer finds uncovered, as some layers have been buried and remain undisturbed by development.
Such appears to be the case at an archaeological dig on Jerusalem's Mount Zion, conducted by the University of North Carolina at Charlotte, where the 2013 excavations have revealed the well-preserved lower levels of what the archaeological team believes is an Early Roman period mansion (first century CE), possibly belonging to a member of the Jewish ruling priestly caste.
If the mansion does prove to be an elite priestly residence, the dig team hopes the relatively undisturbed nature of the buried ruin may yield significant domestic details concerning the rulers of Jerusalem at the time of Jesus.
Particularly important in the season's discoveries were a buried vaulted chamber that has proven to be an unusual finished bathroom (with bathtub) adjacent to a large below-ground ritual cleansing pool (mikveh) -- only the fourth bathroom to be found in Israel from the Second Temple period, with two of the others found in palaces of Herod the Great at Jericho and Masada.
Shimon Gibson, the British-born archaeologist co-directing the UNC Charlotte excavation, notes that the addition of the bathroom to the mikveh is a clear sign of the wealth and status of the resident.
"The bathroom is very important because hitherto, except for Jerusalem, it is usually found within palace complexes, associated with the rulers of the country," Gibson said. "We have examples of bathrooms of this kind mainly in palatial buildings."
The other example of a contemporary mikveh with an attached bathroom is at a site excavated in Jerusalem in the nearby Jewish Quarter. "A bathroom that is almost a copy of ours was found in an excavation of a palatial mansion," noted Gibson. "It is only a stone's throw away and I wouldn't hesitate to say that the people who made that bathroom probably were the same ones who made this one. It's almost identical, not only in the way it's made, but also in the finishing touches, like the edge of the bath itself."
"The building in the Jewish Quarter is similar in characteristics to our own with an inscription of a priestly family," Gibson added. "The working theory is that we're dealing also with a priestly family."
Gibson notes that there are other details about the site that suggest that its first century residents may have been members of the ruling elite. "The building that we are excavating is in the shadow -- immediately to the southeast -- of the very, very large palace of Herod the Great, his compound and the later seat of the Roman governors (praetorium)."
The location is a strong indication of a high-status resident. "Whoever lived in this house would have been a neighbor and would have been able to pop into the palace," he speculated.
While also cautious about reaching premature conclusions, dig co-director James Tabor, a UNC Charlotte scholar of early Christian history, believes there might be significant historical information uncovered, should the building turn out to be a priestly residence.
"If this turns out to be the priestly residence of a wealthy first century Jewish family, it immediately connects not just to the elite of Jerusalem -- the aristocrats, the rich and famous of that day -- but to Jesus himself," Tabor said. "These are the families who had Jesus arrested and crucified, so for us to know more about them and their domestic life -- and the level of wealth that they enjoyed -- would really fill in for us some key history."
Though the artifacts found this season are still being evaluated, one set of items in particular stand out as highly unusual: a large number of murex shells, the largest number ever found in the ruins of first-century Jerusalem. Species of murex (a genus of Mediterranean sea snail) were highly valued in Roman times because of a rich purple dye that could be extracted from the living creature.
"This color was highly desired," Gibson said. "The dye industry seems to be something that was supervised by the priestly class for the priestly vestments and for other aspects of clothing which were vital for those who wished to officiate in the capital precincts."
Why anyone in Jerusalem would be in possession of such "a very large quantity" of murex shells, however, remains a mystery to the excavation team, since the shells are not involved in the actual dye making process. Gibson hypothesizes that the shells may have been used to identify different grades of dye, since the quality of the product can vary from species to species. Some species are used to make a turquoise blue dye.
"It is significant that these are household activities which may have been undertaken by the priests," Gibson said. "If so, it tells us a lot more about the priests than we knew before. We know from the writings of Josephus Flavius and later rabbinical texts about their activities in the area of the Jewish temple, but there is hardly any information about their priestly activities outside the holy precinct. This is new information, and that is quite exciting. We might find in future seasons further aspects of industries which were supervised by these priestly families."
The domestic details of the first-century Jewish ruling class may yield insights into New Testament history, Tabor notes. "Jesus, in fact, criticizes the wealth of this class," Tabor said. "He talks about their clothing and their long robes and their finery, and, in a sense, pokes fun at it. So for us to get closer to understanding that -- to supplement the text -- it could be really fascinating."
Gibson also notes that historical legends from several centuries later point as well to the possibility that the building is a priestly residence. "Byzantine tradition places in our general area the mansion of the high priest Caiaphas or perhaps Annas, who was his father-in-law," Gibson said. "In those days you had extended families who would have been using the same building complex, which might have had up to 20 rooms and several different floors."
Further discoveries this season suggest still other details of history from first century Jerusalem. At the bottom of the residence's large, 30-foot deep cistern, the excavators found cooking pots and the remains of an oven. While Gibson stresses that it is again too early to draw conclusions about these items, he and the other researchers are considering these items as a possible indication that the emptied cistern was used as a refuge by Jewish residents hiding from Roman soldiers during the siege of 70 CE.
"When we started clearing it we found a lot of debris inside, which included substantial numbers of animal bones and then right at the bottom we came across a number of vessels, which seemed to be sitting on the floor -- cooking pots and bits of an oven as well," Gibson said. "We still need to look at this material very carefully and be absolutely certain of our conclusions, but it might be that these are the remnants of a kitchen in use by Jews hiding from the Romans -- their last resort was to go into these cisterns. It was a common practice, but this conclusion is theoretical. It makes for a very good story and it does look that way, but we've got to be certain."
Gibson notes that the Roman-Jewish historian Titus Flavius Josephus talks about such a scene in his description of the siege:
One John, a leader of the rebels, along with his brother Simon, who were found starved to death in the cisterns and water systems that ran under the city. Over 2000 bodies found in the various underground chambers, most dead from starvation. (Josephus, War 6:429-433)
Gibson credits the rich amount of detail and archaeological information present at the first century level of the dig with the accident of the site's location in Jerusalem. Ruins in major urban areas are rarely preserved with parts of the structure buried intact because subsequent residents tend to cannibalize buildings for materials for their own structures. However, when the Jerusalem of Jesus's era was destroyed by the occupying Romans in 70 CE, it was deserted for 65 years, until the Roman Emperor Hadrian re-built a city (Aelia Capitolina) on the ruins in 135 CE. At that point however,the new development was on the other side of the present-day city and Mount Zion was left unoccupied.
"The ruined field of first-century houses in our area remained there intact up until the beginning of the Byzantine period (early 4th Century)," Gibson said. "When the Byzantine inhabitants built there, they leveled things off a bit but they used the same plan of the older houses, building their walls on top of the older walls."
Subsequently, the sixth century Byzantine Emperor Justinian contributed another layer of preservation when he completed the construction of a massive new cathedral, the Nea Ekklesia of the Theotokos, just to the north-east of the site on Mt. Zion. The construction involved the excavation of enormous underground reservoirs and the excavation fill was dumped downhill, burying the more recent Byzantine constructions.
"The area got submerged, " Gibson said. "The early Byzantine reconstruction of these two-story Early Roman houses then got buried under rubble and soil fills. Then they established buildings above it. That's why we found an unusually well-preserved set of stratigraphic levels."
In addition to straight-forward archaeological research, the excavation is being used as a field school for the instruction of UNC Charlotte students in archaeology, especially since the site is remarkable in the way it exhibits the complexity of the urban history of Jerusalem. In addition to Roman-Jewish and Byzantine layers, there are also strata present reflecting a variety of the many Islamic cultures that have ruled the city between the Umayyad and Ottoman periods (seventh to twentieth centuries).
"One of the purposes of this dig is an educational one," Gibson said. "One of the ways it can be used is to try to understand the different cultures that had possession of Jerusalem at different points in time. The Islamic part of this is not fully understood, at least not in terms of the domestic picture.
Read more at Science Daily
Such appears to be the case at an archaeological dig on Jerusalem's Mount Zion, conducted by the University of North Carolina at Charlotte, where the 2013 excavations have revealed the well-preserved lower levels of what the archaeological team believes is an Early Roman period mansion (first century CE), possibly belonging to a member of the Jewish ruling priestly caste.
If the mansion does prove to be an elite priestly residence, the dig team hopes the relatively undisturbed nature of the buried ruin may yield significant domestic details concerning the rulers of Jerusalem at the time of Jesus.
Particularly important among the season's discoveries was a buried vaulted chamber that has proven to be an unusual finished bathroom (with bathtub) adjacent to a large below-ground ritual cleansing pool (mikveh) -- only the fourth bathroom from the Second Temple period to be found in Israel; two of the others were found in palaces of Herod the Great at Jericho and Masada.
Shimon Gibson, the British-born archaeologist co-directing the UNC Charlotte excavation, notes that the addition of the bathroom to the mikveh is a clear sign of the wealth and status of the resident.
"The bathroom is very important because hitherto, except for Jerusalem, it is usually found within palace complexes, associated with the rulers of the country," Gibson said. "We have examples of bathrooms of this kind mainly in palatial buildings."
The other example of a contemporary mikveh with an attached bathroom is at a site excavated in Jerusalem in the nearby Jewish Quarter. "A bathroom that is almost a copy of ours was found in an excavation of a palatial mansion," noted Gibson. "It is only a stone's throw away and I wouldn't hesitate to say that the people who made that bathroom probably were the same ones who made this one. It's almost identical, not only in the way it's made, but also in the finishing touches, like the edge of the bath itself."
"The building in the Jewish Quarter is similar in characteristics to our own with an inscription of a priestly family," Gibson added. "The working theory is that we're dealing also with a priestly family."
Gibson notes that there are other details about the site that suggest that its first century residents may have been members of the ruling elite. "The building that we are excavating is in the shadow -- immediately to the southeast -- of the very, very large palace of Herod the Great, his compound and the later seat of the Roman governors (praetorium)."
The location is a strong indication of a high-status resident. "Whoever lived in this house would have been a neighbor and would have been able to pop into the palace," he speculated.
While also cautious about reaching premature conclusions, dig co-director James Tabor, a UNC Charlotte scholar of early Christian history, believes there might be significant historical information uncovered, should the building turn out to be a priestly residence.
"If this turns out to be the priestly residence of a wealthy first century Jewish family, it immediately connects not just to the elite of Jerusalem -- the aristocrats, the rich and famous of that day -- but to Jesus himself," Tabor said. "These are the families who had Jesus arrested and crucified, so for us to know more about them and their domestic life -- and the level of wealth that they enjoyed -- would really fill in for us some key history."
Though the artifacts found this season are still being evaluated, one set of items in particular stands out as highly unusual: a large number of murex shells, the largest number ever found in the ruins of first-century Jerusalem. Species of murex (a genus of Mediterranean sea snail) were highly valued in Roman times because of a rich purple dye that could be extracted from the living creature.
"This color was highly desired," Gibson said. "The dye industry seems to be something that was supervised by the priestly class for the priestly vestments and for other aspects of clothing which were vital for those who wished to officiate in the capital precincts."
Why anyone in Jerusalem would be in possession of such "a very large quantity" of murex shells, however, remains a mystery to the excavation team, since the shells are not involved in the actual dye making process. Gibson hypothesizes that the shells may have been used to identify different grades of dye, since the quality of the product can vary from species to species. Some species are used to make a turquoise blue dye.
"It is significant that these are household activities which may have been undertaken by the priests," Gibson said. "If so, it tells us a lot more about the priests than we knew before. We know from the writings of Josephus Flavius and later rabbinical texts about their activities in the area of the Jewish temple, but there is hardly any information about their priestly activities outside the holy precinct. This is new information, and that is quite exciting. We might find in future seasons further aspects of industries which were supervised by these priestly families."
The domestic details of the first-century Jewish ruling class may yield insights into New Testament history, Tabor notes. "Jesus, in fact, criticizes the wealth of this class," Tabor said. "He talks about their clothing and their long robes and their finery, and, in a sense, pokes fun at it. So for us to get closer to understanding that -- to supplement the text -- it could be really fascinating."
Gibson also notes that historical legends from several centuries later point as well to the possibility that the building is a priestly residence. "Byzantine tradition places in our general area the mansion of the high priest Caiaphas or perhaps Annas, who was his father-in-law," Gibson said. "In those days you had extended families who would have been using the same building complex, which might have had up to 20 rooms and several different floors."
Further discoveries this season suggest still other details of history from first century Jerusalem. At the bottom of the residence's large, 30-foot deep cistern, the excavators found cooking pots and the remains of an oven. While Gibson stresses that it is again too early to draw conclusions about these items, he and the other researchers are considering these items as a possible indication that the emptied cistern was used as a refuge by Jewish residents hiding from Roman soldiers during the siege of 70 CE.
"When we started clearing it we found a lot of debris inside, which included substantial numbers of animal bones and then right at the bottom we came across a number of vessels, which seemed to be sitting on the floor -- cooking pots and bits of an oven as well," Gibson said. "We still need to look at this material very carefully and be absolutely certain of our conclusions, but it might be that these are the remnants of a kitchen in use by Jews hiding from the Romans -- their last resort was to go into these cisterns. It was a common practice, but this conclusion is theoretical. It makes for a very good story and it does look that way, but we've got to be certain."
Gibson notes that the Roman-Jewish historian Titus Flavius Josephus talks about such a scene in his description of the siege:
One John, a leader of the rebels, along with his brother Simon, was found starved to death in the cisterns and water systems that ran under the city. Over 2,000 bodies were found in the various underground chambers, most dead from starvation. (Josephus, War 6:429-433)
Gibson attributes the rich amount of detail and archaeological information present at the first-century level of the dig to the accident of the site's location in Jerusalem. Ruins in major urban areas are rarely preserved with parts of the structure buried intact because subsequent residents tend to cannibalize buildings for materials for their own structures. However, when the Jerusalem of Jesus's era was destroyed by the occupying Romans in 70 CE, it was deserted for 65 years, until the Roman Emperor Hadrian re-built a city (Aelia Capitolina) on the ruins in 135 CE. At that point, however, the new development was on the other side of the present-day city and Mount Zion was left unoccupied.
"The ruined field of first-century houses in our area remained there intact up until the beginning of the Byzantine period (early 4th Century)," Gibson said. "When the Byzantine inhabitants built there, they leveled things off a bit but they used the same plan of the older houses, building their walls on top of the older walls."
Subsequently, the sixth century Byzantine Emperor Justinian contributed another layer of preservation when he completed the construction of a massive new cathedral, the Nea Ekklesia of the Theotokos, just to the north-east of the site on Mt. Zion. The construction involved the excavation of enormous underground reservoirs and the excavation fill was dumped downhill, burying the more recent Byzantine constructions.
"The area got submerged," Gibson said. "The early Byzantine reconstruction of these two-story Early Roman houses then got buried under rubble and soil fills. Then they established buildings above it. That's why we found an unusually well-preserved set of stratigraphic levels."
In addition to straightforward archaeological research, the excavation is being used as a field school for the instruction of UNC Charlotte students in archaeology, especially since the site is remarkable in the way it exhibits the complexity of the urban history of Jerusalem. In addition to Roman-Jewish and Byzantine layers, there are also strata present reflecting a variety of the many Islamic cultures that have ruled the city between the Umayyad and Ottoman periods (seventh to twentieth centuries).
"One of the purposes of this dig is an educational one," Gibson said. "One of the ways it can be used is to try to understand the different cultures that had possession of Jerusalem at different points in time. The Islamic part of this is not fully understood, at least not in terms of the domestic picture."
Read more at Science Daily
Why Do Young Adults Start Smoking?
The risk of becoming a smoker among young adults who have never smoked is high: 14% will become smokers between the ages of 18 and 24, and three factors predict this behaviour. “Smoking initiation also occurs among young adults, and in particular among those who are impulsive, have poor grades, or who use alcohol regularly,” said Jennifer O'Loughlin, a Professor at the University of Montreal School of Public Health (ESPUM) and author of a Journal of Adolescent Health study published in August. O’Loughlin believes smoking prevention campaigns should also target young adults aged 18 to 24.
A recent phenomenon
With smoking rates declining markedly in the past three decades, the researchers cited several studies suggesting that the tobacco industry is increasing its efforts to appeal to young adults.
In the United States, the number of young adults who start smoking after high school has increased by 50%.
This trend prompted O’Loughlin and her team at the ESPUM to identify predictors of young adults starting to smoke which may lead to avenues for prevention.
They analyzed data from a cohort study called “NDIT” (Nicotine Dependence in Teens), which began in 1999 in the Greater Montreal Area, in which nearly 1,300 young people aged 12-13 took part.
In this cohort, fully 75% tried smoking. Of these young people, 44% began smoking before high school; 43% began smoking during high school, and 14% began after high school.
Not all of them continued smoking, however. Among the “late” smokers (those who began after high school), the researchers found that smoking onset is associated with three risk factors: high levels of impulsivity, poor school performance, and higher alcohol consumption.
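The reported percentages can be turned into rough headcounts with a little arithmetic. This is a sketch that assumes the cohort is exactly 1,300 participants; the article says "nearly 1,300", so these are approximations:

```python
# Rough headcounts implied by the NDIT cohort percentages reported above.
# Assumption: the cohort is taken as exactly 1,300 ("nearly 1,300" in the text).
cohort = 1300
ever_tried = round(cohort * 0.75)     # 75% tried smoking at some point

# Of those who tried, when they began:
before_hs = round(ever_tried * 0.44)  # before high school
during_hs = round(ever_tried * 0.43)  # during high school
after_hs = round(ever_tried * 0.14)   # after high school (the "late" initiators)

print(ever_tried, before_hs, during_hs, after_hs)
```

Note that the three subgroups sum to 101% in the article, a sign that the reported figures are themselves rounded.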
Explaining the three risk factors
Some late smokers showed greater impulsivity compared to the other participants in the study. According to O’Loughlin, it is possible that impulsivity is more freely expressed when one becomes an adult, since parents are no longer there to exert control. “We can postulate that parents of impulsive children exercise tighter control when they are living with them at home to protect their children from adopting behaviours that can lead to smoking, and this protection may diminish over time,” she explains.
In addition, school difficulties increase the risk of becoming a smoker because they are related to dropping out of school and seeking employment in workplaces where smoking rates are higher.
Finally, since young people are more likely to frequent places where they can consume alcohol, they are more prone to be influenced by smokers, or at least be more easily tempted. “Since alcohol reduces inhibitions and self-control, it is an important risk factor for beginning to smoke,” warns O’Loughlin.
Toward targeted prevention campaigns
Smoking prevention campaigns usually target teenagers because studies show that people usually begin to smoke at age 12 or 13. The phenomenon is well known, and numerous prevention programs are geared toward this age group.
“Our study indicates that it is also important to address prevention among young adults, especially because advertising campaigns of tobacco companies specifically target this group," says O'Loughlin.
Read more at Science Daily
Ten-Year Project Redraws the Map of Bird Brains
Explorers need good maps, which they often end up drawing themselves.
Pursuing their interests in using the brains of birds as a model for the human brain, an international team of researchers led by Duke neuroscientist Erich Jarvis and his collaborators Chun-Chun Chen and Kazuhiro Wada have just completed a mapping of the bird brain based on a 10-year exploration of the tiny cerebrums of eight species of birds.
In a special issue appearing online in the Journal of Comparative Neurology, two papers from the Jarvis group propose a dramatic redrawing of some boundaries and functional areas based on a computational analysis of the activity of 52 genes across 23 areas of the bird brain.
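To give a flavour of what such a computational analysis involves, here is a toy sketch. This is not the authors' actual pipeline, and it uses synthetic random numbers in place of real measurements; it only illustrates the general idea of grouping brain regions by the similarity of their gene-activity profiles:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy illustration only -- synthetic data, not the study's measurements.
# 23 hypothetical brain regions, each described by the activity of 52 genes,
# matching the dimensions mentioned in the article.
rng = np.random.default_rng(0)
n_regions, n_genes = 23, 52
expression = rng.normal(size=(n_regions, n_genes))

# Make two regions deliberately similar, mimicking the finding that cell
# populations on either side of the ventricle share expression patterns.
expression[1] = expression[0] + rng.normal(scale=0.1, size=n_genes)

# Hierarchical clustering on correlation distance between region profiles.
tree = linkage(expression, method="average", metric="correlation")
labels = fcluster(tree, t=4, criterion="maxclust")  # cut into 4 clusters

# The two near-identical regions land in the same cluster.
print(labels[0] == labels[1])
```

Clusters of regions with matching expression profiles are exactly the kind of evidence used to argue that two anatomically separated cell populations are really the same cell type.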
Jarvis, who is a professor of neurobiology at Duke, member of the Duke Institute for Brain Sciences, and a Howard Hughes Medical Institute investigator, said the most important takeaway from the new map is that the brains of all vertebrates, a group that includes birds as well as humans, have some important similarities that can be useful to research.
Most significantly, the new map argues for and supports the existence of columnar organization in the bird brain. "Columnar organization is a rule, rather than an exception found only in mammals," Jarvis said. "One way I visualize this view is that the avian brain is one big, giant gyrus folding around a ventricle space, functioning like what you'd find in the mammalian brain," he said.
To create different patterns of gene expression for the analysis, the birds were exposed to various environmental factors such as darkness or light, silence or bird song, hopping on a treadmill, and in the case of migratory warblers, a magnetic field that stimulated their navigational circuits.
The new map follows up on a 2004 model, proposed by an Avian Brain Nomenclature Consortium, also led by Jarvis and colleagues, which officially changed the century-old prevailing view that the avian brain contained mostly primitive regions. They argued instead that the avian brain has a cortical-like area and other forebrain regions similar to mammals, but organized differently.
"The change in terminology is small this time, but the change in concept is big," Jarvis said. For this special issue, the Journal of Comparative Neurology commissioned a commentary by Juan Montiel and Zoltan Molnar, experts in brain evolution, to summarize the large amount of data presented in the studies by the Jarvis group.
One of the major findings is that two populations of cells on either side of a void called the ventricle are actually the same cell types with similar patterns of gene expression. Earlier investigators had thought of the ventricle as a physical barrier separating cell types, but in development studies led by Jarvis' postdoctoral fellow Chun-Chun Chen, the Duke researchers showed how dividing cells spread in a sheet and flow around the ventricle as they multiply.
The new map simplifies the bird cortex, called the pallium, from seven populations of cells down to four major populations. Humans have five populations of cells in six layers.
Part of this refinement is simply that the tools are getting better, says Harvey Karten, a professor of neurosciences at the University of California-San Diego who proposed a dramatic re-thinking of bird cortical organization in the late 1960s. The best tools in that era were microscopes, specific cell stains and electrophysiology. Karten and colleagues are authors of a fourth paper in the special issue which announces a database of gene expression profiles of the avian brain containing some of the data that the Jarvis group used.
Jarvis said having a more specific map is necessary for properly sampling cell populations for gene expression analysis to do even more functional analysis of how the brain operates. As a next step, his team is considering doing an even more detailed bird map with "several hundred" genes rather than the 52 used to make this map.
Jarvis and colleagues are working now on a similar mapping of the crocodile brain with the ultimate goal of being able to say something about how dinosaur brains were organized, since both birds and crocs are descended from them. At a Society for Neuroscience conference in November, they'll be presenting some early findings from that project.
Though the specifics of this newest map may only be of interest within the bird research community, Jarvis said, it builds the awareness that birds can be a useful model for many questions about the human brain.
"Where does the mammalian brain come from?" Karten asks. "And what's the origin of these structures at the cellular and molecular level?" Some neuroscientists have argued that the mammalian cortex -- the one we have -- is something apart from the brains of other vertebrates. Jarvis and Karten now think vertebrate brains have more commonalities than differences.
Read more at Science Daily
How Birds Got Their Wings: Fossil Data Show Scaling of Limbs Altered as Birds Originated from Dinosaurs
Birds originated from a group of small, meat-eating theropod dinosaurs called maniraptorans sometime around 150 million years ago. Recent findings from around the world show that many maniraptorans were very bird-like, with feathers, hollow bones, small body sizes and high metabolic rates.
But the question remains, at what point did forelimbs evolve into wings -- making it possible to fly?
McGill University professor Hans Larsson and a former graduate student, Alexander Dececchi, set out to answer that question by examining fossil data, greatly expanded in recent years, from the period marking the origin of birds.
In a study published in the September issue of Evolution, Larsson and Dececchi find that throughout most of the history of carnivorous dinosaurs, limb lengths showed a relatively stable scaling relationship to body size. This is despite a 5000-fold difference in mass between Tyrannosaurus rex and the smallest feathered theropods from China. This limb scaling changed, however, at the origin of birds, when both the forelimbs and hind limbs underwent a dramatic decoupling from body size. This change may have been critical in allowing early birds to evolve flight, and then to exploit the forest canopy, the authors conclude.
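The kind of scaling analysis described above can be sketched numerically. Under a stable allometric relationship, limb length L and body mass M follow a power law, L = a * M^b, which is a straight line on log-log axes; a lineage whose limbs "decouple" from body size falls off that line. The data below are synthetic, invented purely for illustration, not the study's measurements:

```python
import numpy as np

# Toy illustration of allometric scaling (synthetic data, not the study's).
# Under a stable scaling relationship, limb length L and body mass M obey
# L = a * M**b, which is linear on log-log axes: log L = log a + b * log M.
rng = np.random.default_rng(42)

# Simulated "non-avian theropods": masses spanning a 5000-fold range,
# limb lengths following a single power law with a little noise.
mass = np.exp(rng.uniform(np.log(2.0), np.log(10000.0), size=40))        # kg
b_true, a_true = 0.35, 8.0                                               # assumed values
limb = a_true * mass**b_true * np.exp(rng.normal(scale=0.05, size=40))   # cm

# Fit the scaling exponent by least squares in log space.
b_fit, log_a_fit = np.polyfit(np.log(mass), np.log(limb), 1)
print(round(b_fit, 2))  # recovers an exponent close to b_true

# A lineage whose forelimbs "decouple" from body size would sit far off this
# line: for a 1 kg animal the fitted law predicts roughly this limb length,
# so a much longer forelimb at that mass signals a changed scaling rule.
predicted = np.exp(log_a_fit + b_fit * np.log(1.0))
print(predicted)
```

Fitting in log space is the standard way to estimate an allometric exponent, because multiplicative biological variation becomes additive after taking logarithms.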
As forelimbs lengthened, they became long enough to serve as an airfoil, allowing for the evolution of powered flight. When coupled with the shrinking of the hind limbs, this helped refine flight control and efficiency in early birds. Shorter legs would have aided in reducing drag during flight -- the reason modern birds tuck their legs as they fly -- and also in perching and moving about on small branches in trees. This combination of better wings with more compact legs would have been critical for the survival of birds in a time when another group of flying reptiles, the pterosaurs, dominated the skies and competed for food.
"Our findings suggest that birds underwent an abrupt change in their developmental mechanisms, such that their forelimbs and hind limbs became subject to different length controls," says Larsson, Canada Research Chair in Macroevolution at McGill's Redpath Museum. Deviations from the rules of how an animal's limbs scale with changes in body size -- another example is the relatively long legs and short arms of humans -- usually indicate some major shift in function or behaviour. "This decoupling may be fundamental to the success of birds, the most diverse class of land vertebrates on Earth today."
"The origin of birds and powered flight is a classic major evolutionary transition," says Dececchi, now a postdoctoral researcher at the University of South Dakota. "Our findings suggest that the limb lengths of birds had to be dissociated from general body size before they could radiate so successfully. It may be that this fact is what allowed them to become more than just another lineage of maniraptorans and led them to expand to the wide range of limb shapes and sizes present in today's birds."
"This work, coupled with our previous findings that the ancestors of birds were not tree dwellers, does much to illuminate the ecology of bird antecedents," says Dececchi. "Knowing where birds came from, and how they got to where they are now, is crucial for understanding how the modern world came to look the way it does."
Read more at Science Daily
But the question remains, at what point did forelimbs evolve into wings -- making it possible to fly?
McGill University professor Hans Larsson and a former graduate student, Alexander Dececchi, set out to answer that question by examining fossil data, greatly expanded in recent years, from the period marking the origin of birds.
In a study published in the September issue of Evolution, Larsson and Dececchi find that throughout most of the history of carnivorous dinosaurs, limb lengths showed a relatively stable scaling relationship to body size. This is despite a 5000-fold difference in mass between Tyrannosaurus rex and the smallest feathered theropods from China. This limb scaling changed, however, at the origin of birds, when both the forelimbs and hind limbs underwent a dramatic decoupling from body size. This change may have been critical in allowing early birds to evolve flight, and then to exploit the forest canopy, the authors conclude.
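The stable scaling relationship Larsson and Dececchi describe is an allometric one: limb length L tracks body mass M as L = a·M^b, which appears as a straight line of slope b on a log-log plot. A minimal sketch of how such an exponent is estimated (the data here are made up for illustration, not from the study):

```python
import math

def scaling_exponent(masses, lengths):
    """Least-squares slope of log(length) against log(mass):
    the allometric exponent b in L = a * M**b."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical data: limb length tracking body mass with exponent b = 0.33
masses = [1, 10, 100, 1000, 10000]           # kg, spanning a wide mass range
lengths = [0.1 * m ** 0.33 for m in masses]  # metres

b = scaling_exponent(masses, lengths)
print(round(b, 2))  # → 0.33
```

A "decoupling" of the kind the authors report would show up as bird limb data falling well off the line fitted to the other theropods.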
As forelimbs lengthened, they became long enough to serve as an airfoil, allowing for the evolution of powered flight. When coupled with the shrinking of the hind limbs, this helped refine flight control and efficiency in early birds. Shorter legs would have aided in reducing drag during flight -- the reason modern birds tuck their legs as they fly -- and also in perching and moving about on small branches in trees. This combination of better wings with more compact legs would have been critical for the survival of birds in a time when another group of flying reptiles, the pterosaurs, dominated the skies and competed for food.
"Our findings suggest that birds underwent an abrupt change in their developmental mechanisms, such that their forelimbs and hind limbs became subject to different length controls," says Larsson, Canada Research Chair in Macroevolution at McGill's Redpath Museum. Deviations from the rules of how an animal's limbs scale with changes in body size -- another example is the relatively long legs and short arms of humans -- usually indicate some major shift in function or behaviour. "This decoupling may be fundamental to the success of birds, the most diverse class of land vertebrates on Earth today."
"The origin of birds and powered flight is a classic major evolutionary transition," says Dececchi, now a postdoctoral researcher at the University of South Dakota. "Our findings suggest that the limb lengths of birds had to be dissociated from general body size before they could radiate so successfully. It may be that this fact is what allowed them to become more than just another lineage of maniraptorans and led them to expand to the wide range of limb shapes and sizes present in today's birds."
"This work, coupled with our previous findings that the ancestors of birds were not tree dwellers, does much to illuminate the ecology of bird antecedents," says Dr. Dececchi. "Knowing where birds came from, and how they got to where they are now, is crucial for understanding how the modern world came to look the way it is."
Read more at Science Daily
Sep 16, 2013
Time Is in the Eye of the Beholder: Time Perception in Animals Depends On Their Pace of Life
An international collaboration led by scientists from Trinity College Dublin, including researchers from the University of Edinburgh and the University of St Andrews, has shown that animals' ability to perceive time is linked to their pace of life.
The rate at which time is perceived varies across animals. Flies, for example, owe their skill at dodging rolled-up newspapers to their ability to observe motion on finer timescales than our own eyes can achieve, letting them evade the blow much like the "bullet time" sequence in the film The Matrix. In contrast, one species of tiger beetle runs faster than its eyes can keep up with, essentially going blind and forcing it to stop periodically to re-evaluate its prey's position. Even in humans, athletes in various sports have been shown to quicken their eyes' ability to track moving balls during games.
The study, just published in the international journal Animal Behaviour, showed that small-bodied animals with fast metabolic rates, such as some birds, perceive more information in a unit of time -- and hence experience time more slowly -- than large-bodied animals with slow metabolic rates, such as large turtles.
Commenting on the findings, Assistant Professor at the School of Natural Sciences at Trinity College Dublin, Andrew Jackson, said: "Ecology for an organism is all about finding a niche where you can succeed that no-one else can occupy. Our results suggest that time perception offers an as yet unstudied dimension along which animals can specialise and there is considerable scope to study this system in more detail. We are beginning to understand that there is a whole world of detail out there that only some animals can perceive and it's fascinating to think of how they might perceive the world differently to us."
"Our results lend support to the importance of time perception in animals, where the ability to perceive time on very small scales may be the difference between life and death for fast-moving organisms such as predators and their prey," commented lead author Kevin Healy, PhD student at the School of Natural Sciences, Trinity College Dublin. This variation in time perception can be measured across animals using a phenomenon called the critical flicker fusion frequency. The phenomenon, based on the maximum rate of flashes of light an individual can see before the light source is perceived as constant, is the principle behind the illusion of non-flashing television, computer and cinema screens. It is also why pet dogs see televisions flicker: their eyes refresh at a rate higher than that of the screen.
The researchers took advantage of this phenomenon to explain the observed variation in time perception across a broad range of animals, showing that animals that would be expected to be agile possess the most refined ability to see time at high resolutions.
Professor Graeme Ruxton of the University of St Andrews in Scotland, who collaborated on the research project, said: "Having eyes that send updates to the brain at much higher frequencies than our eyes do is of no value if the brain cannot process that information equally quickly. Hence, this work highlights the impressive capabilities of even the smallest animal brains. Flies might not be deep thinkers but they can make good decisions very quickly."
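The flicker-fusion idea is simple to state in code: a flashing source looks steady only once its rate exceeds the viewer's critical flicker fusion frequency. The CFF values below are rough, textbook-style figures chosen for illustration, not numbers from the study:

```python
def looks_continuous(refresh_hz: float, cff_hz: float) -> bool:
    """True if a source flashing at refresh_hz is perceived as steady
    by a viewer whose critical flicker fusion frequency is cff_hz."""
    return refresh_hz > cff_hz

# Approximate, illustrative CFF values in Hz (assumed, not from the study)
cff = {"human": 60, "dog": 80, "fly": 250}

screen_hz = 75  # a typical CRT television refresh rate
print(looks_continuous(screen_hz, cff["human"]))  # → True: looks steady to us
print(looks_continuous(screen_hz, cff["dog"]))    # → False: the dog sees flicker
```

The same comparison, run across species with measured CFF values, is essentially what let the researchers rank animals by the temporal resolution of their vision.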
Read more at Science Daily
Magnetic Jet Shows How Stars Begin Their Final Transformation
An international team of astronomers have for the first time found a jet of high-energy particles emanating from a dying star. The discovery, by a collaboration of scientists from Sweden, Germany and Australia, is a crucial step in explaining how some of the most beautiful objects in space are formed -- and what happens when stars like the sun reach the end of their lives.
The researchers publish their results in the journal Monthly Notices of the Royal Astronomical Society.
At the end of their lives, stars like the sun transform into some of the most beautiful objects in space: amazing symmetric clouds of gas called planetary nebulae. But how planetary nebulae get their strange shapes has long been a mystery to astronomers.
Scientists at Chalmers University of Technology in Sweden have together with colleagues from Germany and Australia discovered what could be the key to the answer: a high-speed, magnetic jet from a dying star.
Using the CSIRO Australia Telescope Compact Array, an array of six 22-metre radio telescopes in New South Wales, Australia, they studied a star at the end of its life. The star, known as IRAS 15445−5449, is in the process of becoming a planetary nebula, and lies 23,000 light years away in the southern constellation Triangulum Australe (the Southern Triangle).
"In our data we found the clear signature of a narrow and extremely energetic jet of a type which has never been seen before in an old, sun-like star," says Andrés Pérez Sánchez, graduate student in astronomy at Bonn University, who led the study.
The strength of the radio waves at different frequencies from the star matches the expected signature for a jet of high-energy particles which are, thanks to strong magnetic fields, accelerated up to speeds close to the speed of light. Similar jets have been seen in many other types of astronomical object, from newborn stars to supermassive black holes.
"What we're seeing is a powerful jet of particles spiralling through a strong magnetic field," says Wouter Vlemmings, astronomer at Onsala Space Observatory, Chalmers. "Its brightness indicates that it's in the process of creating a symmetric nebula around the star."
Right now the star is going through a short but dramatic phase in its development, the scientists believe.
"The radio signal from the jet varies in a way that means that it may only last a few decades. Over the course of just a few hundred years the jet can determine how the nebula will look when it finally gets lit up by the star," says team member Jessica Chapman, astronomer at CSIRO in Sydney, Australia.
Read more at Science Daily
World's Most Vulnerable Areas to Climate Change Mapped
Using data from the world's ecosystems and predictions of how climate change will impact them, scientists from the Wildlife Conservation Society, the University of Queensland, and Stanford University have produced a roadmap that identifies the world's most vulnerable and least vulnerable areas in the Age of Climate Change.
The authors say the vulnerability map will help governments, environmental agencies, and donors identify where best to invest in protected-area establishment, restoration efforts, and other conservation activities, so as to get the biggest return on investment in saving ecosystems and the services they provide to wildlife and people alike.
The study appears in an online version of the journal Nature Climate Change. The authors include: Dr James Watson of the Wildlife Conservation Society and the University of Queensland; Dr Takuya Iwamura of Stanford University; and Nathalie Butt of the University of Queensland.
"We need to realize that climate change is going to impact ecosystems both directly and indirectly in a variety of ways, and we can't keep on assuming that all adaptation actions are suitable everywhere. The fact is there are only limited funds out there, and we need to start to be clever in our investments in adaptation strategies around the world," said Dr. James Watson, Director of WCS's Climate Change Program and lead author of the study. "The analysis and map in this study is a means of bringing clarity to complicated decisions on where limited resources will do the most good."
The researchers argue that almost all climate change assessments to date are incomplete: they assess how future climate change will impact landscapes and seascapes without considering that most of these landscapes have already been modified by human activities in ways that make them more or less susceptible to climate change.
A vulnerability map produced in the study examines the relationship of two metrics: how intact an ecosystem is, and how stable the ecosystem is going to be under predictions of future climate change. The analysis creates a rating system with four general categories for the world's terrestrial regions, with management recommendations determined by the combination of factors.
Ecosystems with highly intact vegetation and high relative climate stability, for instance, are the best locations for future protected areas, as these have the best chance of retaining species. In contrast, ecosystems with low levels of vegetation and high relative climate stability could merit efforts at habitat restoration. Ecosystems with low levels of vegetation intactness and low climate stability would be most at risk and would require significant levels of investment to achieve conservation outcomes.
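The four-way rating can be read as a simple decision table over the two metrics. A toy encoding is below; note that the article spells out actions for only three of the four combinations, so the "monitor" label for intact-but-climatically-unstable ecosystems is my placeholder, not the study's term:

```python
def recommendation(intact: bool, stable: bool) -> str:
    """Toy version of the study's rating: vegetation intactness
    crossed with projected climate stability."""
    if intact and stable:
        return "protect"    # best candidates for new protected areas
    if not intact and stable:
        return "restore"    # habitat restoration could pay off
    if intact and not stable:
        return "monitor"    # placeholder: category not named in the article
    return "high-risk"      # significant investment needed for outcomes

print(recommendation(True, True))    # → protect
print(recommendation(False, False))  # → high-risk
```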
The new map, the authors say, identifies southern and southeastern Asia, western and central Europe, eastern South America, and southern Australia as some of the most vulnerable regions. The analysis differs from previous assessments, which were based on climate change exposure alone and identified the most vulnerable regions as central Africa, northern South America, and northern Australia.
Read more at Science Daily
TV Drug Ads: The Whole Truth?
Consumers should be wary when watching those advertisements for pharmaceuticals on the nightly TV news, as six out of 10 claims could potentially mislead the viewer, say researchers in an article published in the Journal of General Internal Medicine.
Researchers Adrienne E. Faerber of The Dartmouth Institute for Health Policy & Clinical Practice and David H. Kreling of The University of Wisconsin-Madison School of Pharmacy found that potentially misleading claims are prevalent throughout consumer-targeted prescription and non-prescription drug advertisements on television.
Over the past 15 years, researchers and policymakers have debated whether drug advertising informs consumers about new drugs, or persuades consumers to take medicines that they may not need. "Healthcare consumers need unrestricted access to high-quality information about health," said Faerber of The Dartmouth Institute, "but these TV drug ads had misleading statements that omitted or exaggerated information. These results conflict with arguments that drug ads are helping inform consumers."
Pharmaceutical companies spent $4.8 billion on consumer advertising for prescription drugs in 2009, surpassing the $3 billion spent that year on consumer promotion of nonprescription products, the researchers said.
Content for this study came from the Vanderbilt TV News Archive, an indexed archive of recordings of the nightly news broadcasts (both news and commercial segments) on ABC, CBS, and NBC since 1968 and on CNN since 1992. The researchers viewed advertisements aired in the 6:30 pm EST period, a desirable slot for drug advertisers because of the older audience the nightly news attracts.
The researchers reviewed 168 TV advertisements for prescription and over-the-counter drugs aired between 2008 and 2010, and identified statements that were strongly emphasized in the ad. A team of trained analysts then classified those claims as being truthful, potentially misleading or false.
They found that false claims, which are factually false or unsubstantiated, were rare, with only 1 in 10 claims false. False advertising is illegal and can lead to criminal and civil penalties.
Most claims were potentially misleading -- 6 in 10 claims left out important information, exaggerated information, provided opinions, or made meaningless associations with lifestyles, the researchers said.
False or potentially misleading claims may be more frequent in over-the-counter drug ads than in ads for prescription drugs: 6 of 10 claims in prescription drug ads were misleading or false, compared with 8 of 10 claims in OTC drug ads.
The Food and Drug Administration oversees prescription drug advertising while the Federal Trade Commission oversees advertising for nonprescription drugs.
The FDA and FTC have different definitions of false and misleading claims. For example, the FDA interpretation says prescription drug advertising must include information about the harms of the drug, but information on harms is left out of most OTC drug ads.
The researchers noted some limitations of the study method: the sample was drawn from a 30-minute period of the TV broadcast day on four major networks, and so does not represent all ads on TV. They also analyzed only what they judged to be the most-emphasized claim in each advertisement, and the coders had to interpret the meaning of claims to facilitate analysis, which introduced some subjectivity.
Read more at Science Daily
Sep 15, 2013
Tropical Forest Carbon Absorption May Hinge On an Odd Couple
A unique housing arrangement between a specific group of tree species and carbo-loading bacteria may determine how well tropical forests can absorb carbon dioxide from the atmosphere, according to a Princeton University-based study. The findings suggest that the role of tropical forests in offsetting the atmospheric buildup of carbon from fossil fuels depends on tree diversity, particularly in forests recovering from exploitation.
Tropical forests thrive on natural nitrogen fertilizer pumped into the soil by trees in the legume family, a diverse group that includes beans and peas, the researchers report in the journal Nature. The researchers studied second-growth forests in Panama that had been used for agriculture five to 300 years ago. The presence of legume trees ensured rapid forest growth in the first 12 years of recovery and thus a substantial carbon "sink," or carbon-storage capacity. Tracts of land that were pasture only 12 years before had already accumulated as much as 40 percent of the carbon found in fully mature forests. Legumes contributed more than half of the nitrogen needed to make that happen, the researchers reported.
These fledgling woodlands had the capacity to store 50 metric tons of carbon per hectare (2.47 acres), which equates to roughly 185 tons of carbon dioxide, or the exhaust of some 21,285 gallons of gasoline. That much fuel would take the average car in the United States more than half a million miles. Though the legumes' nitrogen fertilizer output waned in later years, the species nonetheless took up carbon at rates that were up to nine times faster than non-legume trees.
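The conversions in this paragraph are easy to check. Assuming standard factors (the molar mass ratio of CO2 to carbon, plus EPA-style figures of about 8.9 kg of CO2 per gallon of gasoline and roughly 23.6 miles per gallon for the average US car), the numbers come out close to those quoted:

```python
C_PER_HA_KG = 50_000        # 50 metric tons of carbon per hectare
CO2_PER_C = 44.01 / 12.011  # molar mass ratio, CO2 to C

co2_tons = C_PER_HA_KG * CO2_PER_C / 1000
gallons = co2_tons * 1000 / 8.887  # assumed kg CO2 per gallon of gasoline
miles = gallons * 23.6             # assumed average US fuel economy (mpg)

print(round(co2_tons))  # → 183, close to the "roughly 185" quoted
print(round(gallons))   # ~21,000 gallons
print(round(miles))     # ~490,000 miles, near the half-million quoted
```

The small discrepancies suggest the article's figures rest on slightly different emission and fuel-economy factors than those assumed here.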
The legumes' secret is a process known as nitrogen fixation, carried out in concert with infectious bacteria known as rhizobia, which dwell in little pods, called root nodules, inside the tree's roots. As a nutrient, nitrogen is essential for plant growth, but tropical soil is short on nitrogen and surprisingly non-nutritious for trees. Legumes use secretions to invite rhizobia living in the soil to infect their roots, and the bacteria signal back to initiate nodule growth. The rhizobia move into the root cells of the host plant and -- in exchange for carbohydrates the tree produces by photosynthesis -- convert nitrogen in the air into the fertilizer form that plants need. Excess nitrogen from the legume eventually creates a nitrogen cycle that benefits neighboring trees.
By nurturing bigger, healthier trees that take up more carbon, legumes have a newly realized importance when it comes to influencing atmospheric carbon dioxide, said second author Lars Hedin, a Princeton professor of ecology and evolutionary biology and the Princeton Environmental Institute. Scientists have recently put numbers on how much carbon forests as a whole absorb, with a recent paper suggesting that the world's forests took up roughly 2.4 billion metric tons of carbon per year from 1990 to 2007.
"Tropical forests are a huge carbon sink. If trees could just grow and store carbon, you could have a rapid sink, but if they don't have enough nitrogen they don't take up carbon," said Hedin, adding that nitrogen-fixing trees are uncommon in temperate forests such as those in most of North America and Europe.
"Legumes are a group of plants that perform a valuable function, but no one knew how much they help with the carbon sink," Hedin said. "This work shows that they may be critical for the carbon sink, and that the level of biodiversity in a tropical forest may determine the size of the carbon sink."
First author Sarah Batterman, a postdoctoral research associate in Hedin's research group, said legumes, or nitrogen fixers, are especially important for forests recovering from agricultural use, logging, fire or other human activities. The researchers studied 16 forest plots that were formerly pasture and are maintained by the Smithsonian Tropical Research Institute (STRI).
Forest degradation, however, comes with a loss of biodiversity that can affect nitrogen fixers, too, even though legumes are not specifically coveted or threatened, Batterman said. If the numbers and diversity of nitrogen fixers plummet then the health of the surrounding forest would likely be affected for a very long time.
"This study is showing that there is an important place for nitrogen fixation in these disturbed areas," Batterman said. "Nitrogen fixers are a component of biodiversity and they're really important for the function of these forests, but we do not know enough about how this valuable group of trees influences forests. While some species may thrive on disturbance, others are in older forests where they may be sensitive to human activities."
The researchers found that the nine legume species they studied did not contribute nitrogen to surrounding trees at the same time. Certain species were more active in the youngest forests, others in middle-aged forests, and still other species went into action mainly in 300-year-old tracts, though not nearly to the same extent as legumes in younger plots. The researchers found that individual trees reduced their fixation as nitrogen accumulated in soils, with the number of legumes actively fixing nitrogen dropping from 71 to 23 percent between 12- and 80-year-old forests.
"In that way, the diversity of species that are present in the forest is really critical because it ensures that there can be fixation at all different time periods of forest recovery whenever it's necessary," Batterman said. "If you were to lose one of those species and it turned out to be essential for a specific time period, fixation might drop dramatically."
Such details can improve what scientists know about future climate change, Batterman said. Computer models that calculate the global balance of atmospheric carbon dioxide also must factor in sinks that offset carbon, such as tropical forests. And if forests take up carbon differently depending on the abundance and diversity of legumes, models should reflect that variation, she said. Batterman is currently working with Princeton Assistant Professor of Geosciences David Medvigy on a method for considering nitrogen fixation in models.
Read more at Science Daily
Tropical forests thrive on natural nitrogen fertilizer pumped into the soil by trees in the legume family, a diverse group that includes beans and peas, the researchers report in the journal Nature. The researchers studied second-growth forests in Panama that had been used for agriculture five to 300 years ago. The presence of legume trees ensured rapid forest growth in the first 12 years of recovery and thus a substantial carbon "sink," or carbon-storage capacity. Tracts of land that were pasture only 12 years before had already accumulated as much as 40 percent of the carbon found in fully mature forests. Legumes contributed more than half of the nitrogen needed to make that happen, the researchers reported.
These fledgling woodlands had the capacity to store 50 metric tons of carbon per hectare (2.47 acres), which equates to roughly 185 tons of carbon dioxide, or the exhaust of some 21,285 gallons of gasoline. That much fuel would take the average car in the United States more than half a million miles. Though the legumes' nitrogen fertilizer output waned in later years, the species nonetheless took up carbon at rates that were up to nine times faster than non-legume trees.
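The conversions above can be checked with back-of-envelope arithmetic. A minimal sketch follows; the emission factor and fuel-economy figures are common published values (the EPA factor for gasoline and a rough U.S. fleet average), not numbers taken from the study itself:

```python
# Sanity-check the article's carbon-to-CO2-to-gasoline conversions.
# Assumed factors (not from the study): EPA gasoline factor, U.S. fleet mpg.

CARBON_PER_HECTARE_T = 50            # metric tons of carbon stored per hectare
CO2_PER_C = 44.0 / 12.0              # molar mass ratio of CO2 to carbon
KG_CO2_PER_GALLON = 8.887            # EPA factor for one gallon of gasoline
AVG_CAR_MPG = 24.7                   # approximate U.S. fleet average (assumed)

co2_tons = CARBON_PER_HECTARE_T * CO2_PER_C       # ~183 metric tons of CO2
gallons = co2_tons * 1000 / KG_CO2_PER_GALLON     # ~20,600 gallons
miles = gallons * AVG_CAR_MPG                     # over half a million miles

print(round(co2_tons), round(gallons), round(miles))
```

The result lands close to the article's "roughly 185 tons" and "more than half a million miles," with the small differences attributable to rounding in the original figures.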
The legumes' secret is a process known as nitrogen fixation, carried out in concert with symbiotic bacteria called rhizobia, which dwell in small growths on the tree's roots known as root nodules. As a nutrient, nitrogen is essential for plant growth, but tropical soil is short on nitrogen and surprisingly non-nutritious for trees. Legumes use root secretions to invite rhizobia living in the soil to infect their roots, and the bacteria signal back to initiate nodule growth. The rhizobia move into the root cells of the host plant and, in exchange for carbohydrates the tree produces by photosynthesis, convert nitrogen from the air into the fertilizer form that plants need. Excess nitrogen from the legume eventually enriches the soil, creating a nitrogen cycle that benefits neighboring trees.
By nurturing bigger, healthier trees that take up more carbon, legumes have a newly realized importance in influencing atmospheric carbon dioxide, said second author Lars Hedin, a Princeton professor of ecology and evolutionary biology and the Princeton Environmental Institute. Scientists have recently put numbers on how much carbon forests as a whole absorb, with a recent paper suggesting that the world's forests took up roughly 2.4 billion metric tons of carbon per year between 1990 and 2007.
"Tropical forests are a huge carbon sink. If trees could just grow and store carbon, you could have a rapid sink, but if they don't have enough nitrogen they don't take up carbon," said Hedin, adding that nitrogen-fixing trees are uncommon in temperate forests such as those in most of North America and Europe.
"Legumes are a group of plants that perform a valuable function, but no one knew how much they help with the carbon sink," Hedin said. "This work shows that they may be critical for the carbon sink, and that the level of biodiversity in a tropical forest may determine the size of the carbon sink."
First author Sarah Batterman, a postdoctoral research associate in Hedin's research group, said legumes, or nitrogen fixers, are especially important for forests recovering from agricultural use, logging, fire or other human activities. The researchers studied 16 forest plots that were formerly pasture and are maintained by the Smithsonian Tropical Research Institute (STRI).
Forest degradation, however, comes with a loss of biodiversity that can affect nitrogen fixers, too, even though legumes are not specifically coveted or threatened, Batterman said. If the numbers and diversity of nitrogen fixers plummet then the health of the surrounding forest would likely be affected for a very long time.
"This study is showing that there is an important place for nitrogen fixation in these disturbed areas," Batterman said. "Nitrogen fixers are a component of biodiversity and they're really important for the function of these forests, but we do not know enough about how this valuable group of trees influences forests. While some species may thrive on disturbance, others are in older forests where they may be sensitive to human activities."
The researchers found that the nine legume species they studied did not all contribute nitrogen to surrounding trees at the same time. Certain species were most active in the youngest forests, others in middle-aged forests, and still others went into action mainly in 300-year-old tracts, though not nearly to the same extent as legumes in younger plots. Individual trees also reduced their fixation as nitrogen accumulated in the soil, with the proportion of legumes actively fixing nitrogen dropping from 71 percent in 12-year-old forests to 23 percent in 80-year-old forests.
"In that way, the diversity of species that are present in the forest is really critical because it ensures that there can be fixation at all different time periods of forest recovery whenever it's necessary," Batterman said. "If you were to lose one of those species and it turned out to be essential for a specific time period, fixation might drop dramatically."
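The reported decline in active fixers can be sketched numerically. The article gives only the two endpoints (71 percent at 12 years, 23 percent at 80 years), so the linear trajectory between them below is purely an illustrative assumption:

```python
# Illustrative interpolation of the reported drop in actively fixing legumes.
# Only the endpoints come from the article; the linear shape is assumed.

def active_fixer_pct(age_years, p12=71.0, p80=23.0):
    """Estimate the percent of legumes actively fixing nitrogen at a forest age."""
    if age_years <= 12:
        return p12
    if age_years >= 80:
        return p80
    # Linear interpolation between the two reported forest ages.
    return p12 + (p80 - p12) * (age_years - 12) / (80 - 12)

print(active_fixer_pct(46))  # midpoint age gives 47.0 under this assumption
```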
Such details can improve what scientists know about future climate change, Batterman said. Computer models that calculate the global balance of atmospheric carbon dioxide also must factor in sinks that offset carbon, such as tropical forests. And if forests take up carbon differently depending on the abundance and diversity of legumes, models should reflect that variation, she said. Batterman is currently working with Princeton Assistant Professor of Geosciences David Medvigy on a method for considering nitrogen fixation in models.
Read more at Science Daily