Scientists at the Helmholtz Center Berlin (HZB) along with researchers at China's Fudan University have characterized a new class of materials called protein crystalline frameworks (PCFs).
Thanks to certain helper substances, the proteins in PCFs are fixed in place so that they align symmetrically, forming highly stable crystals. Next, the HZB and Fudan University researchers plan to investigate how PCFs might be used as functional materials. Their findings are published today in the scientific journal Nature Communications.
Proteins are sensitive molecules. Everyone knows that, at least from boiling eggs: under certain circumstances, such as immersion in boiling water, they denature, losing their natural shape and becoming hard. True, researchers have been able to handle these substances for some time now, even to the point of crystallizing them in their native state. Admittedly, this requires considerable effort, but it is the only way researchers can determine the structure of these substances at high resolution. Moreover, protein crystals are extremely fragile, highly sensitive and hard to handle.
Now, for the first time, scientists at China's Fudan University have managed to work around these drawbacks by linking the protein concanavalin A to helper molecules from the sugar family and to the dye rhodamine. The concanavalin molecules fixed in this way tend to arrange themselves symmetrically within the helper-molecule framework, forming crystals in which the proteins are highly stable and intricately interconnected -- a protein crystalline framework.
Developing molecular structures like these is pointless unless you know exactly how they form and what their structure looks like at the atomic level. While searching for suitable experimental methods, the Shanghai researchers turned to a Chinese scientist working at the HZB for help. She called her colleagues' attention to the MX beamlines at the HZB's electron storage ring BESSY II.
"Here at the HZB, we were able to offer them our highly specialized crystallography stations -- the perfect venue for characterizing PCFs at high resolutions," says Dr. Manfred Weiss, one of the leading scientists working at the HZB-MX laboratory. It quickly became clear that the helper molecules even allowed the researchers to decide how powerfully they wanted them to penetrate the protein frameworks. "This gives the PCFs a great deal of flexibility and variability, which we'll always keep in mind when doing research on potential applications," says Manfred Weiss.
From Science Daily
Aug 23, 2014
Voyager map details Neptune's strange moon Triton
NASA's Voyager 2 spacecraft gave humanity its first close-up look at Neptune and its moon Triton in the summer of 1989. Like an old film, Voyager's historic footage of Triton has been "restored" and used to construct the best-ever global color map of that strange moon. The map, produced by Paul Schenk, a scientist at the Lunar and Planetary Institute in Houston, has also been used to make a movie recreating that historic Voyager encounter, which took place 25 years ago, on August 25, 1989.
The new Triton map has a resolution of 1,970 feet (600 meters) per pixel. The colors have been enhanced to bring out contrast but are a close approximation to Triton's natural colors. Voyager's "eyes" saw in colors slightly different from human eyes, and this map was produced using orange, green and blue filter images.
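The filter-to-channel mapping described above is straightforward to express in code. Below is a minimal sketch, assuming nothing about the actual map-making pipeline: it simply assigns orange, green and blue filter frames to the red, green and blue display channels, with synthetic arrays standing in for real calibrated images.

```python
# Hedged sketch of a three-filter color composite; the 4x4 random
# arrays are placeholders for real Voyager filter frames.
import numpy as np

def make_composite(orange, green, blue):
    """Stack three 2-D filter images into an H x W x 3 RGB array,
    stretching each channel to the 0-1 display range."""
    channels = []
    for img in (orange, green, blue):
        img = img.astype(float)
        span = img.max() - img.min()
        channels.append((img - img.min()) / span if span else img * 0.0)
    return np.dstack(channels)

rng = np.random.default_rng(0)
frames = [rng.uniform(0, 255, size=(4, 4)) for _ in range(3)]
print(make_composite(*frames).shape)  # (4, 4, 3)
```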
In 1989, most of the northern hemisphere was in darkness and unseen by Voyager. Because of the speed of Voyager's visit and the slow rotation of Triton, only one hemisphere was seen clearly at close distance. The rest of the surface was either in darkness or seen as blurry markings.
The production of the new Triton map was inspired by anticipation of NASA's New Horizons encounter with Pluto, coming up a little under a year from now. Among the improvements on the map are updates to the accuracy of feature locations, sharpening of feature details by removing some of the blurring effects of the camera, and improved color processing.
Although Triton is a moon of a planet and Pluto is a dwarf planet, Triton serves as a preview of sorts for the upcoming Pluto encounter. Although both bodies originated in the outer solar system, Triton was captured by Neptune and has undergone a radically different thermal history than Pluto. Tidal heating has likely melted the interior of Triton, producing the volcanoes, fractures and other geological features that Voyager saw on that bitterly cold, icy surface.
Pluto is unlikely to be a copy of Triton, but some of the same types of features may be present. Triton is slightly larger than Pluto, has a very similar internal density and bulk composition, and has the same low-temperature volatiles frozen on its surface. The surface composition of both bodies includes carbon monoxide, carbon dioxide, methane and nitrogen ices.
Voyager also discovered atmospheric plumes on Triton, making it one of the known active bodies in the outer solar system, along with objects such as Jupiter's moon Io and Saturn's moon Enceladus. Scientists will be looking at Pluto next year to see if it will join this list. They will also be looking to see how Pluto and Triton compare and contrast, and how their different histories have shaped the surfaces we see.
Although a fast flyby, New Horizons' Pluto encounter on July 14, 2015, will not be a replay of Voyager but more of a sequel and a reboot, with a new and more technologically advanced spacecraft and, more importantly, a new cast of characters. Those characters are Pluto and its family of five known moons, all of which will be seen up close for the first time next summer.
Triton may not be a perfect preview of coming attractions, but it serves as a prequel to the cosmic blockbuster expected when New Horizons arrives at Pluto next year.
Read more at Science Daily
Aug 22, 2014
Genetic 'Recipe' Found for Lizard Tail Regrowth
Are humans one step closer to being able to regrow spinal cord? Thanks to the green anole lizard, we may well be.
A team of researchers from Arizona State University (ASU) studied the genes of the lizard -- Anolis carolinensis, in classification-speak -- when it was in tail regeneration mode, and the scientists announced they have found a kind of genetic "recipe" for how the lizard works its magic. Their work has been published in the journal PLOS ONE.
"Using next-generation technologies to sequence all the genes expressed during regeneration, we have unlocked the mystery of what genes are needed to regrow the lizard tail," said lead author Kenro Kusumi, professor in ASU's School of Life Sciences and associate dean in the College of Liberal Arts and Sciences, in a press release.
Lizards are the animal most closely related to humans that can grow back whole appendages, Kusumi explained, noting that his team found more than 320 genes the creatures turn on in specific tail regions during regeneration.
"By following the genetic recipe for regeneration that is found in lizards, and then harnessing those same genes in human cells, it may be possible to regrow new cartilage, muscle or even spinal cord in the future," Kusumi said.
The ASU team hopes its work will help break future ground in the treatment of spinal cord injuries, birth defects, and diseases such as arthritis.
From Discovery News
The Surprising Reason Hummingbirds Love Sweets
Nectar-slurping hummingbirds clearly have a taste for sweets — but they shouldn't. Like all other birds, they lack sweet-taste receptors on their palates and long tongues, so in theory, they should be immune to the temptations of sugary foods.
However, new research reveals why hummingbirds feast freely on nectar: At some point in their evolution, the birds transformed a taste receptor that's typically used to detect savory or umami flavors into one that's used to taste sweets instead.
Hummingbirds are constantly wavering between a sugar rush and starvation. Their metabolisms are hyperactive, their hearts can beat 20 times a second, and they often need to eat more than their body weight in food each day to stay alive.
The small birds eat the occasional insect, but they largely subsist on nectar from flowers, which is not a typical source of food for most other birds. As a result, hummingbirds have been able to carve out a distinct environmental niche. The birds can now be found throughout North and South America, in habitats ranging from high-altitude mountains in the Andes to tropical rainforests, and they're quite diverse. They have split into more than 300 species in the estimated 42 million years since they parted from their closest relative, the insect-eating swift.
Scientists have been puzzled by the fact that hummingbirds maintain such a sugary diet without a sweet-taste receptor. For most mammals, the sweet-taste receptor that responds to sugars in plant-based carbohydrates is made up of two proteins: T1R2 and T1R3. The taste receptor that detects savory, or umami, flavors found in meat and mushrooms is made up of the proteins T1R1 and T1R3.
But after the chicken genome was sequenced in 2004, researchers noticed the birds lacked the gene that encodes T1R2, a crucial component of the sweet-taste receptor. This same pattern was seen in other bird genomes.
"If a species is missing one of those two parts, then the species can't taste sweet at all," said Maude Baldwin, a doctoral student of evolutionary biology at Harvard University and one of the researchers on the study.
When scientists sequenced the genomes of cats, lions, tigers and cheetahs — true carnivores that also don't have a taste for sweets — they found these species still have a "pseudogene" (a nonfunctional gene that's lost its protein-coding powers) for the sweet-taste receptor. But in bird genomes, scientists never even found a trace of a pseudogene for a sweet tooth, Baldwin told Live Science.
To figure out what made hummingbirds like sweets despite their lack of the sweet-taste receptor, Baldwin and colleagues cloned the genes for the T1R1-T1R3 taste receptors from omnivorous chickens, insectivorous swifts and nectivorous hummingbirds. The researchers then tested how the taste-receptor proteins produced by these genes reacted to different "flavors" in a cell culture.
For chickens and swifts, the receptor had a strong reaction to the amino acids behind umami flavors. The hummingbird receptor, on the other hand, was only weakly stimulated by umami flavors, but it did respond strongly to the sweet flavors of carbs, the researchers found.
Then, to look for the molecular basis for this change in function, Baldwin and colleagues made taste-receptor hybrids using different parts of the chicken and hummingbird receptors. They found that by mutating the chicken receptor in 19 different places, they could get it to respond to sweets, but the researchers suspect there are more mutations that contributed to the change in hummingbirds.
Read more at Discovery News
Oldest Metal Object in Middle East Discovered in Grave
A copper awl is the oldest metal object unearthed to date in the Middle East. The discovery reveals that metals were exchanged across hundreds of miles in this region more than 6,000 years ago, centuries earlier than previously thought, researchers say.
The artifact was unearthed in Tel Tsaf, an archaeological site in Israel located near the Jordan River and Israel's border with Jordan. The area was a village from about 5100 B.C. to 4600 B.C., and was first discovered in A.D. 1950, with digs taking place from the end of the 1970s up to the present day.
Tel Tsaf possessed large buildings made of mud bricks and a great number of silos that could each store 15 to 30 tons of wheat and barley, an unprecedented scale for the ancient Near East. The village had many roasting ovens in the courtyards, all filled with burnt animal bones, which suggests people held large events there. Moreover, scientists had unearthed items made of obsidian, a volcanic glass with origins in Anatolia or Armenia, as well as shells from the Nile River in Egypt and pottery from either Syria or Mesopotamia. All in all, these previous findings suggest this community was an ancient international center of commerce that possessed great wealth.
Archaeologists discovered the cone-shaped awl in the grave of a woman who was about 40 years old when she died, and who had a belt around her waist made of 1,668 ostrich-egg shell beads. Several large stones covered the grave, which was dug inside a silo, suggesting both the woman and the silo were considered special.
The copper awl is about 1.6 inches (4.1 centimeters) long, about 0.2 inches (5 mm) wide at its base and just 0.03 inches (1 mm) wide at its tip. It was set in a wooden handle, and since it was buried with her, the researchers suggest the awl may have belonged to the woman.
"The appearance of the item in a woman's grave, which represents one of the most elaborate burials we've seen in our region from that era, testifies to both the importance of the awl and the importance of the woman, and it's possible that we are seeing here the first indications of social hierarchy and complexity," study co-author Danny Rosenberg, an archaeologist at the University of Haifa in Israel, said in a statement.
Before this discovery, the earliest pieces of evidence for metal use in the ancient Near East were found in the southern Levant and included copper artifacts from the Nahal Mishmar cave and gold rings found inside the Nahal Qanah cave dating from 4500 B.C. to 3800 B.C. The awl suggests people in the area started using metals as early as 5100 B.C., several centuries earlier than previously thought. Chemical analysis of the copper also revealed it probably came from about 620 miles (1,000 kilometers) away, in the Caucasus region. This discovery suggests people in this area originally imported metal artifacts and only later created them locally.
Read more at Discovery News
The Bird That Builds Nests So Huge They Pull Down Trees
At some point, a tree becomes more nest than tree. That sounds like the kind of proverb that old guy at my local bar would tell me.
For its size (and lack of opposable thumbs) though, Africa’s incredible social weaver surely comes close. These birds, about the size of the sparrows here in the States, come together in colonies of as many as 500 individuals to build by far the most enormous nests on Earth, at more than 2,000 pounds and 20 feet long by 13 feet wide by 7 feet thick. The structures are so big they can collapse the trees they’re built in, and so well-constructed they can last for a century, according to Gavin Leighton, a biologist at the University of Miami. Occupying as many as 100 chambers, these are quite possibly the biggest vertebrate societies centered around a single structure—outside of human beings and their skyscrapers, of course.
The social weaver with some building material. Or is that a tiny cigar. I can’t tell.
Social weavers build entrances to their nests at the bottom, which makes them more inaccessible to predators other than the dreaded tape-measure-handed human being.
The social weaver’s setup is so sweet that other species are more than happy to squat in their digs. African vultures are fond of hanging out on the roof, and red-headed finches will raise their families in the chambers—as do pygmy falcons, birds of prey more than capable of killing their landlords. Indeed, while it’s not common, “it has been documented that pygmy falcons will sometimes eat social weavers,” said Leighton. “Which is kind of a depressing thought, because social weavers are building and maintaining this giant apartment complex, and then a predator moves in and starts eating them.”
A pygmy falcon just chillin’ and looking cool as hell.
That’s not to say social weavers don’t stand up for themselves. Should a snake wander up the tree or a pygmy falcon land on their roof, they’ll crowd around the menace and fire out high-pitched alarm calls—incessantly. While this may scare away smaller birds, it does little good against falcons and serpents. Curiously, the weavers won’t aggressively take to the air and flock to scare away their enemies, as crows or mockingbirds do. Why that is, Leighton can’t say for sure.
Read more at Wired Science
Aug 21, 2014
Sphinxes Emerge From Huge Ancient Greek Tomb
Two headless sphinxes emerged from a massive burial site in northern Greece as archaeologists began removing large stones from the tomb’s sealing wall.
The headless, wingless 4.8-foot-high sphinxes each weigh about 1.5 tons and bear traces of red coloring on their feet. They would have been 6.5 feet high with their heads, the Greek Culture Ministry said in a statement.
The statues are believed to have been placed there to guard the burial, which is the largest tomb ever uncovered in Greece.
The tomb dates back to around 325-300 B.C., at the end of the reign of warrior-king Alexander the Great. It lies in the ancient city of Amphipolis, in Greece’s northeastern Macedonia region about 65 miles from the country’s second-biggest city, Thessaloniki.
The city, an Athenian colony, was conquered by Philip II of Macedon, Alexander’s father, in 357 B.C.
Prominent generals and admirals of Alexander had links with Amphipolis. Moreover, it’s here that Alexander’s wife Roxana and his son Alexander IV were killed in 311 B.C. on the orders of his successor, King Cassander.
Archaeologists began excavating the site, a huge mound complex, in 2012. They revealed a circular tomb measuring 1,600 feet in circumference, ringed by a 10-foot-high perimeter wall built of marble brought from the island of Thassos.
The burial complex site was possibly built by Dinocrates, a famous architect of the time and a close friend of Alexander. It is 10 times larger than the tomb of Alexander’s father, Philip II, which was discovered in Vergina, central Macedonia, in the 1970s.
A wide path leads to the tomb whose entrance is guarded by the two sphinx statues.
“Pieces of the sphinx’s wings were found at the site, allowing for a full restoration,” the Greek Ministry of Culture said.
“Part of the back of the statue of the Lion of Amphipolis was also unearthed at the site,” the statement said.
Work led by Katerina Peristeri, the archaeologist in charge of the dig, showed that the impressive 16-foot-tall marble lion statue, which now stands on a pedestal three miles from the excavation site, once crowned the monumental tomb.
Much of the tomb was demolished during the Roman occupation of Greece, and several marble blocks were reused to stabilize the banks of the river Strymon. It was right on the river bed that the 4th century B.C. lion and other marble blocks were found in 1912 by the Greek army.
According to the Culture Ministry, the sphinxes and the lion, both in Thassos marble, appear to have been crafted in the same workshop.
“The seated sphinxes — as opposed to the lying sphinxes in Egyptian art — are unusual,” classical archaeologist Dorothy King wrote in her blog.
“The closest parallel I can think of are those from the Hecatomnid Androns at Labraunda, about a quarter of a century earlier,” she said.
Those bearded Hecatomnid figures reflected Persian royal iconography, King noted.
Behind the sphinx-guarded entrance, the archaeologists found a mosaic floor featuring black and white rhombus shapes.
What lies behind the entrance remains a mystery. A geophysical survey carried out last year indicates the interior of the tomb consists of three rooms.
Peristeri’s team hopes to fully explore the burial by the end of the month to determine who was laid to rest there.
Archaeologists have debunked speculation that the body of Alexander the Great lies in the tomb. Alexander, the overlord of an empire stretching from Greece to India, died at Babylon, now in central Iraq, in June of 323 B.C. — just before his 33rd birthday.
His elusive tomb is one of the great unsolved mysteries of the ancient world.
History has it that after Alexander died in Babylon, his body, en route to Macedon, was hijacked by Ptolemy and taken to Egypt. The sarcophagus of the warrior king was then moved from Memphis to Alexandria, the capital of his kingdom, and there it remained until late Antiquity.
Read more at Discovery News
2,800-Year-Old Zigzag Art Found in Greek Tomb
Archaeologists working at the ancient city of Corinth, Greece, have discovered a tomb dating back around 2,800 years that has pottery decorated with zigzagging designs.
The tomb was built sometime between 800 B.C. and 760 B.C., a time when Corinth was emerging as a major power and Greeks were colonizing the coasts of the Mediterranean Sea.
The tomb itself consists of a shaft and burial pit, the pit containing a limestone sarcophagus that is about 5.8 feet (1.76 meters) long, 2.8 feet (0.86 m) wide and 2.1 feet (0.63 m) high. When researchers opened the sarcophagus, they found a single individual had been buried inside, with only fragments of bones surviving.
The scientists found several pottery vessels beside the sarcophagus, and the tomb also contained a niche, sealed with a limestone slab, which held 13 mostly complete vessels.
"The wealth of the occupant here is indicated by the sarcophagus and the large number of vessels," writes a team of researchers in a recent issue of the journal Hesperia. Except for two vessels imported from Athens all the pottery was made in Corinth, the researchers noted.
The vessels were decorated with a variety of designs, including wavy, zigzagging lines and meandering patterns that look like a maze. This style of pottery was popular at the time, and archaeologists often refer to this as Greece's "Geometric" period.
Several centuries later, in Roman times, the tomb was almost destroyed when a wall was built beside it. When archaeologists excavated that wall, they found a limestone column that may have originally served as a grave marker for the tomb.
A group of rulers called the Bacchiadae came to power in Corinth in 747 B.C. (a few decades after the tomb was constructed), ancient records indicate. Those rulers built colonies in modern-day Sicily and Corfu, decisions that helped Corinth increase trade and grow wealthy.
"Once these colonies in the west and northwest had been established, Corinth, because of its favorable geographical location, became the most important trading center for commerce between them and mainland Greece," wrote Elke Stein-Hölkeskamp, an instructor at the University of Münster in Germany, in a paper published in the book "A Companion to Archaic Greece" (Wiley-Blackwell, 2009).
Read more at Discovery News
Children With Autism Have Extra Synapses
The brain is a formidable computer and, like any computer, it relies on complex wiring — in the form of synapses — to transmit information. For children with autism, a key issue may be a lack of pruning of the brain's wiring. New research has confirmed that autistic children have an overabundance of synapses in their brains.
While more may seem like a good thing, the surplus is thought to cloud the ability of the brain's neurons to connect and communicate with one another.
A surge of synapses forms during infancy, particularly in the cortex, a region linked to autistic behaviors. By late adolescence, about half of these cortical synapses are pruned. The new research, by neuroscientists at Columbia University Medical Center (CUMC), confirmed that this process is inhibited in the brains of autistic children. The research also identified a protein called mTOR, which, when overactive, could play a role in the pruning defect.
"This is an important finding that could lead to a novel and much-needed therapeutic strategy for autism," Jeffrey Lieberman, MD, Lawrence C. Kolb Professor and Chair of Psychiatry at CUMC and director of New York State Psychiatric Institute, said in a press release. Kolb was not involved in the study.
To test the idea that autistic brains prune less than normal brains, co-author Guomei Tang, assistant professor of neurology at CUMC, measured synapse density in the brains of deceased autistic children, including 13 brains from children ages two to nine, and 13 brains from children ages 13 to 20. Twenty-two brains from children without autism were also examined for comparison.
Tang measured synapse density in cortical brain tissue samples by counting the tiny spines that extend from neurons. In the control brains, she found that spine density dropped by about half over that age range. In the autistic brain samples, it dropped by only 16 percent.
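A quick back-of-the-envelope check makes the contrast in those figures concrete. In the sketch below, the density values are illustrative placeholders, not the study's raw counts; only the fractional drop between the younger and older groups matters.

```python
# Fractional spine loss between childhood and late adolescence;
# input densities are hypothetical, chosen to reproduce the quoted drops.
def pruning_fraction(density_young, density_older):
    """Fraction of dendritic spines removed between the two age groups."""
    return (density_young - density_older) / density_young

print(f"control:  {pruning_fraction(1.00, 0.50):.0%} of spines pruned")
print(f"autistic: {pruning_fraction(1.00, 0.84):.0%} of spines pruned")
```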
Taking the research a step further, the team then examined the brains of mice to try to trace the pruning defect. They zeroed in on a protein called mTOR, which, when overactive, inhibits the brain’s self-trimming ability.
"While people usually think of learning as requiring formation of new synapses, the removal of inappropriate synapses may be just as important," the study's senior investigator, David Sulzer, professor of neurobiology in the Departments of Psychiatry, Neurology, and Pharmacology at CUMC, explained in a press release.
The team then tested the drug rapamycin to try to reverse the effects of the overactive mTOR protein, and found it was effective in restoring the brain's pruning behavior even after symptoms had appeared.
This finding, as well as the fact that large amounts of the overactive protein have been found in the brains of autism patients, lends hope that targeting this protein could be an effective therapy against autism.
Read more at Discovery News
Traces of One of Universe's First Stars Detected
An ancient star in the halo surrounding the Milky Way galaxy appears to contain traces of material released by the death of one of the universe's first stars, a new study reports.
The chemical signature of the ancient star suggests that it incorporated material blasted into space by a supernova explosion that marked the death of a huge star in the early universe — one that may have been 200 times more massive than the sun.
"The impact of very-massive stars and their explosions on subsequent star formation and galaxy formation should be significant," lead author Wako Aoki, of the National Astronomical Observatory of Japan, told Space.com by email.
The first stars in the cosmos, known as Population III stars, formed from the hydrogen and helium that dominated the early universe. Through nuclear fusion, other elements were forged in their hearts. At the end of their lifetimes, supernovas scattered these elements into the space around them, where the material was folded into the next generation of stars.
The universe's first massive stars would have been short-lived, so to determine their composition, scientists must examine the makeup of their offspring — stars that formed from the material distributed by their explosive deaths. While numerical simulations have suggested that at least some of the first stars should have reached enormous proportions, no previous observational evidence had managed to confirm their existence.
Aoki and a team of scientists used the Subaru Telescope in Hawaii to perform follow-up observations of a large sample of low-mass stars with low quantities of what astronomers term "metals" — elements other than hydrogen and helium. They identified SDS J0018-0939, an ancient star only 1,000 light-years from Earth.
"The low abundance of heavy elements suggests that this star is quite old — as old as 13 billion years," Aoki said. (Scientists think the Big Bang that created the universe occurred approximately 13.8 billion years ago.)
The chemical composition of SDS J0018-0939 suggests it gobbled up the material blown off of a single massive ancient star, rather than several smaller bodies. If multiple supernovas had provided the material that constructed the star, the "peculiar abundance ratios" in its interior would have been erased, Aoki said.
Volker Bromm of the University of Texas at Austin agrees, saying that SDS J0018 likely evolved from the material of a single star, which could have been more than 200 times as massive as the sun.
Bromm, who has performed theoretical studies on the properties of the first generation of stars and their supernova explosions, did not participate in the new study. He authored a corresponding "News & Views" article that appeared with the research online today (Aug. 21) in the journal Science.
Signs of low-mass first-generation stars have proved more plentiful: their descendants, which contain large amounts of carbon and other light elements, turn up fairly often, but until these results scientists had detected no traces of their very massive siblings. That scarcity suggested that low-mass stars were more numerous in the early universe.
"We have come to understand that the first stars had a range of masses, from a few solar masses, all the way up to 100 solar masses, or even more," Bromm told Space.com via email. "The typical, or average, mass is predicted to be somewhere close to a few tens of solar masses.".
Massive stars burn through their material far faster than their lower-mass relations, so no high-mass first-generation stars should survive today. But Aoki suggested that smaller ones could still be visible.
"In the Milky Way, low-mass Population III stars, which have sufficiently long lifetimes, can be found if they have formed at all," he said.
Such stars would be difficult to detect. According to Bromm, their radiation would have been shifted by the expanding universe into near-infrared wavelengths, which require sensitive space-based detectors.
"This is one of the main targets for [NASA's] James Webb Space Telescope (JWST), planned for launch in 2018," Bromm said.
More massive stars, such as the one that preceded SDS J0018-0939, would be short-lived, so scientists would have to search back to the early universe. Because distance and time are related — observing a star that is 13 billion years old requires looking out a distance of 13 billion light-years — the search would require a massive and extraordinarily sensitive telescope, such as the upcoming Thirty Meter Telescope and the Giant Magellan Telescope.
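The distance-time equivalence invoked here can be checked with a standard cosmology library. The sketch below is a hedged illustration using astropy's built-in Planck 2013 parameters; the redshift z = 7 is a hypothetical choice, picked because its lookback time lands near the 13-billion-year figure quoted in the story.

```python
# Lookback time for an illustrative high-redshift source; z = 7 is an
# assumption for this sketch, not a value from the study.
from astropy.cosmology import Planck13

z = 7.0
t_lb = Planck13.lookback_time(z)  # roughly 13 Gyr for z = 7
print(f"lookback time to z={z}: {t_lb:.1f}")
# The light has traveled for t_lb, so its light-travel distance in
# billions of light-years is numerically the lookback time in Gyr:
print(f"light-travel distance: ~{t_lb.value:.1f} billion light-years")
```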
In addition to detecting early stars, JWST should also be able to detect the supernovas that mark the end of their lifetimes, Bromm said.
Read more at Discovery News
Aug 20, 2014
Tiny Jurassic Mammals Were Picky Eaters
In the Jurassic Period, when dinosaurs ruled the land, tiny mammals probably had to keep a low profile and survive by gobbling any insects they could find, but new research suggests these early mammals may have been pickier eaters than scientists previously thought.
The Jurassic lasted from about 201 million years ago to 145 million years ago. During this time, dinosaurs were in their heyday, but shrew-size mammals and their immediate ancestors were just starting to emerge.
The Jurassic is also when early mammals began developing better hearing and their teeth began evolving to enable more precise chewing. New techniques of fossil analysis have revealed that some of these mammals were not general insectivores as scientists had believed, but instead probably ate a unique and selective diet of insects.
A team of researchers looked at 200 million-year-old mammal jawbones and teeth discovered in Glamorgan, in South Wales. Based on fossil analysis, the team determined that two of the earliest mammal groups, Morganucodon and Kuehneotherium, likely were discerning about the types of insects they ate.
"None of the fossils of the earliest mammals have the sort of exceptional preservation that includes stomach contents to infer diet, so instead we used a range of new techniques which we applied to our fossil finds of broken jaws and isolated teeth," study lead author Pamela Gill, a research associate in the School of Earth Sciences at the University of Bristol, in the United Kingdom, said in a statement. "Our results confirm that the diversification of mammalian species at the time was linked with differences in diet and ecology."
By examining the microscopic scratches and pits in the fossilized teeth, Gill and her colleagues determined that Morganucodon consumed mostly harder and crunchier bugs like beetles, while Kuehneotherium picked softer insects like scorpion flies. This same technique is used to examine the teeth of present-day bats that feed on insects.
"This is the first time that tooth wear patterns have been used to analyze the diet of mammals this old," Mark Purnell, a professor of palaeobiology at the University of Leicester, in the U.K., said in a statement. "That their tooth wear compares so closely to bats that specialize on different kinds of insects gives us really strong evidence that these early mammals were not generalists when it came to diet, but were quite definite in their food choices."
Read more at Discovery News
Rock-Eating Microbes Found in Buried Antarctic Lake
A large and diverse family of hardy rock-eating bacteria and other microorganisms lives in a freshwater lake buried a half-mile beneath Antarctic ice, new research confirms.
The finding not only adds another extreme environment where life thrives on Earth, but raises the prospect that similar species could have lived or are still living on Mars.
NASA’s ongoing Curiosity rover mission, for example, already has found that the planet most similar to Earth in the solar system once had the chemical constituents needed to support microbial life.
The new research, published in this week’s Nature, confirms initial studies 20 years ago that found microbes in refrozen water samples retrieved from Lake Vostok, the largest subglacial Antarctic lake.
Scientists at that time were not on a life-hunting expedition and the indirect sampling process later raised questions about those results.
“People weren’t really thinking about ecosystems underneath the ice. The conventional wisdom was that they don’t exist, it’s a place that’s too extreme for this kind of thing,” Louisiana State University biologist Brent Christner told Discovery News.
For the new study, Christner and colleagues analyzed samples directly retrieved from another subglacial lake, known as Lake Whillans, which lies beneath about a half-mile of ice on the lower portion of the Whillans Ice Stream in West Antarctica.
The lake is part of an extensive and evolving subglacial drainage network, Christner noted in the Nature paper.
Scientists discovered at least 3,931 microbial species or groups of species in the lake waters, many of which use inorganic compounds as an energy source.
With little surface melt in the area, it is unlikely that water has made its way through the half-mile of ice to reach the lake. Instead, scientists believe the water comes from geothermal heating at the base of the lake and through frictional melting during ice flows.
Any microbes in the water, therefore, most likely survive on energy and nutrients from melting ice, crushed rock, sediment beneath the ice and recycling of materials from dead micro-organisms, glaciologist Martyn Tranter, with the University of Bristol in the United Kingdom, wrote in a related paper in Nature.
“What I find about icy environments on Earth is that, potentially, they are very similar to other icy environments, for example on Mars,” Tranter told Discovery News.
“Conditions are right (on Mars) for there to be liquid water at the bed. The right types of rocks are present which contain reduced (compounds) and if there are oxidizing agents present, then microbes can make a living shuttling electrons between reduced compounds and oxidized compounds,” he said.
Mars scientist Christopher McKay, with NASA’s Ames Research Center in Moffett Field, Calif., isn’t convinced.
“I don't like to be unenthusiastic about these results but I don't see much of any implication for Mars or the ice-covered oceans of the outer solar system,” McKay wrote in an email to Discovery News.
“First it is clear that the water sampled is from a system that is flowing through ice and out to the ocean. Second, and related to this, the results are not indicative of an ecosystem that is growing in a dark nutrient-limited system. They are consistent with debris from the overlying ice -- known to contain micro-organisms -- flowing through and out to the ocean,” McKay said.
“Interesting in its own right, but not a model for an isolated ice-covered ecosystem,” he added.
Scientists don’t know how long ice sheets have covered Antarctica. “Some folks think that within the last half-million years maybe the West Antarctica ice sheet melted away to not very much,” Tranter said.
Read more at Discovery News
How the Human Brain Gets Its Wrinkles
The reason our brains have that wrinkly, walnut shape may be that the rapid growth of the brain's outer layer -- the gray matter -- is constrained by the white matter, a new study shows.
Researchers found that the particular pattern of the ridges and crevices of the brain's convoluted surface, which are called gyri and sulci, depends on two simple geometric parameters: the gray matter's growth rate and its thickness. The development of the brain's wrinkles can be mimicked in a lab using a double-layer gel, according to the study published today (Aug. 18) in the journal Proceedings of the National Academy of Sciences.
The researchers noted that along with these physical constraints, genes also have a role in determining the brain's shape, because they regulate how neurons proliferate and migrate to their destinations.
All mammalian species have similar layering in the brain's outer layer -- the cortex -- but only larger mammals have a cortex that is folded. For example, a rat brain has a smooth surface, whereas a considerably larger brain, such as a human's, has tens of gyri and sulci. A folded brain surface has a greater surface area -- which means greater power for processing information -- but it's not entirely clear what factors determine the iconic shape of gyri and sulci in the human brain.
Knowing how the brain develops into its folded shape could help scientists better explain what happens in people with congenital conditions such as polymicrogyria (a condition characterized by an excessive number of folds), pachygyria (a condition with unusually thick folds) and lissencephalia (a smooth brain condition, without folds).
Historically, there have been three broad ideas about how gyri and sulci develop. One idea is that some areas of the cortex simply grow more and rise above other areas, creating the gyri. Another idea is that groups of highly interconnected neurons in the cortex are mechanically pulled close to each other by the threadlike axons that make up the white matter. However, evidence suggests that neither of these two ideas is correct.
The third idea is that the gray matter grows more than the white matter, leading to a "buckling" that gives the cortex its shape, the researchers said.
But earlier attempts to model this buckling were not successful, the researchers said. In previous studies, researchers assumed that the gray matter is a thin, stiff layer growing atop a thick, soft base of white matter, but this assumption yielded wrinkles that aren't like the ones in real human brains.
In the new study, the researchers assumed that the gray and white matter have similar stiffness, but different growth rates. Using mathematical simulations, they showed that depending on the size of the brain, their model results in different shapes of brain surfaces. For example, for a small brain with a diameter of less than half an inch, the brain surface is predicted to be smooth. Intermediate-size brains are predicted to have some sulci that are found within the gray matter, and larger brains become highly folded, with sulci penetrating the white matter.
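The buckling picture lends itself to a quick plausibility check. For a stiff film growing on a softer substrate, the classical wrinkling result predicts a fold spacing of about 2πt(E_f/3E_s)^(1/3), where t is the thickness of the growing top layer and E_f and E_s are the stiffnesses of film and substrate. The short Python sketch below is our own back-of-the-envelope illustration, not the study's simulation; the roughly 3-millimeter human cortical thickness and the equal-stiffness ratio (which the new study argues for) are inputs we supply.

    import math

    def wrinkle_wavelength(t_mm, stiffness_ratio=1.0):
        # Classical film-on-substrate wrinkling wavelength, in mm.
        # t_mm: thickness of the growing top layer (gray matter), in mm.
        # stiffness_ratio: E_film / E_substrate; ~1 for gray vs. white matter.
        return 2 * math.pi * t_mm * (stiffness_ratio / 3.0) ** (1.0 / 3.0)

    # Assumed input, not from the article: human cortex is roughly 3 mm thick.
    print(f"predicted fold spacing: ~{wrinkle_wavelength(3.0):.0f} mm")
    # Prints ~13 mm, in the same ballpark as the centimeter-scale
    # spacing of gyri on a real human brain.

Even this crude estimate lands near the observed spacing of human gyri, which is part of what makes the mechanical-buckling explanation attractive.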
Read more at Discovery News
Neanderthals and Humans Overlapped for 5,400 Years
The situation might not have been pretty, but Neanderthals and Homo sapiens were both living in Europe at the same time for around 5,400 years, according to a new study that has many other implications.
For starters, it’s now possible that Neanderthals and our species mated and otherwise interacted for some 20,000 years.
“Significant interbreeding between Neanderthals and early modern humans had probably already occurred in Asia more than 50,000 years ago, so the dating evidence now indicates that the two populations could have been in some kind of contact with each other for up to 20,000 years, first in Asia then later in Europe,” Chris Stringer, research leader in Human Origins at the Natural History Museum in London, explained.
“This may support the idea that some of the changes in Neanderthal and early modern human technology after 60,000 years ago can be attributed to a process of acculturation between these two human groups,” Stringer said.
For the study, published in the latest issue of the journal Nature, project leader Thomas Higham of the University of Oxford and his colleagues obtained new radiocarbon dates for around 200 samples of bone, charcoal and shell from 40 key European archaeological sites ranging from Russia in the east to Spain in the west.
The sites were either previously linked to the Neanderthal tool-making industry, known as Mousterian, or were so-called “transitional” sites containing stone tools associated with either our species or Neanderthals.
The results showed that both human groups overlapped for a significant period, giving what Higham and his team say was “ample time” for interaction and interbreeding.
We already know the latter happened.
Stringer said, “Neanderthals are our closest-known relatives, and research has recently shown that nearly all humans alive today have a small percentage of Neanderthal DNA in their genomes. This interbreeding probably occurred soon after small groups of early modern humans began to leave their African homeland about 60,000 years ago.”
The “small percentage” isn’t necessarily because so few interbred. Other studies have concluded that one-fifth (and possibly more) of the Neanderthal genome survives in modern humans as a whole, influencing skin color, hair color and texture, and other traits.
As for what happened to the Neanderthals afterward, researchers still aren’t entirely sure. The new chronology established by the paper suggests that Neanderthals may have survived in dwindling populations in pockets of Europe before they became extinct.
Read more at Discovery News
How Do Black Holes Form? Clue Found
Black holes are some of the strangest objects in the universe, and they typically fall into one of two size extremes: "small" ones that are dozens of times more massive than the sun, and "supermassive" ones that are billions of times more massive than our nearest star. But until now, astronomers had not seen good evidence of anything in between.
A recent discovery of an intermediate-mass black hole in the nearby galaxy Messier 82 (M82) offers the best evidence yet that a class of medium-size black holes exists. The finding may provide a missing link that could explain how supermassive black holes — which are found at the centers of most, if not all, galaxies — come to be, researchers say.
"We know that supermassive black holes exist at the centers of almost every massive galaxy, but we don't know how form," said Dheeraj Pasham, an astronomy graduate student at the University of Maryland, College Park, who led the research.
Insatiable giants
A black hole is a region of space where the gravitational field is so strong that neither matter nor light can escape. Though it can't be seen directly, astronomers can infer a black hole's existence by the way its gravity tugs on nearby matter, and from the radiation it spews out as bits of material falling into the black hole rub against one another, producing friction.
Astronomers have detected stellar-mass black holes, which are 10 to 100 times the mass of the sun, and supermassive black holes, which are hundreds of thousands to billions of solar masses. But the intermediate-mass variety has proved very difficult to detect, causing some to doubt their existence.
The recently identified medium-size specimen has a mass about 400 times that of the sun (give or take 100), according to the study published Sunday (Aug. 17) in the journal Nature. Scientists had hypothesized that such intermediate black holes existed, but this is the first time that one has been measured so precisely, the researchers said.
Astronomers know how stellar-mass black holes form: A massive star collapses under its own gravity. But such a process would seem unable to explain how much larger black holes arise: black holes can gobble up material only at a rate capped by the Eddington limit, and the universe isn't old enough for them to have grown from stellar mass to supermassive, said Cole Miller, an astronomer also at the University of Maryland.
"If you feed matter to the black hole too fast, it produces so much radiation that it blows away the matter that's trying to ," Miller told Live Science.
Building a black hole
How, then, might supermassive black holes form? Some theories suggest these strange behemoths grew from intermediate-mass black holes — which act as "seeds" — that formed in the early stages of the universe from the collapse of giant clouds of gas.
Others say these black hole giants started out as stellar-mass black holes that somehow gobbled up material at a rate much faster than the typical limit.
Miller has theorized that maybe a dense cluster of stars merged in the early universe, "colliding with each other and sticking together like wet clay," producing a black hole that gathers mass at a rate exceeding the normal limit. "If you can evade that limit, you might be able to build bigger black holes," he said.
Priyamvada Natarajan, a theoretical physicist at Yale University in New Haven, Connecticut, and her colleagues recently developed a new theoretical concept that suggests it is possible to grow black holes from a stellar mass seed faster than the Eddington limit, if the seed is trapped in a star cluster feeding off cold, flowing gas. The research was detailed Aug. 7 in the journal Science.
Read more at Discovery News
Aug 19, 2014
8,000-year-old mutation key to human life at high altitudes: Study identifies genetic basis for Tibetan adaptation
In an environment where others struggle to survive, Tibetans thrive in the thin air of the Tibetan Plateau, with an average elevation of 14,800 feet. A study led by University of Utah scientists is the first to find a genetic cause for the adaptation -- a single DNA base pair change that dates back 8,000 years -- and demonstrate how it contributes to the Tibetans' ability to live in low oxygen conditions.
The work appears online in the journal Nature Genetics on Aug. 17, 2014.
"These findings help us understand the unique aspects of Tibetan adaptation to high altitudes, and to better understand human evolution," said Josef Prchal, M.D., senior author and University of Utah professor of internal medicine.
The story behind the discovery is as much about cultural diplomacy as it is about scientific advancement. Prchal traveled several times to Asia to meet with Chinese officials, and with representatives of exiled Tibetans in India, to obtain permission to recruit subjects for the study. But he quickly learned that without the trust of the Tibetans, his efforts were futile. Wary of foreigners, they refused to donate blood for his research.
After returning to the U.S., Prchal couldn't believe his luck upon discovering that a native Tibetan, Tsewang Tashi, M.D., had just joined the Huntsman Cancer Institute at the University of Utah as a clinical fellow. When Prchal asked for his help, Tashi quickly agreed. "I realized the implications of his work not only for science as a whole but also for understanding what it means to be Tibetan," said Tashi. In another stroke of luck, Prchal received a long-awaited letter of support from the Dalai Lama. The two factors were instrumental in earning the Tibetans' trust: more than 90 Tibetans, both from the U.S. and abroad, volunteered for the study.
Their hard work was worth it, for the Tibetans' DNA had a fascinating tale to tell. About 8,000 years ago, the gene EGLN1 changed by a single DNA base pair. Today, a relatively short time later on the scale of human history, 88% of Tibetans have the genetic variation, and it is virtually absent from closely related lowland Asians. The findings indicate the genetic variation endows its carriers with an advantage.
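The speed of that spread implies a strong fitness advantage, and standard population genetics makes it easy to ballpark. Under the usual logistic approximation, an allele's frequency odds p/(1 − p) grow as e^(st) over t generations, where s is the per-generation selective advantage. The Python sketch below is our illustration only: the 1 percent starting frequency and the 25-year generation time are assumptions we supply, not figures from the study.

    import math

    def selection_coefficient(p0, p1, generations):
        # Per-generation advantage s needed to move an allele from
        # frequency p0 to p1, using the logistic approximation:
        # ln(odds(p1) / odds(p0)) = s * generations.
        odds = lambda p: p / (1 - p)
        return math.log(odds(p1) / odds(p0)) / generations

    # Assumed: the variant starts at 1% and reaches the reported 88%
    # over 8,000 years, at ~25 years per generation (320 generations).
    s = selection_coefficient(0.01, 0.88, 8000 / 25)
    print(f"implied advantage: ~{s:.1%} per generation")
    # Prints ~2.1% per generation -- a very strong signal of selection.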
Prchal collaborated with experts throughout the world to determine what that advantage is. In those without the adaptation, low oxygen causes the blood to become thick with oxygen-carrying red blood cells -- an attempt to feed starved tissues -- which can cause long-term complications such as heart failure. The researchers found that the newly identified genetic variation protects Tibetans by decreasing this over-response to low oxygen.
These discoveries are but one chapter in a much larger story. The genetic adaptation likely causes other changes to the body that have yet to be understood. Plus, it is one of many as-yet-unidentified genetic changes that collectively support life at high altitudes.
Read more at Science Daily
Study of African dust transport to South America reveals air quality impacts
A new study that analyzed concentrations of African dust transported to South America shows large seasonal peaks in winter and spring. These research findings offer new insight on the overall human health and air quality impacts of African dust, including the climate change-induced human health effects that are expected to occur from increased African dust emissions in the coming decades.
Researchers from the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science and colleagues analyzed the dust concentrations in aerosol samples from two locations, French Guiana's capital city Cayenne and the Caribbean islands of Guadeloupe, to understand the amount, source regions, and seasonal patterns of airborne dust that travels across the North Atlantic Ocean.
The study showed clear seasonal cycles at both locations -- with peak concentrations at Cayenne from January to May and at Guadeloupe from May to September. In addition, the results showed that dust concentrations during peak periods exceeded World Health Organization (WHO) air quality guidelines: on Guadeloupe the guidelines were exceeded on 258 of 2,799 days (9.2%), and at Cayenne on 246 of 2,765 days (9.0%).
"The dust concentrations measured on Cayenne were far greater than any those of any major European city from pollutants," said Joseph Prospero, UM Rosenstiel School professor emeritus and lead author of the study. "The fine-particle dust concentrations exceed the WHO air quality standard and could have broader implications on respiratory health throughout the region, including in the Caribbean and the southeastern United States."
Persistent winds across Africa's 3.5-million square mile Sahara Desert lifts mineral-rich dust into the atmosphere where it travels the more than 5,000-mile journey towards the U.S., South America and Caribbean. Seasonal dust plumes are linked to changes in dust source regions and changes in largescale weather patterns. The dust can penetrate deep into the human respiratory system due to its fine particle size, according to Prospero.
Read more at Science Daily
Speed Limits Could Save Rarest Dragonfly
SACRAMENTO, Calif. — Slow down, drivers. You could save America's rarest dragonfly.
The Hine's emerald dragonfly is the only dragonfly on the federal endangered species list. The insect's largest remaining population lives in Door County, Wisconsin, where sandy beaches and cherry and apple orchards draw tourists from Green Bay and beyond.
A 2003 study found these summer drivers kill about 3,300 Hine's emerald dragonflies each year, said Amber Furness, a University of South Dakota graduate student. No one knows exactly how many Hine's emerald dragonflies are left, but there are at least 10,000 in Door County and up to 3,000 in the Chicago region.
Door County has posted two dragonfly warning signs on roads near critical habitat areas. But can drivers really safely avoid a dragonfly at highway speeds, or even spot one from inside a car?
Searching for a better solution, the South Dakota researchers decided to see if dragonfly death rates were linked with speed.
Furness, a conservation biologist working with USD professor Daniel Soluk, mounted GoPro cameras on a pickup truck and drove the Door County roads in 2012 and 2013, varying her speed from 15 mph (24 km/h) to 55 mph (88 km/h) in increments of 10 mph (16 km/h). The cameras picked up each dragonfly's position before impact. Every time Furness hit a dragonfly, she tried to collect the carcass and verify the kill (a screen kept the insects out of the truck's grille).
At speeds below 35 mph (56 km/h), Hine's emerald dragonflies — and other kinds of dragonflies — survive their tumble over the hood and fly away to live another day, Furness found. Faster speeds kill, according to Furness' research, presented here Thursday (Aug. 14) at the Ecological Society of America's annual meeting. The dragonflies are either killed on impact, or they suffer severe shock, fall to the ground and are run over by a second vehicle.
Furness plans to publish the results of her research in a scientific journal, so it can serve as a reference for road planners. Her work was partially funded by the Illinois State Toll Highway Authority, which has already altered a bridge to protect the Hine's emerald dragonfly, raising a bridge span on Interstate 355 so the dragonflies can avoid collisions with cars.
"Insects are important too, and there are safer speeds that we can drive to try not to deplete their populations," Furness told Live Science.
While there is no speed that will guarantee a kill-free roadway, a 30 mph (48 km/h) limit would mean a much lower probability of deadly collisions, Furness said.
Read more at Discovery News
More than 100,000 Elephants Killed in 3 Years
The insatiable demand for ivory is causing a dramatic decline in the number of African elephants. Poachers are hunting the animal faster than it can reproduce, with deaths affecting more than half of elephant families in the Samburu National Reserve in Kenya, a new study finds.
In 2011, the worst African elephant poaching year on record since 1998, poachers killed an estimated 40,000 elephants, or about 8 percent of the elephant population in Africa. In the absence of poaching, African elephant populations grow about 4.2 percent each year, the researchers found based on detailed records from Samburu.
African elephants are an intelligent species: individuals cooperate with one another and console one another in times of distress. But people unfortunately prize their ivory tusks, said the study's lead researcher, George Wittemyer, an assistant professor of fish, wildlife and conservation biology at Colorado State University.
Wittemyer has studied African elephants in Kenya for the past 17 years, monitoring their complex social lives. In 2009, a drought led to the deaths of about 12 percent of elephants in Kenya. The animals' numbers dropped further when a wave of poaching, which has been ongoing since that year, upset the population.
"Sadly, in 2009, we had a terrible drought, and we started seeing a lot of illegal killing of elephants as well as natural deaths," Wittemyer told Live Science. "We've been struggling to respond. We've been trying to find solutions to dampen the illegal killing."
His team used data on natural deaths versus poaching deaths in the Samburu National Reserve in Kenya, and then applied these numbers to a continent-wide database called MIKE, or Monitoring the Illegal Killing of Elephants. Started in 2002, MIKE is maintained by communities across Africa that report when, where and how elephants die.
The researchers created two computer models: one that looked at 12 MIKE sites with the best carcass data, and a second that examined all 306 sites, even those with less information about elephant deaths. The researchers did not include areas in West Africa, which is home to about 2 percent of the African elephant population, because data there are sparse, Wittemyer said.
In the past 10 years, elephant numbers at the 12 sites have decreased by 7 percent, which takes into account that elephant numbers were mostly increasing until 2009. Elephants in central Africa decreased by more than 60 percent in the past 10 years, according to an analysis of three locations in the 12-site model. Poaching is so widespread that 75 percent of elephant populations across the continent have been declining since 2009, with only 25 percent showing stable or increasing numbers, Wittemyer said.
"Alarming increases in illegal killing for ivory are driving African elephants rapidly into extinction," said Peter Leimgruber, a conservation biologist at the Smithsonian Conservation Biology Institute, who was not involved in the study.
Poaching rates for ivory are unsustainable and exceed the natural growth rate of wild elephants, Leimgruber said. "This means that elephant populations currently decline by nearly 60 to 70 percent every 10 years, making it likely for the species to go extinct in the near future if poaching and the illegal ivory trade are not stopped," he said.
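The decade-scale figures follow from simple compounding of birth and poaching rates. The Python sketch below is our own arithmetic illustration, not the study's model; it assumes the rates stay constant from year to year.

    def decade_change(growth=0.042, poaching=0.08, years=10):
        # Fractional population change after `years`, compounding a
        # constant intrinsic growth rate against a constant poaching rate.
        return (1 + growth - poaching) ** years - 1

    # 2011-level poaching (8% killed) against 4.2% natural growth:
    print(f"{decade_change(poaching=0.080):+.0%} over a decade")  # about -32%
    # Sustained poaching near 14% per year would produce the 60-70%
    # per-decade collapses Leimgruber describes for the hardest-hit regions:
    print(f"{decade_change(poaching=0.142):+.0%} over a decade")  # about -65%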
Much of the ivory demand comes from China and Southeast Asia. Many people see ivory as a status symbol and an artistic investment, especially for religious renditions, whereas others turn to ivory for mass-consumption products, such as bracelets and chopsticks, Wittemyer said.
A similar ivory boom in the late 1970s and 1980s tapered off when 115 countries opted to ban the international trade of ivory in 1989. Today, researchers hope that conservation organizations, as well as high-profile advocates such as Chinese basketball player Yao Ming, will help to stem the ivory demand.
Poachers killed an average of 33,630 elephants every year from 2010 to 2012, resulting in more than 100,000 deaths across the continent, the study found. Illegal killings across Africa decreased somewhat in 2010, but they were still higher than pre-2009 levels, the researchers reported. As more elephants are poached, the number of governmental seizures of illegal ivory increases, and the black market price of ivory goes up.
Read more at Discovery News
Aug 18, 2014
Evolutionary misfit: Misunderstood worm-like fossil finds its place in the Tree of Life
One of the most bizarre-looking fossils ever found -- a worm-like creature with legs, spikes and a head difficult to distinguish from its tail -- has found its place in the evolutionary Tree of Life, definitively linking it with a group of modern animals for the first time.
The animal, known as Hallucigenia due to its otherworldly appearance, had been considered an 'evolutionary misfit' as it was not clear how it related to modern animal groups. Researchers from the University of Cambridge have discovered an important link with modern velvet worms, also known as onychophorans, a relatively small group of worm-like animals that live in tropical forests. The results are published in the advance online edition of the journal Nature.
The affinity of Hallucigenia and other contemporary 'legged worms', collectively known as lobopodians, has been very controversial, as a lack of clear characteristics linking them to each other or to modern animals has made it difficult to determine their evolutionary home.
What is more, early interpretations of Hallucigenia, which was first identified in the 1970s, placed it both backwards and upside-down. The spines along the creature's back were originally thought to be legs, its legs were thought to be tentacles along its back, and its head was mistaken for its tail.
Hallucigenia lived approximately 505 million years ago during the Cambrian Explosion, a period of rapid evolution when most major animal groups first appear in the fossil record. These particular fossils come from the Burgess Shale in Canada's Rocky Mountains, one of the richest Cambrian fossil deposits in the world.
Looking like something from science fiction, Hallucigenia had a row of rigid spines along its back, and seven or eight pairs of legs ending in claws. The animals were between five and 35 millimetres in length, and lived on the floor of the Cambrian oceans.
A new study of the creature's claws revealed an organisation very close to those of modern velvet worms, where layers of cuticle (a hard substance similar to fingernails) are stacked one inside the other, like Russian nesting dolls. The same nesting structure can also be seen in the jaws of velvet worms, which are no more than legs modified for chewing.
"It's often thought that modern animal groups arose fully formed during the Cambrian Explosion," said Dr Martin Smith of the University's Department of Earth Sciences, the paper's lead author. "But evolution is a gradual process: today's complex anatomies emerged step by step, one feature at a time. By deciphering 'in-between' fossils like Hallucigenia, we can determine how different animal groups built up their modern body plans."
While Hallucigenia had been suspected to be an ancestor of velvet worms, definitive characteristics linking them together had been hard to come by, and their claws had never been studied in detail. Through analysing both the prehistoric and living creatures, the researchers found that claws were the connection joining them together. Cambrian fossils continue to produce new information on the origins of complex animals, and the use of high-end imaging techniques and data on living organisms further allows researchers to untangle the enigmatic evolution of the earliest creatures.
Read more at Science Daily
Jesus Statue Found to Have Real Human Teeth
A Jesus statue that has lived an unassuming life in a small town in Mexico for the last 300 years has been hiding a strange secret: real human teeth.
Exactly how the statue of Jesus awaiting punishment prior to his crucifixion got its set of choppers is a mystery.
But the statue may be an example of a tradition in which human body parts were donated to churches for religious purposes, said Fanny Unikel Santoncini, a restorer at the Escuela Nacional de Restauración, Conservación y Museografía at the Instituto Nacional de Antropología E Historia (INAH) in Mexico, who first discovered the statue's teeth.
"We have to remember that these people were very, very religious. They believed absolutely that there was a life after death and this was important for them," Unikel told Live Science.
Unassuming appearance
At first glance, the Christ of Patience — which depicts a seated, bloody Jesus gazing sadly off into the distance — doesn't look that different from statues found throughout the country. The painted wooden figure, which dates to the 17th or 18th century, wears human clothes and a wig, and was sculpted with a blend of European techniques and local materials, Unikel said.
"In Mexico, there are many statues like this — not only Christ, but Saints, the Virgin Mary," Unikel said.
Using human and animal body parts for statues isn't unusual either. People routinely donated hair to serve as wigs for statues, and artists often used nails fashioned from the shaved horns of bulls, she said.
Statues in Mexico are known to include false teeth made from animal bone — either with all the teeth carved from one solid piece of bone, or with individual, square-shaped teeth. A statue of the devil may be given a set of dog's teeth, and the team has even restored a baby Jesus statue with two baby rabbit teeth sprouting from its gums, Unikel said. But though there were rumors about a few statues containing human teeth, Unikel had never seen one.
God's teeth
The discovery happened by accident, when the Christ of Patience was taken, along with several other statues, to be restored by the INAH researchers.
As part of their restoration work, Unikel and her colleagues took X-rays of the statues. The anthropologist on the team noticed something unusual: real human teeth.
"We said 'Ah, it's not possible!'" Unikel told Live Science. "She said, 'I am absolutely sure about this.'"
The pearly whites seemed to be in good condition, with even the roots present. The finding is all the stranger because someone donated such healthy teeth even though the statue's mouth is barely open, and the teeth aren't visible unless someone peers inside, Unikel said.
Finding the owner
The teeth could have come from living or dead people, but with no available documents describing the statue, scientists and restorers will have a tough time tracking down the original owner. One possibility is that a particularly devout parishioner, or even many different people, donated the teeth. Another possibility is that someone extracted the teeth from an unwilling victim, but if so, the sculptor would never have revealed that fact, Unikel said.
Donating body parts to a church or religious cause was common practice during the late 17th and 18th centuries. For instance, the Bishop of Guadalajara, Obispo Manuel Fernández de Santa Cruz, donated his heart to the nuns of Convento de Santa Mónica de Puebla after he died in 1699. The heart was visible in a monstrance that only the nuns could see, Unikel said. And a Spanish government minister, Viceroy Baltazar de Zúñiga, Marquez de Valero also donated his heart to a convent of nuns, she said.
Read more at Discovery News
King Richard III Feasted on Wine and Swans
In the last three years of his life, King Richard III consumed up to three liters of alcohol per day and feasted on swan, egret and heron, analysis of the monarch’s teeth and bones has revealed.
Researchers from the British Geological Survey and the University of Leicester examined changes in chemistry in the bones of the last Plantagenet king, whose remains were found buried beneath a parking lot in the English city of Leicester in 2012.
“We applied multi-element isotope techniques to reconstruct a full life history,” Angela Lamb, isotope geochemist at the British Geological Survey, Richard Buckley from the University of Leicester Archaeological Services, and colleagues wrote in the latest issue of the Journal of Archaeological Science.
Born in Northamptonshire in 1452, Richard became King of England in 1483 at the age of 30, ruling for just two years and two months.
The king, depicted by William Shakespeare as a bloodthirsty usurper, was killed in 1485 at the Battle of Bosworth, the last act of the decades-long fight for the throne known as the Wars of the Roses. He was defeated by Henry Tudor, who became King Henry VII.
The researchers measured the levels of certain elements, such as strontium, nitrogen, oxygen, carbon and lead, that relate to geographical location, pollution and diet, at three locations on the skeleton of Richard III.
They analyzed bioapatite and collagen from sections of two teeth, which formed during childhood and early adolescence, and from two bones: the femur, which represents an average of the 15 years before death, and the rib, which remodels faster and represents between 2 and 5 years of life before death.
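Each of those tissues locks in its isotope signal over a different window of life, which is what lets a single skeleton be read as a rough timeline. A minimal sketch of that sampling logic (illustrative only; the window descriptions paraphrase the study, and the data structure is our own):

# Each sampled tissue records isotope ratios over a different window of life,
# so combining them yields a coarse personal timeline (illustrative sketch;
# the window descriptions paraphrase the study).
tissue_windows = {
    "tooth (enamel and dentine)": "childhood and early adolescence",
    "femur": "an average of the ~15 years before death",
    "rib": "the final ~2-5 years of life",
}

for tissue, window in tissue_windows.items():
    print(f"{tissue}: records {window}")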
“The isotopes initially concur with Richard’s known origins in Northamptonshire, but suggest that he had moved out of eastern England by age seven, and resided further west, possibly the Welsh Marches,” the researchers wrote.
The isotope changes became evident between Richard's femur and rib bones, revealing "a significant shift" in the nitrogen isotope values towards the end of Richard's life, coinciding directly with his time as King of England.
The shift would correspond to an increase in the consumption of luxury items such as game birds (swans, herons and egrets) and freshwater fish.
“The Late Medieval diet of an aristocrat consisted of bread, ale, meat, fish, wine and spices with a strong correlation between wealth and the relative proportions of these, with more wine and spices and proportionally less ale and cereals with increasing wealth,” the researchers said.
Another significant shift was recorded in Richard’s oxygen isotope values, which also rose towards the end of his life.
“As we know he did not relocate during this time, we suggest the changes could be brought about by increased wine consumption,” Lamb and colleagues wrote.
The analysis showed there was a 25 percent increase in Richard’s consumption of wine when he became king.
This would equate to about a bottle of wine per day, in addition to the large quantities of beer most medieval men consumed at the time, giving Richard an overall alcohol consumption of two to three liters per day.
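As a rough sanity check, that arithmetic can be laid out explicitly. The sketch below is ours, not the researchers'; the 0.75-liter bottle size and the assumed daily ale intake are assumptions, not values from the study:

# Back-of-the-envelope check of the reported daily intake. Assumptions
# (ours, not the study's): a standard wine bottle holds about 0.75 L, and
# a medieval man drank roughly 1.25-2.25 L of ale per day.
WINE_BOTTLE_L = 0.75
ALE_LOW_L, ALE_HIGH_L = 1.25, 2.25

total_low = WINE_BOTTLE_L + ALE_LOW_L
total_high = WINE_BOTTLE_L + ALE_HIGH_L
print(f"Estimated daily intake: {total_low:.1f}-{total_high:.1f} litres")
# Prints 2.0-3.0 litres, consistent with the reported two to three liters per day.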
Indeed, Richard's indulgence in food and wine began with his coronation banquet, which was noted for being particularly long and elaborate. The excesses are likely to have continued throughout his short-lived reign.
Read more at Discovery News
Taiwan's Sudden Canyon Disappearing in Record Time
Geologic changes tend to happen in slow motion, over millions of years, so when land formations rise and fall over decades, scientists take notice.
The rocks forming the Daan River gorge in Taiwan rose after a 1999 earthquake -- and the gorge is already in danger of disappearing due to the violent and repeated flooding of the river. While earthquakes reshape the region every 300 to 400 years, the BBC reports, erosion could erase the gorge within the next 50 years.
"The really cool thing about this place is that it's happening so fast, we can watch it," said Kristen Cook, a geologist at the German Research Centre for Geosciences told the BBC. "The river can really efficiently remove all of the evidence. We can see processes that you can't reconstruct."
The 1999 Jiji quake raised the bedrock about 30 feet -- three stories -- and left a half-mile dam across the Daan River in western Taiwan.
In 2004, the river overtopped the dam, carrying along material that cut a new gorge, which locals refer to as the Grand Canyon of the Daan River, the BBC reports.
"That's one of the exciting things," Cook said. "We expect the process to be the same, but sped up."
As quickly as it formed, the gorge is eroding away, at a rate of about 55 feet a year.
"As the upstream boundary of the gorge keeps moving downstream, the gorge will get shorter and shorter until the upstream boundary reaches the exit of the gorge, and the whole thing is gone," Cook told New Scientist.
From Discovery News