As peach trees in the Niagara Region of Ontario give up the last of their fruit for the season, their ancestors halfway around the globe are clamouring for attention.
In a study published in PLOS ONE, Gary Crawford, a U of T Mississauga anthropology professor, and two Chinese colleagues propose that the domesticated peaches enjoyed worldwide today can trace their ancestry back at least 7,500 years to the lower Yangtze River Valley in southern China, not far from Shanghai. The study, headed by Yunfei Zheng from the Zhejiang Institute of Archeology in China's Zhejiang Province, was done in collaboration with Crawford and X. Chen, another researcher at the Zhejiang Institute.
"Previously, no one knew where peaches were domesticated," said Crawford. "None of the botanical literature suggested the Yangtze Valley, although many people thought that it happened somewhere in China."
Radiocarbon dating of ancient peach stones (pits) discovered in the Lower Yangtze River Valley indicates that the peach diverged from its wild ancestors as early as 7,500 years ago.
Archeologists have a good understanding of the domestication (conscious breeding for traits preferred by people) of annual plants such as grains (rice, wheat, etc.), but the role of trees in early farming and how trees were domesticated is not well documented. Unlike most trees, the peach matures very quickly, producing fruit within two to three years, so selection for desirable traits could become apparent relatively quickly. The problem that Crawford and his colleagues faced was how to recognize the selection process in the archeological record.
Peach stones are well represented at archeological sites in the Yangtze valley, so the researchers compared the size and structure of stones from six sites spanning roughly 5,000 years. By comparing stone sizes from each site, they could discern that peaches grew significantly larger over time in the Yangtze valley, demonstrating that domestication was taking place. The first peach stones in China most similar to modern cultivated forms are from the Liangzhu culture, which flourished 4,300 to 5,300 years ago.
"We're suggesting that very early on, people understood grafting and vegetative reproduction, because it sped up selection," Crawford said. "They had to have been doing such work, because seeds have a lot of genetic variability, and you don't know if a seed will produce the same fruit as the tree that produced it. It's a gamble. If they simply started grafting, it would guarantee the orchard would have the peaches they wanted."
Crawford and his colleagues think that it took about 3,000 years before the domesticated peach resembled the fruit we know today.
"The peaches we eat today didn't grow in the wild," Crawford added. "Generation after generation kept selecting the peaches they enjoyed. The product went from thinly fleshed, very small fruit to what we have today. Peaches produce fruit over an extended season today but in the wild they have a short season. People must have selected not only for taste and fruit size, but for production time too."
Discovering more about the origins of domesticated peaches tells us more about our human ancestors, too, Crawford noted.
Crops such as domesticated peaches indicate that early people weren't passive in dealing with the environment. Not only did they understand grain production, but they were also manipulating woodlands and certain trees early on.
Read more at Science Daily
Sep 6, 2014
Mosaics Revealed at Alexander the Great-Era Tomb
New photographs of the burial mound complex at the Kasta Hill site at Amphipolis, built during the time of Alexander the Great, show traces of a blue fresco on a wall and a stunning mosaic floor.
Made from irregular pieces of white marble inlaid on a red background, the mosaics add to previous discoveries made by Katerina Peristeri, the archaeologist in charge of the dig, and her team.
The findings include beams decorated with embossed rosettes, a pebbled floor decorated with black and white, diamond-shaped pieces, and a couple of headless and wingless seated sphinxes.
Standing nearly 5 feet high and weighing about 1.5 tons each, the sphinxes guarded the tomb entrance. An impressive 16-foot-tall marble lion statue, currently standing on a nearby road, originally topped the mound.
According to Greek media, the next couple of weeks will be crucial: excavation might finally reveal who is buried in the unique mound.
Dating between 325 B.C. — two years after the death of warrior king Alexander the Great — and 300 B.C., the tomb lies in the ancient city of Amphipolis, in Greece’s northeastern Macedonia region about 65 miles from the country’s second-biggest city, Thessaloniki.
At 1,600 feet wide, the mound is the largest tomb of its kind ever discovered in Greece.
Wild speculation — that the body of Alexander the Great lies in the mound — continues in Greek media. It’s a claim, or hope, that archaeologists and historians strongly dismiss.
History has it that after Alexander died in Babylon, now in central Iraq, in 323 B.C., his body, en route to Macedon, was hijacked by Ptolemy and taken to Egypt. The sarcophagus of the warrior king was then moved from Memphis to Alexandria, the capital of his kingdom, and there it remained until Late Antiquity.
By the fourth century A.D., the tomb’s location was no longer known.
But scholarly skepticism doesn’t seem to affect Greek hopes for an extraordinary find that might boost the country’s shrinking economy.
“Alexander is helping Greece after 2,400 years,” the Amfipoli News site wrote.
Indeed, strong investor interest is reported in land in the small nearby village of Mesolakia, whose future could change dramatically.
“Residents with land in the area are calculating the expropriation compensations they will receive, while others plan to start new businesses, opening cafes, t-shirt and souvenir shops,” the Greek Reporter wrote.
While proposals are being made to include Amphipolis in cruise ship routes, tourist agencies have already started tours and daily trips to the site, with coaches leaving from Thessaloniki.
Although tourists are only able to see the massive mound from the outer fence, they can visit other parts of the archaeological site and a museum that chronicles the history of the site from prehistoric times down to the Byzantine period.
The city, an Athenian colony, was conquered by Philip II of Macedon, Alexander’s father, in 357 B.C.
Prominent generals and admirals of Alexander had links with Amphipolis. It’s here that Alexander’s wife Roxana and his son Alexander IV were killed in 311 B.C. on the orders of his successor, King Cassander.
According to the Greek Reporter, Amphipolis has now become an "archaeological Disneyland," the tomb discovery treated with much fanfare and well-orchestrated announcements.
Read more at Discovery News
Sep 5, 2014
Monet Discovered in Suitcase of 'Nazi Art' Hoarder
Berlin (AFP) - The reclusive son of a Nazi-era art dealer who amassed a giant secret collection snuck a Monet with him into the German hospital where he died in May, investigators said Friday.
The executor of Cornelius Gurlitt's estate discovered the French Impressionist artwork in a suitcase handed over to him by the clinic this week.
"The work on paper shows a landscape in light blue," the government task force investigating the hoard said in a statement.
The executor informed a court in the southern city of Munich of the findings, the task force said.
"An initial look through the Monet catalogue of works indicates that it may have been completed in 1864," given its similarity to the painting "Vue de Sainte-Adresse" finished that year.
Gurlitt had hidden 1,280 paintings, drawings and sketches -- believed to be worth hundreds of millions of dollars and including masterpieces by Picasso and Chagall -- in his Munich flat for decades.
Many of the works, which were seized in early 2012 when they were discovered by chance during a tax evasion probe, are believed to have been stolen or extorted from Jews under a Nazi scheme to systematically plunder valuable art collections.
Gurlitt, who never married or had children, described the works in an interview with a German magazine as "the love of his life".
Before his death at the age of 81, Gurlitt struck a deal with the German government to help track down the rightful owners of the artwork.
In the course of its investigations, the task force has announced spectacular new finds including sculptures thought to be by Degas and Rodin uncovered in Gurlitt's cluttered flat in July.
A day after Gurlitt's May 6 death, Switzerland's Museum of Fine Arts in Bern said it had been astonished to learn it was named as the recipient of Gurlitt's collection in his will, an offer it said it was assessing.
Read more at Discovery News
'Last Supper' Papyrus May Be One of Oldest Christian Charms
A 1,500-year-old fragment of Greek papyrus with writing that refers to the biblical Last Supper and "manna from heaven" may be one of the oldest Christian amulets, say researchers.
The fragment was likely folded up and worn inside a locket or pendant as a sort of protective charm, according to Roberta Mazza, who spotted the papyrus while looking through thousands of papyri kept in the library vault at the John Rylands Research Institute at the University of Manchester in the United Kingdom.
"This is an important and unexpected discovery as it's one of the first recorded documents to use magic in the Christian context and the first charm ever found to refer to the Eucharist — the Last Supper — as the manna of the Old Testament," Mazza said in a statement. The fragment likely originated in a town in Egypt.
The text on the papyrus is a mix of passages from Psalm 78:23-24 and Matthew 26:28-30, among others, said Mazza, who is a research fellow at the institute. "To this day, Christians use passages from the Bible as protective charms so our amulet marks the start of an important trend in Christianity."
The translated text on the papyrus reads:
"Fear you all who rule over the earth.
Know you nations and peoples that Christ is our God.
For he spoke and they came to being, he commanded and they were created; he put everything under our feet and delivered us from the wish of our enemies.
Our God prepared a sacred table in the desert for the people and gave manna of the new covenant to eat, the Lord's immortal body and the blood of Christ poured for us in remission of sins."
People of the time believed such passages had magical powers, Mazza told Live Science. Supporting that idea, creases can be seen on the fragment, Mazza said, suggesting the papyrus was folded into a rectangular packet measuring 3 by 10.5 centimeters (1.2 by 4.1 inches), and either placed into a box at home or worn around a person's neck.
The amulet was written on the back of a receipt that seems to be for payment of a grain tax. The nearly illegible text refers to a tax collector from the village of Tertembuthis, located in the countryside of Hermoupolis, an ancient city in what is now the Egyptian town of el-Ashmunein.
"The text says that the receipt was released in the village of Tertembuthis. Therefore we may reasonably guess that the person who re-used the back for writing the amulet was from that same village or the region nearby, although we cannot exclude other hypotheses," Mazza told Live Science.
Carbon analysis dates the fragment to between A.D. 574 and 660, Mazza said. And while the creator knew the Bible, he or she made plenty of mistakes. "Some words are misspelled and others are in the wrong order," Mazza said in the statement. "This suggests that he was writing by heart rather than copying it."
The discovery, which Mazza presented this week at an international conference on papyri at the university's research institute, reveals that Christians adopted an ancient Egyptian practice of wearing such charms to ward off danger.
Read more at Discovery News
Ancient Roman Jewelry Found Under Shop
A 2,000-year-old story of terror and devastation has been brought to light during renovation work at an English department store, revealing one of the finest collections of Roman jewelry as well as human remains of people who were slaughtered at the site.
The jewelry had been undisturbed since 61 A.D. in Colchester, some 50 miles northeast of London. It was found in a wooden box and bags under a department store in the town’s high street.
The small treasure includes three gold armlets, a silver chain necklace, two silver bracelets, a silver armlet, a small bag of coins and a small jewelry box containing two sets of gold earrings and four gold finger-rings.
According to Philip Crummy, the director of Colchester Archaeological Trust who excavated the area, the jewelry belonged to a wealthy Roman woman who may not have survived to recover her treasure.
“The find is a particularly poignant one because of its historical context,” Crummy said in a statement.
“It seems likely that the owner or perhaps one of her slaves buried the jewelry inside her house for safe-keeping during the early stages of the Boudican Revolt, when prospects looked bleak,” he added.
The revolt against Roman rule was led in 60-61 A.D. by the warrior queen Boudicca of the Iceni, a British tribe. In her unsuccessful attempt to defeat the Romans, Boudicca, also known as Boadicea, burned three towns to the ground.
Colchester was her first target.
“The inhabitants knew a large British army was marching towards them and they knew that they were practically defenseless with only a small force of soldiers on hand and no town defenses,” Crummy said.
“Imagine their panic and desperation when they learned of the massacre of a large part of the Roman Ninth Legion on its way to relieve them,” he added.
Terrified, the Roman woman hastily hid her valuable jewelry in a small pit dug in the floor of her house, hoping to come back and recover her belongings. But after a two-day siege, the fate of her home was sealed.
Near the jewelry, Crummy and his team found vivid evidence of the last dramatic moments in the house.
Foodstuffs including dates, figs, wheat, peas and grain lay burnt black on the floor with a collapsed wooden shelf. The ingredients were carbonized by the heat of the fire, so their shapes were preserved perfectly.
“The dates appeared to have been kept on the shelf in a square wooden bowl or platter,” Crummy said.
In the thick red and black debris layer left by the revolt, the archaeologists also found human remains, including part of a jaw and a shin bone. They appear to have been cut by a sword.
“These remains suggest that at least one person fought and died in the building during the revolt,” the archaeologists said.
Read more at Discovery News
Island Rising Out of Pacific Could Be Tsunami Hazard
Nine months after a new volcanic island broke through the surface of the western Pacific Ocean and merged with the existing island of Nishinoshima, about 600 miles south of Tokyo, the combined island is still giving off smoke as lava flows add some 200,000 cubic meters to its volume per day. (That's enough to fill 80 Olympic-sized swimming pools.)
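The swimming-pool comparison checks out, assuming the standard 50 m x 25 m x 2 m Olympic pool (2,500 cubic meters):

```python
# Daily growth of the island from lava flow, per the article
daily_growth_m3 = 200_000

# Volume of an Olympic-sized pool at the standard minimum depth: 50 m x 25 m x 2 m
pool_volume_m3 = 50 * 25 * 2  # 2,500 cubic meters

pools_per_day = daily_growth_m3 / pool_volume_m3
print(pools_per_day)  # 80.0
```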
The birth of a new island might seem like a pretty cool thing, but there are a few distinct downsides.
"If lava continues to mount on the eastern area, it will be deposited on steep slopes," University of Tokyo scientist Fukashi Maeno explained in an email to NASA’s Earth Observatory website. "This could cause instability on the slope, so a partial collapse of the island may occur. We need to carefully observe the growth process."
Maeno told Agence France-Presse that if the new hybrid island collapses, it could unleash a tsunami on nearby inhabited areas. He calculated that seismic waves from 12 million cubic meters of collapsing volcanic rock would create a tsunami that would send three feet of water slamming into the nearby town of Chichijima and its 2,000 inhabitants within minutes, traveling at bullet-train speed.
An official from the Japan Meteorological Agency, which monitors earthquakes and tsunamis in addition to weather, told AFP that the agency’s scientists already are monitoring the island.
"We studied the simulation this morning, and we are thinking of consulting with earthquake prediction experts… about the probability of this actually happening, and what kind of measures we would be able to take," the official told AFP.
There's also apparently the possibility that the island could explode. According to Asahi Shimbun, Japan Coast Guard officials say that a cone-shaped mound of congealed lava inside a volcanic vent there could seal off movement of magma and raise interior pressure within the island, which might eventually result in a large-scale explosion.
From Discovery News
Sep 4, 2014
Fish Spits With Impressive Precision to Kill Prey
Archerfish may look like peaceful swimmers, but they spit water at unsuspecting prey with remarkable power and precision.
The fish turn out to be the first known tool-using animal to change the hydrodynamic properties of a free jet of water, according to a new study published in the journal Current Biology.
Even humans who impressively spit things like watermelon seeds cannot compete with the accuracy of these water-shooting fish. Archerfish are found mostly in brackish mangrove waters and freshwater bodies of Thailand, India, Australia, the Philippines, and certain other countries.
A better comparison, where humans are concerned, has to do with our talent for throwing objects with our hands.
"One of the last strongholds of human uniqueness is our ability to powerfully throw stones or spears at distant targets," co-author Stefan Schuster said in a press release. “This is really an impressive capability and requires — among many fascinating aspects — precise time control of movement."
"It is believed that this ability has forced our brains to become bigger, housing many more neurons to afford the precision," continued Schuster, who is a researcher at the University of Bayreuth. "With the many neurons around, they could be used for other tasks apart from applying them for powerful throws. It is remarkable that the same line of reasoning could also be applied to archerfish."
For the study, Schuster and co-author Peggy Gerullis trained the relatively big-brained fish to hit targets ranging in height from approximately 8 to 24 inches from a fixed position. The researchers then monitored various aspects of jet production and propagation (wave movement) as the fish did their thing.
The target training sessions revealed that the time needed before water masses up at the jet tip isn't fixed. Archerfish instead make adjustments to ensure that a nice drop of water forms just before impact.
The fish achieve this by modulating the dynamics of changes in the cross-section of their mouth opening, the researchers report. The timing adjustments that archerfish must make to powerfully hit their targets over an extended range are comparable to the "uniquely human" ability of powerful throwing.
A primary urge led to the evolution of the fish's skill, and that is hunger.
"The predominant impression from our field work in Thailand over several years is that there is very little to actually shoot at, so it's important for the fish to be efficient," Schuster said. "It pays to be able to powerfully hit prey over a wide range of distances."
Usually the fish shoot water at a surprised victim, such as a little lizard on a twig above the water. The stunned lizard is forced off the twig and into the water, where the fish gobbles it up.
Coming soon to a store near you could be water nozzles modeled after the fish's natural mechanism for controlling water. Adjustable jets are big business in many industries, including medicine.
Read more at Discovery News
The fish turn out to be the first known tool-using animal to change the hydrodynamic properties of a free jet of water, according to a new study published in the journal Current Biology.
Even humans who impressively spit things like watermelon seeds cannot compete with the accuracy of these water-shooting fish. Archerfish are found in the open ocean and in water bodies of Thailand, India, Australia, the Philippines, and certain other countries.
A better comparison, where humans are concerned, has to do with our talent for throwing objects with our hands.
"One of the last strongholds of human uniqueness is our ability to powerfully throw stones or spears at distant targets," co-author Stefan Schuster said in a press release. "This is really an impressive capability and requires -- among many fascinating aspects -- precise time control of movement."
"It is believed that this ability has forced our brains to become bigger, housing many more neurons to afford the precision," continued Schuster, who is a researcher at the University of Bayreuth. "With the many neurons around, they could be used for other tasks apart from applying them for powerful throws. It is remarkable that the same line of reasoning could also be applied to archerfish."
For the study, Schuster and co-author Peggy Gerullis trained the relatively big-brained fish to hit targets ranging in height from approximately 8 to 24 inches from a fixed position. The researchers then monitored various aspects of jet production and propagation (wave movement) as the fish did their thing.
The target training sessions revealed that the time needed before water masses up at the jet tip isn't fixed. Archerfish instead make adjustments to ensure that a nice drop of water forms just before impact.
The fish achieve this by dynamically adjusting the cross-section of their mouth opening, the researchers report. The timing adjustments that archerfish must make to powerfully hit their targets over an extended range are comparable to the "uniquely human" ability of powerful throwing.
One primary urge drove the evolution of the fish's skill: hunger.
"The predominant impression from our field work in Thailand over several years is that there is very little to actually shoot at, so it's important for the fish to be efficient," says Stefan Schuster of the University of Bayreuth in Germany. "It pays to be able to powerfully hit prey over a wide range of distances."
Usually the fish shoot water at a surprised victim, such as a little lizard on a twig above the water. The stunned lizard is forced off the twig and into the water, where the fish gobbles it up.
Coming soon to a store near you could be water nozzles modeled after the fish's natural mechanism for controlling water. Adjustable jets are big business in many industries, including medicine.
Read more at Discovery News
Ancient Mammal Relatives Were 'Night Owls'
Mammals were long thought to have evolved nocturnal lifestyles as a way to co-exist with dinosaurs, but new research finds that nighttime behavior may have evolved 100 million years earlier than mammals did.
The eye bones of ancestors of modern mammals that lived more than 300 million years ago, such as the sail-finned carnivore Dimetrodon, suggest that at least some of these animals were already active at night, researchers say.
"Traditionally, dinosaurs were considered day-active, and mammals were regarded as living in the shade of them, living at night," said Lars Schmitz, a biologist at Claremont McKenna, Pitzer and Scripps colleges in Claremont, California, and co-author of the study, detailed today (Sept. 3) in the journal Proceedings of the Royal Society B.
But recent studies by Schmitz and others have shown that some dinosaurs were probably nocturnal, so the scientists suspect nighttime behavior arose much earlier than with the first mammals.
Approximately half of all living land mammals are nocturnal, and many more are active at twilight, the researchers said. Humans and other primates are some of the exceptions; they're active primarily during the day. But it would be "a bit self-centered" to assume diurnal behavior is the norm, Schmitz told Live Science.
Mammals are part of a larger group, called synapsids, that includes mammals and all extinct relatives that were more closely related to living mammals than to birds, reptiles or amphibians. The oldest synapsid fossils date back to 315 million years ago, and the first mammals don't appear in the fossil record until about 200 million years ago, the researchers said.
To investigate when nocturnal behavior first evolved, Schmitz and his colleagues from The Field Museum in Chicago examined some of the more ancient synapsid fossils from museum collections in the United States and South Africa. "We tried to look at bones that may be related to their ability to see at night," Schmitz said.
Specifically, the researchers were interested in tiny, ringlike bones in the eye known as scleral ossicles, which provide clues to the size and shape of the eyes. Modern mammals lack these bones, but they are found in the eyes of many other vertebrates. The bones are very fragile, however, so they aren't often preserved in fossils.
The researchers used a statistical technique to analyze the tiny bones, comparing them with the bones of living animals to confirm that the method was reliable.
The findings suggest that some synapsids were active during the daytime and others during the nighttime, and some were active at twilight. Interestingly, some of the oldest synapsids they looked at, including Dimetrodon, had eyes whose size indicates these animals were likely active at night.
The findings help researchers understand the biology of these mammalian ancestors. "How were they living? How were they dividing resources?" Schmitz said. "With these little puzzle pieces, we can start building this image of what life may have been like 250 million years ago."
Jörg Fröbisch, a biologist at Humboldt University of Berlin, in Germany, who specializes in synapsid evolution, said he was "very excited" about the findings. "All the evidence seems to point to the fact that there was a wide variety of behavior" among these mammalian ancestors, said Fröbisch, who knows the authors of the new study but was not involved in it.
Fröbisch's colleague at Humboldt, evolutionary biologist Christian Kammerer, agreed. "Mammals are thought to be ancestrally nocturnal, but this study demonstrates that this was not some crucial novelty in mammalian evolution: There had been nocturnal members of the mammalian stem lineage for a hundred million years," Kammerer told Live Science.
Read more at Discovery News
Colossal New Dinosaur Was One of World's Largest
A newly unearthed “supermassive” dinosaur, Dreadnoughtus schrani, is the largest known land animal for which mass can be accurately calculated, concludes a new study.
The dinosaur, whose scientific name means “fear nothing” and is nicknamed “Dread,” is described in the latest issue of Scientific Reports. Dread measured 85 feet long and weighed about 65 tons, according to the study, which also reports that the dinosaur’s skeleton is the most complete ever found for its type.
Lead author Kenneth Lacovara said that Dread “weighed as much as a dozen African elephants or more than seven T. rex.” Not surprisingly, this dinosaur had no known predators. If its sheer massiveness did not scare away hungry attackers, its “weaponized tail” would have.
“No doubt Dread would use its amazingly muscled tail to fend off attack,” Lacovara, an associate professor in Drexel University’s College of Arts and Sciences, told Discovery News. “If an animal made it under its tail, three large claws would be waiting for them on each back foot.”
He added, “I can’t imagine that it was a good idea to attack a full-grown, healthy Dreadnoughtus.”
The otherwise peaceful herbivore likely spent its days fulfilling what Lacovara calls “a life-long obsession with eating.”
“Every day is about taking in enough calories to nourish this house-sized body,” he explained. “I imagine their day consists largely of standing in one place. You have this 37-foot-long neck balanced by a 30-foot-long tail in the back. Without moving your legs, you have access to a giant feeding envelope of trees and fern leaves.”
Once the forest section had been cleared, Dread would probably then take a few steps to the left or right and start feasting all over again.
Lacovara suspects that Dread had a stomach the size of a horse. Food would remain there for long periods, churning in stomach acids that efficiently extracted nutrients.
The dinosaur, represented by two specimens, was unearthed in southern Patagonia. Dread lived approximately 77 million years ago in a temperate forest at the southern tip of South America.
Based on sedimentary deposits at the site, the two dinosaurs were buried quickly after a river flooded and broke through its natural levee, which would have turned the ground into something like quicksand. The dinosaurs’ rapid and deep burial, Lacovara said, accounts for the extraordinary level of fossil completeness.
“Its misfortune was our luck,” he added.
Dread was a titanosaur, a type of four-legged, plant-eating dinosaur that lived from the Middle Jurassic to the Late Cretaceous period.
“Titanosaurs are a remarkable group of dinosaurs, with species ranging from the weight of a cow to the weight of a sperm whale or more,” said paleontologist Matthew Lamanna, a curator at the Carnegie Museum of Natural History.
“But the biggest titanosaurs,” Lamanna continued, “have remained a mystery because, in almost all cases, their fossils are very incomplete.”
Another titanosaur, Argentinosaurus, could knock Dread off its top size spot in the dinosaur record books if more complete specimens for it are found.
Read more at Discovery News
Massive Extinct Volcano Discovered Beneath Pacific
Lurking some 3.2 miles (5.1 kilometers) beneath the Pacific Ocean, a massive mountain rises up from the seafloor, say scientists who discovered the seamount using sonar technology.
The seamount is about two-thirds of a mile high (1.1 kilometers), researchers said. Seamounts, rocky leftovers from extinct, underwater volcanoes, are found on ocean floors around the world. The newly discovered seamount is about 186 miles (300 km) southeast of Jarvis Island, an uninhabited island in a relatively unexplored part of the South Pacific Ocean, experts said.
"These seamounts are very common, but we don't know about them, because most of the places that we go out and map have never been mapped before," James Gardner, a University of New Hampshire research professor who works at the university's NOAA Center for Coastal and Ocean Mapping/Joint Hydrographic Center, said in a statement.
Gardner's team found the seamount on Aug. 13, less than five days into an expedition to map the outer limits of the U.S. continental shelf. They used a 12-kHz multibeam echo sounder, which uses sonar to detect contours on the ocean floor. Late that night, the seamount appeared "out of the blue," Gardner said in the statement.
The multibeam echo sounder gave the researchers an advantage over other mapping methods. Low-resolution satellite data have revealed images of most of Earth's seafloor, but the technique is not advanced enough to capture most seamounts.
"Satellites just can't see these features and we can," Gardner said.
The researchers have yet to explore the effects of the as-yet-unnamed seamount on the surrounding environment, but these underwater mountains often host diverse marine life, such as commercially important fish species, research finds. However, the newly found seamount is too deep underwater to provide a home for rich fisheries, he said.
Still, because the seamount is so far underwater, it won't be a navigational hazard. The United States has jurisdiction over the volcanic seamount and the waters above it, Gardner added.
Read more at Discovery News
Sep 3, 2014
New Mushroom-Shaped Animals Found in Deep Sea
Mushroom-shaped animals that look more like something from a pizza than the animal kingdom have been discovered in deep water off the coast of Australia.
The unusual organisms, described in the latest issue of PLoS ONE, appear to be “living fossils,” a term that refers to animals whose only known relatives are extinct and in fossil form.
Co-author Jørgen Olesen, an associate professor and curator at the Natural History Museum of Denmark, explained in a press release that the creatures are “new mushroom-shaped animals discovered from the deep sea that could not be placed in any recognized group of animals.”
He added, “Two species are recognized and current evidence suggests that they represent an early branch on the tree of life, with similarities to the 600 million-year-old extinct Ediacara fauna.”
The Ediacara fauna (also known as the Ediacara biota) consisted of enigmatic tubular and frond-shaped organisms that were mostly immobile, living their lives attached to something, on par with today’s barnacles on boats. They lived during the Ediacaran Period from around 635–542 million years ago.
The new mushroom-shaped creatures, named Dendrogramma enigmatica and Dendrogramma discoides, are very much alive, though. Olesen, lead author Jean Just from the University of Copenhagen and their colleagues found the animals living around 3,281 feet below the surface on the south-east Australian continental slope.
The researchers describe them as being “multicellular and mostly non-symmetrical, with a dense layer of gelatinous material between the outer skin cell and inner stomach cell layers.”
Read more at Discovery News
Stonehenge Was Once A Complete Circle
A short hosepipe may have solved one of the many mysteries of Stonehenge, showing that the iconic monument was once a perfect, complete circle.
Dry weather in summer 2013 at the Wiltshire monument revealed marks of parched grass in an area that had not been watered.
According to a report in the journal Antiquity, the patchmarks represent the position of the missing sarsen stones which once completed the Neolithic circle.
"Despite being one of the most intensively explored prehistoric monuments in western Europe, Stonehenge continues to hold surprises," English Heritage steward Tim Daw and colleagues wrote.
Located in the county of Wiltshire, at the center of England's densest complex of Neolithic and Bronze Age monuments, Stonehenge has been the subject of myth, legend and -- more recently -- scientific research for more than eight centuries.
The monument was likely built in several stages, with "the unique lintelled stone circle being erected in the late Neolithic period around 2500 B.C.," according to English Heritage.
Researchers have long investigated each part of the monument -- the massive central trilithons, the smaller bluestone settings, the sarsen circle capped by lintels, the outer bank and ditch -- debating whether the outer prehistoric stones were once completely round.
Indeed, the possibility that Stonehenge was an intentionally incomplete monument, with the sarsen circle only finished on the north-eastern side, has been suggested since the mid-18th century.
Now a hosepipe too short to water the outer part of the circle, where no stones still stand, may have provided a definitive answer, revealing what excavations and high-resolution geophysical surveys had failed to find.
Spotted by Daw, the patches on the ground -- believed to be "stone holes" -- appeared in the sarsen circle exactly where stones were expected to stand.
Such crop marks are produced when plants grow over features that have been buried in the ground for a long time, even long after they have been removed.
Buried or once-buried structures interfere with plant growth, so plants above them develop at a different rate from those growing immediately adjacent. In this case, deep stone holes may have changed the soil's permeability, affecting the grass growth.
"Plants that may have initially benefited from the easily available rooting and moisture in disturbed ground fail as the soil dries out, whilst the shorter-rooting surrounding plants manage to survive by extracting water directly from the porous chalk," Daw and colleagues wrote.
Read more at Discovery News
Psychedelic Culture Tripped Circa 500 A.D.
Sophisticated drug paraphernalia, complete with a hippy-looking headband, provide evidence that an elite, hallucinogen-using culture flourished at around 500 A.D. in the south-central Andes and lasted there for at least another 600 years.
The items, described in the latest issue of the journal Antiquity, shed light on the lifestyle and belief systems once held by the people of Tiwanaku, an ancient city-state located near Lake Titicaca, Bolivia.
The objects, which include “snuffing tablets,” a wooden snuffing tube, spatulas, a multi-colored textile headband and more, also provide clues to early usage of psychoactive substances.
“Snuffing tablets in the Andes were primarily used by ritual specialists, such as shamans,” lead author Juan Albarracin-Jordan of the Fundación Bartolomé de Las Casas in La Paz, Bolivia, explained to Discovery News. “Psychotropic substances, once extracted from plants, were spread and mixed on the tablets. Inhalation tubes were then used to introduce the substances through the nose into the system.”
Albarracin-Jordan and colleagues José Capriles and Melanie Miller analyzed the items and related objects unearthed during excavations at the site, called Cueva del Chileno. They also found drinking cups known as “kerus,” used for drinking chicha, an alcoholic brew made from fermented corn.
It is now believed that famous surviving monoliths from the region, such as the Bennett monolith, show individuals holding a keru with the left hand and a snuffing tablet with the right.
Clearly such individuals would have been higher than a kite, but this altered state of mind -- based on archaeological and ethnographic evidence -- had spiritual significance to the Tiwanaku.
The function of psychoactive substance users “was to be mediators between the natural and the supernatural,” Albarracin-Jordan said. “They were also conflict brokers between the living and the dead.”
“Patients” of the individuals might have received tobacco and stimulants meant to treat health conditions, according to the researchers. They added that since the Tiwanaku wore masks and hides depicting predators like pumas and condors, the drugs also could have been taken during ritual ceremonies involving these species. Evidence for both animal and human sacrifice has been found at the site.
The snuffing tablets suggest a more romantic scenario as well.
Capriles explained that “the smaller snuffing tray seems to depict a male-female couple in a tender position, as the male seems to have one hand over the female's belly and the female has an arm holding the male's back.”
“The female-male dyad is traditionally important in Andean societies as a form of mutual complementarity.”
The researchers further believe that elite members of the Tiwanaku society held tight control over the access and circulation of mind-altering substances, although the general populace might have been given limited access to them during private healing ceremonies or public events.
Read more at Discovery News
The items, described in the latest issue of the journal Antiquity, shed light on the lifestyle and belief systems once held by the people of Tiwanaku, an ancient city-state located near Lake Titicaca, Bolivia.
The objects, which include “snuffing tablets,” a wooden snuffing tube, spatulas, a multi-colored textile headband and more, also provide clues to early usage of psychoactive substances.
“Snuffing tablets in the Andes were primarily used by ritual specialists, such as shamans,” lead author Juan Albarracin-Jordan of the Fundación Bartolomé de Las Casas in La Paz, Bolivia, explained to Discovery News. “Psychotropic substances, once extracted from plants, were spread and mixed on the tablets. Inhalation tubes were then used to introduce the substances through the nose into the system.”
Albarracin-Jordan and colleagues José Capriles and Melanie Miller analyzed the items and related objects unearthed during excavations at the site, called Cueva del Chileno. They also found drinking cups known as “kerus,” used for drinking chicha, an alcoholic brew made from fermented corn.
It is now believed that famous surviving monoliths from the region, such as the Bennett monolith, show individuals holding a keru with the left hand and a snuffing tablet with the right.
Clearly such individuals would have been higher than a kite, but this altered state of mind -- based on archaeological and ethnographic evidence -- had spiritual significance to the Tiwanaku.
The function of psychoactive substance users “was to be mediators between the natural and the supernatural,” Albarracin-Jordan said. “They were also conflict brokers between the living and the dead.”
“Patients” of the individuals might have received tobacco and stimulants meant to treat health conditions, according to the researchers. They added that since the Tiwanaku wore masks and hides depicting predators like pumas and condors, the drugs also could have been taken during ritual ceremonies involving these species. Evidence for both animal and human sacrifice has been found at the site.
The snuffing tablets suggest a more romantic scenario as well.
Capriles explained that “the smaller snuffing tray seems to depict a male-female couple in a tender position, as the male seems to have one hand over the female's belly and the female has an arm holding the male's back.”
“The female-male dyad is traditionally important in Andean societies as a form of mutual complementarity.”
The researchers further believe that elite members of the Tiwanaku society held tight control over the access and circulation of mind-altering substances, although the general populace might have been given limited access to them during private healing ceremonies or public events.
Read more at Discovery News
Fantastically Wrong: The Imaginary Radiation That Shocked Science and Ruined Its ‘Discoverer’
One of Blondlot’s supposed photographs of N-rays, which never actually existed.
Blondlot had been experimenting with X-rays to see if they were in fact waves or a stream of particles, Paul Collins writes in Banvard’s Folly: 13 Tales of People Who Didn’t Change the World. Firing X-rays through a charged electric field, Blondlot expected that if they were waves, the field would shift their path into a detector off to the side and brighten an electric spark within. “And that’s just what they did,” Collins writes. “Blondlot proved, quite correctly, that X-rays are actually waves.”
Next he fired X-rays through a quartz prism, which already had been shown not to reflect such radiation. Problem is, out of the corner of his eye, Blondlot noticed that the electric spark in the detector got brighter, as if X-rays actually had been deflected into it. Except they couldn’t possibly have been. So Blondlot leapt to a bit of a conclusion: he had discovered something entirely different, N-rays. It was a leap that would prove to be the end of his reputation.
At first, though, his discovery caused a sensation. It didn’t hurt that the scientific community was at the time a bit gaga over radiation, according to Collins. We’d known about X-rays for less than a decade, and the discovery of radio waves and gamma rays soon followed. Huzzah, then, for Blondlot’s discovery of N-rays!
Blondlot’s collection of N-ray experiments.
The sun seemed to emit the rays, but only until clouds passed over, Blondlot claimed. And anything that basked in the sun’s light, including you and me, would absorb N-rays like we would UV radiation. “Sea water and the stones exposed to solar radiation store up N-rays which they afterwards restore,” he wrote. “Possibly these phenomena play some hitherto unperceived part in certain terrestrial phenomena. Perhaps, also, N-rays are not without influence on certain phenomena of animal and vegetable life.” What phenomena these may be, Blondlot was mum.
Since his initial discovery, Blondlot had graduated from observing the spark of a detector to using phosphorescent screens that lit up, however faintly, when bombarded with N-rays. And he insisted that those scientists interested in replicating his experiment follow his procedures exactly, shutting themselves in a darkened room and allowing their eyes to acclimate for a half hour.
And don’t even dare think about watching the screens head-on. No, you must see them out of the corner of your eye. Some observers will pick it up just fine, but “for others,” Blondlot warned, “these phenomena lie almost at the limit of what they are able to discern, and it is only after a certain amount of practice that they succeed in catching them easily, and in observing them with complete certainty.”
The University of Nancy, where Blondlot made his ill-fated discovery of N-rays.
But if this experiment is sounding rather subjective to you, you would have been right in league with any number of scientists trying to replicate Blondlot’s results. They couldn’t do it. Well, except the French, it seemed, including a scientist named Augustin Charpentier, who made a rather startling discovery: Our bodies, like the sun, emit N-rays, especially when we’re getting our pump on. “Stand behind a big enough phosphorescent screen in a dark room and flex your arms,” Collins recaps, “and a faint outline of your body would appear, with slightly brighter spots around your biceps and the Broca’s Area of the brain.”
But wait, there’s more. Not only were N-rays a pretty sweet party trick, they were also really good for you. Charpentier started firing the rays at human test subjects, not to mention dogs and frogs. N-rays beamed at the tongue and ears and nose or even frontal lobe would supercharge your senses, he claimed. Whether any of the subjects turned into superheroes, though, is lost to history.
Ray Banned
A whole lot of scientists who didn’t happen to be blessed with a lovely French accent hadn’t the slightest clue what Blondlot was seeing, and were more than slightly skeptical of the supposed health benefits of N-rays. One Canadian physicist wondered how Blondlot could take such precise measurements “with a radiation so feeble that no one outside of France has been able to detect at all.” Adds Collins: “Others wondered aloud whether France was in the grip of a spell of self-hypnosis.”
So the skeptics sent in the cavalry. Well, they sent a guy named Robert W. Wood, “a mischievous fellow,” according to Collins, who’d once taken a joyride on the as-yet-unfinished Trans-Siberian Railway. When he arrived in Blondlot’s lab, he was treated to a demonstration of N-rays illuminating the luminescent paint on a card, which he of course could not perceive.
Robert W. Wood ain’t got time for your crackpot theories.
Next Blondlot demonstrated an N-ray spectroscope, which used an aluminum prism to split the rays into distinct and measurable wavelengths. In a dark room, Blondlot read off the spectroscope’s measurements of N-rays. Then Wood asked him to repeat his numbers a second time before reaching into the spectroscope and removing the prism. Yet Blondlot read the exact same numbers as before. That, of course, was problematic both for the experiment and for Blondlot’s career. Wood wrote to the journal Nature of his coup, and with that, the theory of N-rays came tumbling down.
Let this serve as a lesson: Be wary of men who forbid you from looking at something straight-on. Not that he was intentionally trying to pull the wool over our eyes, as it were, but Blondlot’s insistence that the observer only view the luminous effects of N-rays with their peripheral vision guaranteed all kinds of error. It already was known in Blondlot’s time that this perspective produces strange effects on our vision, according to Collins, who cites the experiences of one astronomer: “It is a curious circumstance, that when we wish to obtain a sight of a very faint star, such as one of the satellites of Saturn, we can see it most distinctly by looking away from it, and when the eye is turned full upon it, it immediately disappears.” N-rays were Blondlot’s Saturnian moons.
Read more at Wired Science
Sep 2, 2014
Severed Snake Heads Can Still Deliver a Fatal Bite
Venomous snakes are scary when they're alive, but there's also reason to fear these fanged creatures after they're dead, a recent news report suggests.
The tale of a chef in China who was fatally bitten by the decapitated head of one of the snakes he had chopped up while preparing a rare delicacy known as cobra soup was reported last week in the U.K. Daily Mirror.
While this story might sound too weird to be true, scientific evidence suggests it is entirely plausible.
"Snakes in general are well known for retaining reflexes after death," said Steven Beaupré, a biology professor at the University of Arkansas. Many ectothermic, or cold-blooded, vertebrae— including species of reptiles and amphibians— share this quality, he said.
In fact, there have been previous reports, including in the U.S., of people being bitten by the severed heads of snakes.
For venomous snakes, such as cobras and rattlesnakes, biting is one of the reflexes that can be activated in the brain even hours after the animal dies, Beaupré told Live Science.
The bite reflex is stronger in venomous snakes than it is in some other carnivores because these snakes use their bite differently than other meat-eaters, Beaupré said. Unlike a tiger, for instance, which kills prey by sinking its teeth into an animal's flesh and holding on, snakes aim to deliver just one, extremely quick bite and then move away from their prey before getting trampled.
The rapid-fire attack can occur in less than a second, Beaupré said. In fact, rattlesnakes have been known to envenomate (inject venom into) prey in less than two-tenths of a second, he said.
It's likely that the cobra-chopping chef who reportedly died last week in China was a victim of the snake's quick reflexes, Beaupré said.
Unfortunately for the Chinese chef, a cobra's bite reflex can be triggered even hours after the animal dies, Beaupré said. The man reportedly picked up the snake's head just 20 minutes after he had chopped it off.
"Just because the animals has been decapitated, that doesn't mean the nerves have stopped functioning," Beaupré said. The bodies of snakes have been known to continue rising off the ground in a menacing pose, and even to strike out against a perceived threat, after they've suffered a beheading, he added.
These eerie postmortem movements are fueled by the ions, or electrically charged particles, which remain in the nerve cells of a snake for several house after it dies, Beaupré said. When the nerve of a newly dead snake is stimulated, the channels in the nerve will open up, allowing ions to pass through. This creates an electrical impulse that enables the muscle to carry out a reflexive action, like a bite.
"The bite and envenomation reflex is triggered by some kind of information that comes into the mouth cavity," Beaupré said. "My guess is that this guy put his hand in the snake's mouth after he cut it off. He probably put a finger in there or something and it triggered this response."
Read more at Discovery News
Nightmarish Cricket That Eats Anything Is Now Invading the US
A cricket with a voracious appetite for anything — including members of its own species — is now spreading across the eastern United States with no end to the invasion in sight.
The invader, known as the greenhouse camel cricket (Diestrammena asynamora), is described in the latest issue of the journal PeerJ.
“The good news is that camel crickets don’t bite or pose any kind of threat to humans,” Mary Jane Epps, a postdoctoral researcher at North Carolina State and lead author of the paper, said in a press release.
She was inspired to study the cricket after a colleague experienced a chance encounter with one at home. The cricket was previously known to science, but thought to be prevalent only in its native Asia. In the United States, it had only been spotted in commercial greenhouses -- hence the name -- and wasn’t thought to live anywhere else.
Wrong.
Epps and her team conducted a public survey and discovered that the cannibalistic, eat-anything cricket is all over the eastern states.
“We don’t know what kind of impact this species has on local ecosystems though it’s possible that the greenhouse camel cricket could be driving out native camel cricket species in homes,” Epps said.
She and her team also sampled the yards of 10 homes in Raleigh, N.C. They found large numbers of greenhouse camel crickets, with higher numbers in the areas of the yards closest to homes.
Doing the research, they uncovered the possibility of yet another unusual cricket.
“There appears to be a second Asian species, Diestrammena japanica, that hasn’t been formally reported in the U.S. before, but seems to be showing up in homes in the Northeast,” Epps explained. “However, that species has only been identified based on photos. We’d love to get a physical specimen to determine whether it is D. japanica.”
While invasive species are never a good sign, the researchers urge homeowners not to panic. Although the cricket sounds like fodder for a B-movie, there could be a silver lining to its presence.
“Because they are scavengers, camel crickets may actually provide an important service in our basements or garages, eating the dead stuff that accumulates there,” said Holly Menninger, director of public science in the Your Wild Life lab at NC State and a co-author of the paper.
“We know remarkably little about these camel crickets, such as their biology or how they interact with other species,” Menninger added. “We’re interested in continuing to study them, and there’s a lot to learn.”
From Discovery News
Could Cave Carving Be First Neanderthal Art?
Around 39,000 years ago, a Neanderthal huddled in the back of a seaside cave at Gibraltar, safe from the hyenas, lions and leopards that might have prowled outside. Under the flickering light of a campfire, he or she used a stone tool to carefully etch what looks like a grid or a hashtag onto a natural platform of bedrock.
Archaeologists discovered this enigmatic carving during an excavation of Gorham's Cave two years ago. They had found Neanderthal cut marks on bones and tools before, but they had never seen anything like this. The researchers used Neanderthal tools to test how this geometric design was made — and to rule out the possibility that the "artwork" was just the byproduct of butchery. They found that recreating the grid was painstaking work.
"This was intentional — this was not somebody doodling or scratching on the surface," said study researcher Clive Finlayson, director of the Gibraltar Museum. But the discovery poses much more elusive questions: Did this engraving hold any symbolic meaning? Can it be considered art?
Close cousins
Neanderthals roamed Eurasia from around 200,000 to 30,000 years ago, when they mysteriously went extinct. They were the closest known relatives of modern humans, and recent research has suggested that Neanderthals might have behaved more like Homo sapiens than previously thought: They buried their dead, they used pigments and feathers to decorate their bodies, and they may have even organized their caves.
Despite a growing body of evidence suggesting Neanderthals may have been cognitively similar to modern humans, a lack of art seemed to be "the last bastion" for the argument that Neanderthals were much different from us, Finlayson said.
"Art is something else — it's an indication of abstract thinking," Finlayson told Live Science.
Archaeologists recently pushed back the date of hand stencil paintings found at El Castillo cave in northern Spain to 40,800 years ago, which opens the possibility that Neanderthals created this artwork. But there is no solid archaeological evidence to link Neanderthals to the paintings.
Gorham's Cave
In Gorham's Cave, Finlayson and colleagues were surprised to find a series of deeply incised parallel and crisscrossing lines when they wiped away the dirt covering a bedrock surface. The rock had been sealed under a layer of soil that was littered with Mousterian stone tools (a style long linked to Neanderthals). Radiocarbon dating indicated that this soil layer was between 38,500 and 30,500 years old, suggesting the rock art buried underneath was created sometime before then.
Gibraltar is one of the most famous sites of Neanderthal occupation. At Gorham's Cave and its surrounding caverns, archaeologists have found evidence that Neanderthals butchered seals, roasted pigeons and plucked feathers off birds of prey. In other parts of Europe, Neanderthals lived alongside humans — and may have even interbred with them. But 40,000 years ago, the southern Iberian Peninsula was a Neanderthal stronghold. Modern humans had not spread into the area yet, Finlayson said.
To test whether they were actually looking at an intentional design, the researchers decided to try to recreate the grid on smooth rock surfaces in the cave using actual stone tools left behind in a spoil heap by archaeologists who had excavated the site in the 1950s. More than 50 stone-tool incisions were needed to mimic the deepest line of the grid, and between 188 and 317 total strokes were probably needed to create the entire pattern, the researchers found. Their findings were described yesterday (Sept. 1) in the journal Proceedings of the National Academy of Sciences.
Finlayson and his colleagues also tried to cut pork skin with the stone tools, to test whether the lines were merely the incidental marks left behind after the Neanderthals had butchered meat. But they couldn't replicate the engraving.
"You cannot control the groove if you're cutting through meat, no matter how hard you try," Finlayson said. "The lines go all over the place."
A simple grid is no Venus figurine
The Neanderthals' brand of abstract expressionism might not have impressed Homo sapiens art critics of the day.
"It's very basic. It's very simple," said Jean-Jacques Hublin, director of the Department of Human Evolution at the Max Planck Institute for Evolutionary Anthropology in Germany. "It's not a Venus. It's not a bison. It's not a horse."
By the late Stone Age, modern humans who settled in Europe were already dabbling in representational art. At least a dozen different species of animals — including horses, mammoths and cave lions — are depicted in the Chauvet Cave paintings, which are up to 32,000 years old. The anatomically explicit Venus figurine discovered at Hohle Fels Cave in southwestern Germany dates back to 35,000 years ago. Other busty female statuettes — the Venus of Galgenberg and the Venus of Dolní Věstonice — date back to about 30,000 years ago.
"There is a huge difference between making three lines that any 3-year-old kid would be able to make and sculpting a Venus," Hublin, who was not involved in the study, told Live Science.
Hublin said this discovery doesn't close the question of Neanderthals' cognitive skills. Proof that Neanderthals were capable of making a deliberate rock carving isn't evidence that they were regularly making art, he said.
Read more at Discovery News
Sex Geckos Died in Orbit, Probably Didn't Have Sex
It was shaping up to be the ultimate story of horny reptiles, space adventure and high drama. But sadly for the Russian ‘gecko sex’ space experiment, the story has a very definite anticlimax.
In July, the world became aware that the Foton-M4 satellite was not responding to commands being sent from ground control. Although the satellite’s systems appeared to be working in an automatic mode, commands from the ground were being ignored, spelling ultimate doom for the spacecraft, which would eventually reenter Earth’s atmosphere.
This fact alone made the story interesting, but when we found out the Foton-M4 had a collection of reptilian space travelers on board, there was an added sense of urgency. Fortunately, communications were reestablished with the satellite and the experiment seemed safe.
The reptiles — geckos that were part of an experiment by Russia’s Institute of Medico-Biological Problems investigating sexual reproduction in microgravity — were sealed inside a small habitat, and it was hoped that once they’d become accustomed to their weightless environment, nature would take its course and they’d start having sex, or at least start trying to.
This weekend, the capsule containing the gecko experiment returned to Earth after a controlled reentry over Russia, and scientists were able to access the geckos. Sadly, all space passengers were dead. So dead, in fact, that the five little guys may not have even had the chance to enjoy orbit, let alone try to copulate.
“According to preliminary data, it becomes clear that the geckos (froze to death),” said an agency spokesperson (translated from Russian). “(I)t was due to the failure of the equipment, ensure the necessary temperature in the box with the animals.”
Scientists seem unsure when the experiment failed, but the problem was rooted in the spacecraft’s life support systems that could have malfunctioned at any time during the flight. The experiment wasn’t linked via a live video feed, instead favoring a camera that would record footage on board for scientists to analyze when the mission returned to Earth.
Read more at Discovery News
Sep 1, 2014
Training your brain to prefer healthy foods
It may be possible to train the brain to prefer healthy low-calorie foods over unhealthy higher-calorie foods, according to new research by scientists at the Jean Mayer USDA Human Nutrition Research Center on Aging (USDA HNRCA) at Tufts University and at Massachusetts General Hospital. Published online today in the journal Nutrition & Diabetes, a brain scan study in adult men and women suggests that it is possible to reverse the addictive power of unhealthy food while also increasing preference for healthy foods.
"We don't start out in life loving French fries and hating, for example, whole wheat pasta," said senior and co-corresponding author Susan B. Roberts, Ph.D., director of the Energy Metabolism Laboratory at the USDA HNRCA, who is also a professor at the Friedman School of Nutrition Science and Policy at Tufts University and an adjunct professor of psychiatry at Tufts University School of Medicine. "This conditioning happens over time in response to eating -- repeatedly! -- what is out there in the toxic food environment."
Scientists have suspected that, once unhealthy food addiction circuits are established, they may be hard or impossible to reverse, subjecting people who have gained weight to a lifetime of unhealthy food cravings and temptation. To find out whether the brain can be re-trained to support healthy food choices, Roberts and colleagues studied the reward system in thirteen overweight and obese men and women: eight were participants in a new weight loss program designed by Tufts University researchers, and five were in a control group not enrolled in the program.
Both groups underwent magnetic resonance imaging (MRI) brain scans at the beginning and end of a six-month period. Among those who participated in the weight loss program, the brain scans revealed changes in areas of the brain reward center associated with learning and addiction. After six months, this area had increased sensitivity to healthy, lower-calorie foods, indicating an increased reward and enjoyment of healthier food cues. The area also showed decreased sensitivity to the unhealthy higher-calorie foods.
"The weight loss program is specifically designed to change how people react to different foods, and our study shows those who participated in it had an increased desire for healthier foods along with a decreased preference for unhealthy foods, the combined effects of which are probably critical for sustainable weight control," said co-author Sai Krupa Das, Ph.D., a scientist in the Energy Metabolism Laboratory at the USDA HNRCA and an assistant professor at the Friedman School. "To the best of our knowledge this is the first demonstration of this important switch." The authors hypothesize that several features of the weight loss program were important, including behavior change education and high-fiber, low glycemic menu plans.
"Although other studies have shown that surgical procedures like gastric bypass surgery can decrease how much people enjoy food generally, this is not very satisfactory because it takes away food enjoyment generally rather than making healthier foods more appealing," said first author and co-corresponding author Thilo Deckersbach, Ph.D., a psychologist at Massachusetts General Hospital. "We show here that it is possible to shift preferences from unhealthy food to healthy food without surgery, and that MRI is an important technique for exploring the brain's role in food cues."
Read more at Science Daily
New way to diagnose malaria by detecting parasite's waste in infected blood cells
Over the past several decades, malaria diagnosis has changed very little. After taking a blood sample from a patient, a technician smears the blood across a glass slide, stains it with a special dye, and looks under a microscope for the Plasmodium parasite, which causes the disease. This approach gives an accurate count of how many parasites are in the blood -- an important measure of disease severity -- but is not ideal because there is potential for human error.
A research team from the Singapore-MIT Alliance for Research and Technology (SMART) has now come up with a possible alternative. The researchers have devised a way to use magnetic resonance relaxometry (MRR), a close cousin of magnetic resonance imaging (MRI), to detect a parasitic waste product in the blood of infected patients. This technique could offer a more reliable way to detect malaria, says Jongyoon Han, a professor of electrical engineering and biological engineering at MIT.
"There is real potential to make this into a field-deployable system, especially since you don't need any kind of labels or dye. It's based on a naturally occurring biomarker that does not require any biochemical processing of samples," says Han, one of the senior authors of a paper describing the technique in the Aug. 31 issue of Nature Medicine.
Peter Rainer Preiser of SMART and Nanyang Technological University in Singapore is also a senior author. The paper's lead author is Weng Kung Peng, a research scientist at SMART.
Hunting malaria with magnets
With the traditional blood-smear technique, a technician stains the blood with a reagent that dyes cell nuclei. Red blood cells don't have nuclei, so any that show up are presumed to belong to parasite cells. However, the technology and expertise needed to identify the parasite are not always available in some of the regions most affected by malaria, and technicians don't always agree in their interpretations of the smears, Han says.
"There's a lot of human-to-human variation regarding what counts as infected red blood cells versus some dust particles stuck on the plate. It really takes a lot of practice," he says.
The new SMART system detects a parasitic waste product called hemozoin. When the parasites infect red blood cells, they feed on the nutrient-rich hemoglobin carried by the cells. As hemoglobin breaks down, it releases iron, which can be toxic, so the parasite converts the iron into hemozoin -- a weakly paramagnetic crystallite.
Those crystals interfere with the normal magnetic spins of hydrogen atoms. When exposed to a powerful magnetic field, hydrogen atoms align their spins in the same direction. When a second, smaller field perturbs the atoms, they should all change their spins in synchrony -- but if another magnetic particle, such as hemozoin, is present, this synchrony is disrupted through a process called relaxation. The more magnetic particles are present, the more quickly the synchrony is disrupted.
"What we are trying to really measure is how the hydrogen's nuclear magnetic resonance is affected by the proximity of other magnetic particles," Han says.
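The relaxation effect described above can be sketched with a toy linear model: paramagnetic particles such as hemozoin speed up transverse (T2) relaxation roughly in proportion to their concentration, so a shorter measured T2 implies more parasite waste in the sample. This is a minimal illustration of the principle only; the baseline rate and relaxivity values below are hypothetical, not figures from the study.

```python
# Toy model of MRR detection: paramagnetic hemozoin raises the transverse
# relaxation rate R2 linearly, so T2 = 1/R2 shrinks as concentration grows.
# r2_baseline and relaxivity are illustrative numbers, not measured values.

def t2_relaxation_time(concentration, r2_baseline=1.0, relaxivity=50.0):
    """Return T2 (seconds) for a given hemozoin concentration (arbitrary units).

    R2 = R2_baseline + relaxivity * concentration, and T2 = 1 / R2.
    """
    r2 = r2_baseline + relaxivity * concentration
    return 1.0 / r2

def estimate_concentration(measured_t2, r2_baseline=1.0, relaxivity=50.0):
    """Invert the linear model: estimate concentration from a measured T2."""
    return (1.0 / measured_t2 - r2_baseline) / relaxivity

healthy_t2 = t2_relaxation_time(0.0)    # no hemozoin: slow relaxation (T2 = 1.0 s)
infected_t2 = t2_relaxation_time(0.02)  # hemozoin present: faster relaxation (T2 = 0.5 s)
```

In practice the instrument would be calibrated so that the relaxivity constant maps measured T2 directly onto parasite load, which is why no dyes or sample chemistry are needed.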
For this study, the researchers used a 0.5-tesla magnet, far less expensive and less powerful than the 2- or 3-tesla magnets typically required for MRI diagnostic imaging, which can cost up to $2 million. The current device prototype is small enough to sit on a table or lab bench, but the team is also working on a portable version that is about the size of a small electronic tablet.
After taking a blood sample and spinning it down to concentrate the red blood cells, the sample analysis takes less than a minute. Only about 10 microliters of blood is required, which can be obtained with a finger prick, making the procedure minimally invasive and much easier for health care workers than drawing blood intravenously.
"This system can be built at a very low cost, relative to the million-dollar MRI machines used in a hospital," Peng says. "Furthermore, since this technique does not rely on expensive labeling with chemical reagents, we are able to get each diagnostic test done at a cost of less than 10 cents."
Read more at Science Daily
Memory in silent neurons: How do unconnected neurons communicate?
When we learn, we associate a sensory experience either with other stimuli or with a certain type of behaviour. The neurons in the cerebral cortex that transmit the information modify the synaptic connections that they have with the other neurons. According to a generally-accepted model of synaptic plasticity, a neuron that communicates with others of the same kind emits an electrical impulse as well as activating its synapses transiently. This electrical pulse, combined with the signal received from other neurons, acts to stimulate the synapses. How is it that some neurons are caught up in the communication interplay even when they are barely connected? This is the crucial chicken-or-egg puzzle of synaptic plasticity that a team led by Anthony Holtmaat, professor in the Department of Basic Neurosciences in the Faculty of Medicine at UNIGE, is aiming to solve. The results of their research into memory in silent neurons can be found in the latest edition of Nature.
Learning and memory are governed by a mechanism of sustainable synaptic strengthening. When we embark on a learning experience, our brain associates a sensory experience either with other stimuli or with a certain form of behaviour. The neurons in the cerebral cortex responsible for ensuring the transmission of the relevant information, then modify the synaptic connections that they have with other neurons. This is the very arrangement that subsequently enables the brain to optimize the way information is processed when it is met again, as well as predicting its consequences.
Neuroscientists typically induce electrical pulses in the neurons artificially in order to perform research on synaptic mechanisms.
The neuroscientists from UNIGE, however, chose a different approach in their attempt to discover what happens naturally in the neurons when they receive sensory stimuli. They observed the cerebral cortices of mice whose whiskers were repeatedly stimulated mechanically without an artificially-induced electrical pulse. The rodents use their whiskers as a sensor for navigating and interacting; they are, therefore, a key element for perception in mice.
An extremely low signal is enough
By observing these natural stimuli, Professor Holtmaat's team was able to demonstrate that sensory stimulation alone can generate long-term synaptic strengthening without the neuron discharging either an induced or natural electrical pulse. As a result -- and contrary to what was previously believed -- the synapses are strengthened even when the neurons involved in a stimulus remain silent. In addition, if the sensory stimulation lasts over time, the synapses become so strong that the neuron is in turn activated and becomes fully engaged in the neural network. Once activated, the neuron can then further strengthen its synapses in a back-and-forth process. These findings could solve the brain's "What came first?" mystery, as they make it possible to examine all the synaptic pathways that contribute to memory, rather than focusing on whether it is the synapse or the neuron that activates the other.
Read more at Science Daily
Antarctic sea level rising faster than global rate
A new study of satellite data from the last 19 years reveals that fresh water from melting glaciers has caused the sea level around the coast of Antarctica to rise 2 cm more than the global average of 6 cm.
Researchers at the University of Southampton detected the rapid rise in sea-level by studying satellite scans of a region that spans more than a million square kilometres.
The melting of the Antarctic ice sheet and the thinning of floating ice shelves has contributed an excess of around 350 gigatonnes of freshwater to the surrounding ocean. This has led to a reduction in the salinity of the surrounding oceans that has been corroborated by ship-based studies of the water.
"Freshwater is less dense than salt water and so in regions where an excess of freshwater has accumulated we expect a localised rise in sea level," says Craig Rye, lead author of the paper that has been published in the journal Nature Geoscience.
In addition to satellite observations, the researchers also conducted computer simulations of the effect of melting glaciers on the Antarctic Ocean. The results of the simulation closely mirrored the real-world picture presented by the satellite data.
"The computer model supports our theory that the sea-level rise we see in our satellite data is almost entirely caused by freshening (a reduction in the salinity of the water) from the melting of the ice sheet and its fringing ice shelves," says Rye.
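The density argument in Rye's quote can be made concrete with a first-order "halosteric" estimate: a drop in salinity lowers seawater density, so the same mass of water stands slightly taller. The sketch below uses the standard haline contraction coefficient (about 7.6e-4 per g/kg); the salinity change and column depth are illustrative guesses chosen to reproduce a roughly 2 cm rise, not values reported in the study.

```python
# Toy halosteric calculation: freshening (lower salinity) makes seawater
# less dense, so a fixed mass of water occupies a taller column.
# BETA is the haline contraction coefficient; delta_salinity and
# column_depth_m below are illustrative, not figures from the paper.

BETA = 7.6e-4  # fractional density change per 1 g/kg change in salinity

def halosteric_rise(delta_salinity, column_depth_m, beta=BETA):
    """Approximate sea-level rise (m) from a salinity drop over a column.

    dh ~= beta * |dS| * H, the first-order expansion of seawater density.
    """
    return beta * abs(delta_salinity) * column_depth_m

# A freshening of ~0.013 g/kg over a 2000 m column yields roughly the
# ~2 cm excess rise reported for the Antarctic coast.
rise_m = halosteric_rise(-0.013, 2000.0)
```

The point of the sketch is only that a very small salinity change, integrated over a deep water column, is enough to produce a centimetre-scale signal that satellites can detect.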
"The interaction between air, sea and ice in these seas is central to the stability of the Antarctic Ice Sheet and global sea levels, as well as other environmental processes, such as the generation of Antarctic bottom water, which cools and ventilates much of the global ocean abyss."
The research was carried out in close collaboration with researchers at the National Oceanography Centre and the British Antarctic Survey.
From Science Daily
Aug 31, 2014
Why sibling stars look alike: Early, fast mixing in star-birth clouds
Early, fast, turbulent mixing of gas within giant molecular clouds -- the birthplaces of stars -- means all stars formed from a single cloud bear the same unique chemical 'tag' or 'DNA fingerprint,' write astrophysicists. Could such chemical tags help astronomers identify our own Sun's long-lost sibling stars?
Stars are made mostly of hydrogen and helium, but they also contain trace amounts of other elements, such as carbon, oxygen, iron, and even more exotic substances. By carefully measuring the wavelengths (colors) of light coming from a star, astronomers can determine how abundant each of these trace elements is. For any two stars at random, the abundances of their trace elements will slightly differ: one star may have a bit more iron, the other a bit more carbon, etc.
However, astronomers have known for more than a decade that any two stars within the same gravitationally bound star cluster always show the same abundances. "The pattern of abundances is like a DNA fingerprint, where all the members of a family share a common set of genes," said Mark Krumholz, associate professor of astronomy and astrophysics at University of California, Santa Cruz (UCSC).
Being able to measure this "fingerprint" is potentially very useful, because stellar families usually do not stay together. Most stars are born as members of a star cluster, but over time they drift apart and migrate across the galaxy. Their abundances, however, are set at birth. Thus, astronomers have long wondered if it might be possible to tell if two stars that are now on opposite sides of the galaxy were born billions of years ago from the same giant molecular cloud. In fact, they further wondered, might it be possible even to find our own Sun's long-lost siblings?
Just one big problem: "Although stars that are part of the same long-lived star cluster today are chemically identical, we had no good reason to think that such family resemblance would hold true of stars that were born together but then dispersed immediately," explained Krumholz. "The underlying problem was that we didn't really know why stars are chemically homogeneous." For example, in a cloud where stars formed rapidly, might the cloud not have had enough time to homogenize thoroughly, thus giving rise to stars born at the same time but not uniform in chemical composition? "Without a real understanding of the physical mechanism that produces uniformity, everything was at best a speculation," he added.
Surprising violence
So Krumholz and his graduate student Yi Feng turned to UCSC's Hyades supercomputer to run a fluid dynamics simulation. They simulated two streams of interstellar gas coming together to form a cloud that, over a few million years, collapsed under its own gravity to make a cluster of stars. "We added tracer dyes to the two streams in the simulations, which let us watch how the gas mixed together during this process," Krumholz recounted. They put red dye in one stream and blue dye in the other, but by the time the cloud started to collapse and form stars, everything was purple -- and the resulting stars were purple as well. "We found that, as the streams came together, they became extremely turbulent, and the turbulence very effectively mixed together the tracer dyes," he said.
"The simulation revealed exactly why stars that are born together end up having the same trace element abundances: as the cloud that forms them is assembled, it gets thoroughly mixed very fast," Krumholz said. "This was actually a surprise: I didn't expect the turbulence to be as violent as it was, and so I didn't expect the mixing to be as rapid or efficient. I thought we'd get some blue stars and some red stars, instead of getting all purple stars."
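The dye experiment described above can be caricatured with a toy mixing model: one stream carries tracer value 0.0 ("blue"), the other 1.0 ("red"), and repeated random pairwise blending, which is a crude stand-in for turbulent stirring rather than the authors' hydrodynamics code, drives every parcel toward the same intermediate "purple" value while the cloud's total tracer content stays fixed.

```python
import random

# Toy tracer-mixing illustration: 50 "blue" parcels (0.0) and 50 "red"
# parcels (1.0). Each step picks two parcels at random and blends them,
# a crude stand-in for turbulent stirring. The mean tracer value is
# conserved, while the spread between parcels collapses toward zero.

random.seed(42)  # fixed seed so the run is reproducible

def stir(cells, steps=20000):
    cells = list(cells)
    for _ in range(steps):
        i = random.randrange(len(cells))
        j = random.randrange(len(cells))
        avg = (cells[i] + cells[j]) / 2.0  # two parcels blend when they meet
        cells[i] = cells[j] = avg
    return cells

streams = [0.0] * 50 + [1.0] * 50      # two initially unmixed streams
mixed = stir(streams)
spread = max(mixed) - min(mixed)       # collapses toward 0: everything "purple"
```

Even this crude model shows the qualitative result Krumholz describes: well before any "star" could form from a subregion, every parcel has converged on essentially the same composition.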
Read more at Science Daily
Changing global diets is vital to reducing climate change
Healthier diets and reducing food waste are part of a combination of solutions needed to ensure food security and avoid dangerous climate change, say the team behind a new study.
A new study, published today in Nature Climate Change, suggests that -- if current trends continue -- food production alone will reach, if not exceed, the global targets for total greenhouse gas (GHG) emissions in 2050.
The study's authors say we should all think carefully about the food we choose and its environmental impact. A shift to healthier diets across the world is just one of a number of actions that need to be taken to avoid dangerous climate change and ensure there is enough food for all.
As populations rise and global tastes shift towards meat-heavy Western diets, increasing agricultural yields will not meet projected food demands of what is expected to be 9.6 billion people -- making it necessary to bring more land into cultivation.
This will come at a high price, warn the authors, as the deforestation will increase carbon emissions as well as biodiversity loss, and increased livestock production will raise methane levels. They argue that current food demand trends must change through reducing waste and encouraging balanced diets.
If we maintain 'business as usual', say the authors, then by 2050 cropland will have expanded by 42% and fertiliser use increased sharply by 45% over 2009 levels. A further tenth of the world's pristine tropical forests would disappear over the next 35 years.
The study shows that increased deforestation, fertiliser use and livestock methane emissions are likely to cause GHG emissions from food production to increase by almost 80%. That would make emissions from food production alone roughly equal to the total greenhouse gas emissions target for the entire global economy in 2050.
The study's authors write that halving food waste and managing demand for particularly environmentally damaging food products by changing global diets should be key aims that, if achieved, might mitigate some of the greenhouse gases causing climate change.
"There are basic laws of biophysics that we cannot evade," said lead researcher Bojana Bajzelj from the University of Cambridge's Department of Engineering, who authored the study with colleagues from Cambridge's departments of Geography and Plant Sciences as well as the University of Aberdeen's Institute of Biological and Environmental Sciences.
"The average efficiency of livestock converting plant feed to meat is less than 3%, and as we eat more meat, more arable cultivation is turned over to producing feedstock for animals that provide meat for humans. The losses at each stage are large, and as humans globally eat more and more meat, conversion from plants to food becomes less and less efficient, driving agricultural expansion and land cover conversion, and releasing more greenhouse gases. Agricultural practices are not necessarily at fault here -- but our choice of food is," said Bajzelj.
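The arithmetic behind Bajzelj's point is simple to sketch. The 3% conversion efficiency is the figure quoted above; the demand number is a hypothetical round value for illustration only.

```python
# Feed-to-meat conversion: the article quotes an average efficiency of
# under 3% for livestock converting plant feed into meat.
conversion_efficiency = 0.03   # kg of meat produced per kg of plant feed
meat_demand_kg = 1.0           # hypothetical: one kilogram of meat eaten

# Plant feed that must be grown to supply that kilogram of meat.
feed_required_kg = meat_demand_kg / conversion_efficiency
print(round(feed_required_kg, 1))
```

At that efficiency, each kilogram of meat ties up roughly 33 kg of plant feed, which is why rising meat consumption drives agricultural expansion far faster than rising plant consumption would.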
"It is imperative to find ways to achieve global food security without expanding crop or pastureland. Food production is a main driver of biodiversity loss and a large contributor to climate change and pollution, so our food choices matter."
The team analysed evidence such as land use, land suitability and agricultural biomass data to create a robust model that compares different scenarios for 2050, including scenarios based on maintaining current trends.
One scenario investigated by the team is on the supply side: the closing of 'yield gaps'. Gaps between crop yields achieved in 'best practice' farming and the actual average yields exist all over the world, but are widest in developing countries -- particularly in Sub-Saharan Africa. The researchers say that closing these gaps through sustainable intensification of farming should be actively pursued.
But even with the yield gaps closed, projected food demand will still require additional land -- so the impact on GHG emissions and biodiversity remains. Bajzelj points out that higher yields will also require more mineral fertiliser use and increased water demand for irrigation.
Food waste, another scenario analysed by the team, occurs at all stages in the food chain. In developing countries, poor storage and transportation cause waste; in the west, wasteful consumption is rife. "The latter is in many ways worse because the wasted food products have already undergone various transformations that require input of other resources, especially energy," said Bajzelj.
Yield gap closure alone still left greenhouse gas emissions just over 40% higher by 2050. Adding a halving of food waste brought that down to a small increase of 2%. When healthy diets were added as well, the model suggests that all three measures combined cut agricultural GHG levels almost in half relative to 2009, a drop of 48%.
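The cumulative scenario results can be laid out as multipliers of the 2009 agricultural GHG baseline (figures as reported in the article; each scenario adds one measure to the previous one):

```python
# Each scenario's 2050 agricultural GHG emissions, as a multiple of the
# 2009 baseline. Figures taken from the study as reported above.
baseline_2009 = 1.00
scenarios = {
    "business as usual":        1.80,  # almost 80% increase
    "+ yield gaps closed":      1.40,  # just over 40% increase
    "+ food waste halved":      1.02,  # 2% increase
    "+ average balanced diets": 0.52,  # 48% decrease
}
for name, multiplier in scenarios.items():
    change = (multiplier - baseline_2009) * 100
    print(f"{name}: {change:+.0f}% vs 2009")
```

Laid out this way, the study's argument is visible at a glance: supply-side fixes alone never get below the 2009 baseline; only the combination with demand-side measures (waste reduction and diet change) pushes emissions down.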
"Western diets are increasingly characterised by excessive consumption of food, including that of emission-intensive meat and dairy products. We tested a scenario where all countries were assumed to achieve an average balanced diet -- without excessive consumption of sugars, fats, and meat products. This significantly reduced the pressures on the environment even further," said the team.
The 'average' balanced diet used in the study is a relatively achievable goal for most. For example, the figures included two 85g portions of red meat and five eggs per week, as well as a portion of poultry a day.
Read more at Science Daily