Some 4.567 billion years ago, our solar system's planets spawned from an expansive disc of gas and dust rotating around the sun. While similar processes are witnessed in younger solar systems throughout the Milky Way, the formative stages of our own solar system were believed to have taken twice as long to occur. Now, new research led by the Centre for Star and Planet Formation at the Natural History Museum of Denmark, University of Copenhagen, suggests otherwise. Indeed, our solar system is not quite as special as once believed.
Using improved methods for analyzing uranium and lead isotopes, the current study of primitive meteorites has enabled researchers to date the formation of two very different types of materials, so-called calcium-aluminum-rich inclusions (or CAIs for short) and chondrules, found within the same meteorite. By doing so, the chronology, and therefore the overall understanding, of our solar system's development has been altered. The study has just been published in the scientific journal Science.
4.567 billion years -- this is how far back we must travel to experience our nascent solar system. The researchers at the University of Copenhagen's Centre for Star and Planet Formation took a closer look at the first three million years of the solar system's development by analysing primitive meteorites composed of a blend of our solar system's very oldest materials. In part, the study confirmed previous analyses demonstrating that CAIs were formed during a very short period of time. The new discovery is that the so-called chondrules were formed during the first three million years of the solar system's development as well. This stands in contrast with previous assumptions asserting that chondrules only started forming roughly two million years after CAIs.
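To make the uranium-lead clock concrete, here is a minimal sketch of a Pb-Pb age calculation (an illustration of the general method, not the study's own code): the ratio of radiogenic 207Pb to 206Pb grows with time in a known way, so measuring it pins down an age. The decay constants and the present-day 238U/235U ratio are standard values; the measured ratio below is chosen to give roughly the 4.567-billion-year age quoted above.

```python
# Sketch of Pb-Pb dating: the radiogenic 207Pb*/206Pb* ratio is a known,
# monotonic function of age, so a measured ratio can be inverted for time.
# Standard decay constants; the example ratio is illustrative only.
import math
from scipy.optimize import brentq

LAMBDA_235 = 9.8485e-10   # decay constant of 235U, per year
LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year
U238_U235 = 137.88        # present-day 238U/235U abundance ratio

def pb207_pb206(t_years):
    """Radiogenic 207Pb*/206Pb* ratio predicted for an age of t_years."""
    return (math.expm1(LAMBDA_235 * t_years)
            / math.expm1(LAMBDA_238 * t_years)) / U238_U235

def pb_pb_age(measured_ratio):
    """Numerically invert the ratio-vs-age curve to recover an age."""
    return brentq(lambda t: pb207_pb206(t) - measured_ratio, 1e6, 6e9)

print(f"207Pb*/206Pb* = 0.625 -> age = {pb_pb_age(0.625) / 1e9:.3f} billion years")
```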
Painting a new picture of the Solar System
"By using this process to date the formation of these two very different types of materials found in the same meteorite, we are not only able to alter the chronology of our solar system's historical development, we are able to paint a new picture of our solar system's development, which is very much like the picture that other researchers have observed in other planetary systems," says James Connelly of the Centre for Star and Planet Formation.
We aren't that special...
Showing that chondrules are as old as CAIs addresses a long-standing question of why chondrule formation should be delayed by up to 2 million years after CAIs. The answer -- it is not.
Read more at Science Daily
Nov 2, 2012
Were Dinosaurs Destined to Be Big? Testing Cope's Rule
In the evolutionary long run, small critters tend to evolve into bigger beasts -- at least according to the idea attributed to paleontologist Edward Cope, now known as Cope's Rule. Using the latest advanced statistical modeling methods, a new test of this rule as it applies to dinosaurs shows that Cope was right -- sometimes.
"For a long time, dinosaurs were thought to be the example of Cope's Rule," says Gene Hunt, curator in the Department of Paleobiology at the National Museum of Natural History (NMNH) in Washington, D.C. Other groups, particularly mammals, also provide plenty of classic examples of the rule, Hunt says.
To see if Cope's rule really applies to dinosaurs, Hunt and colleagues Richard FitzJohn of the University of British Columbia and Matthew Carrano of the NMNH used dinosaur thigh bones (aka femurs) as proxies for animal size. They then used that femur data in their statistical model to look for two things: directional trends in size over time and whether there were any detectable upper limits for body size.
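To give a flavor of what a directional-trend test can look like, here is a toy sketch (not the authors' actual phylogenetic model, and the numbers are made up): treat log body-size changes along ancestor-descendant pairs as draws from a normal distribution, and ask via a likelihood-ratio test whether their mean drift differs from zero.

```python
# Toy Cope's Rule test: does log body size drift upward, or is it a
# driftless random walk? All data here are simulated, not dinosaur femurs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical changes in log femur length along ancestor-descendant pairs.
changes = rng.normal(loc=0.05, scale=0.2, size=80)

def loglik(deltas, mu):
    """Gaussian log-likelihood with the scale set to its MLE given mu."""
    sigma = np.sqrt(np.mean((deltas - mu) ** 2))
    return stats.norm.logpdf(deltas, loc=mu, scale=sigma).sum()

ll_drift = loglik(changes, changes.mean())  # Cope's Rule: directional drift
ll_null = loglik(changes, 0.0)              # null: unbiased random walk
lr = 2 * (ll_drift - ll_null)               # likelihood-ratio statistic
p_value = stats.chi2.sf(lr, df=1)           # one extra parameter (the drift)
print(f"estimated drift = {changes.mean():+.3f}, LR = {lr:.2f}, p = {p_value:.4f}")
```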
"What we did then was explore how constant a rule is this Cope's Rule trend within dinosaurs," said Hunt. They looked across the "family tree" of dinosaurs and found that some groups, or clades, of dinosaurs do indeed trend larger over time, following Cope's Rule. Ceratopsids and hadrosaurs, for instance, show more increases in size than decreases over time, according to Hunt. Although birds evolved from theropod dinosaurs, the team excluded them from the study because of the evolutionary pressure birds faced to lighten up and get smaller so they could fly better.
As for the upper limits to size, the results were mixed. The four-legged sauropods (the long-necked, small-headed herbivores) and ornithopod (e.g., iguanodonts and hadrosaurs) clades showed no indication of upper limits to how large they could evolve. And indeed, these groups contain the largest land animals that ever lived.
Theropods, which include the famous Tyrannosaurus rex, on the other hand, did show what appears to be an upper limit on body size. This may not be particularly surprising, says Hunt, because theropods were bipedal, and there are physical limits to how massive you can get while still being able to move around on two legs.
Hunt, FitzJohn, and Carrano will be presenting the results of their study on Nov. 4, at the annual meeting of The Geological Society of America in Charlotte, North Carolina, USA.
Read more at Science Daily
"For a long time, dinosaurs were thought to be the example of Cope's Rule," says Gene Hunt, curator in the Department of Paleobiology at the National Museum of Natural History (NMNH) in Washington, D.C. Other groups, particularly mammals, also provide plenty of classic examples of the rule, Hunt says.
To see if Cope's rule really applies to dinosaurs, Hunt and colleagues Richard FitzJohn of the University of British Columbia and Matthew Carrano of the NMNH used dinosaur thigh bones (aka femurs) as proxies for animal size. They then used that femur data in their statistical model to look for two things: directional trends in size over time and whether there were any detectable upper limits for body size.
"What we did then was explore how constant a rule is this Cope's Rule trend within dinosaurs," said Hunt. They looked across the "family tree" of dinosaurs and found that some groups, or clades, of dinosaurs do indeed trend larger over time, following Cope's Rule. Ceratopsids and hadrosaurs, for instance, show more increases in size than decreases over time, according to Hunt. Although birds evolved from theropod dinosaurs, the team excluded them from the study because of the evolutionary pressure birds faced to lighten up and get smaller so they could fly better.
As for the upper limits to size, the results were sometimes yes, sometimes no. The four-legged sauropods (i.e., long-necked, small-headed herbivores) and ornithopod (i.e., iguanodons, ceratopsids) clades showed no indication of upper limits to how large they could evolve. And indeed, these groups contain the largest land animals that ever lived.
Theropods, which include the famous Tyrannosaurus rex, on the other hand, did show what appears to be an upper limit on body size. This may not be particularly surprising, says Hunt, because theropods were bipedal, and there are physical limits to how massive you can get while still being able to move around on two legs.
Hunt, FitzJohn, and Carrano will be presenting the results of their study on Nov. 4, at the annual meeting of The Geological Society of America in Charlotte, North Carolina, USA.
Read more at Science Daily
Talking Elephant Learns 5-Word Vocabulary
Annyong! The Korean word for “hello” is part of 22-year-old Asian elephant Koshik’s five-word vocabulary. He can also utter the words for sit down (anja), lie down (nuo), good (choah) and no (aniya), according to a study published today in the journal Current Biology.
The frequency patterns of Koshik’s human-like mutterings were also more similar to his trainers’ speech than to the calls other Asian elephants make, the scientists claim.
“This is remarkable considering the huge size, the long vocal tract, and other anatomical differences between an elephant and a human,” lead author and animal behaviorist Angela Stoeger of the University of Vienna said in a press release.
Koshik may have learned to manipulate formants, or the frequency components humans use to discern sounds, in his “speech” by putting his trunk in his mouth, which “is a wholly novel method of vocal production,” the authors wrote in the study. By doing that, he’s able to make higher-pitched sounds than he would normally, according to the study.
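For readers curious how formants are actually measured, the standard tool is linear-predictive coding (LPC): fit an all-pole filter to a short frame of audio, then read resonance frequencies off the filter's pole angles. The sketch below runs the method on a synthetic two-tone stand-in for a vowel; it illustrates the technique only and is not the study's analysis code.

```python
# LPC formant estimation sketch: fit an all-pole model to a frame and
# convert pole angles to frequencies. The "vowel" here is synthetic.
import numpy as np
from scipy.linalg import solve_toeplitz

sr = 16000  # sample rate in Hz
t = np.arange(2048) / sr
# Hypothetical signal with energy near two formant-like resonances.
signal = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
signal *= np.hamming(len(signal))

order = 8
# Autocorrelation method: solve the Yule-Walker equations for LPC coefficients.
r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

# Poles of the all-pole filter; keep those in the upper half-plane.
poles = np.roots(np.concatenate(([1.0], -a)))
poles = poles[np.imag(poles) > 0]
formants = sorted(np.angle(poles) * sr / (2 * np.pi))
print("estimated resonance frequencies (Hz):", [int(round(f)) for f in formants])
```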
Trainers taught Koshik, who was born in captivity, to respond to these five words, but he started imitating humans on his own, Stoeger told Wired in an email. She and her team think he may have used vocal mimicry to bond with his human companions, especially during the seven years he spent alone at Everland Zoo in South Korea.
Trainers at Everland, where Koshik lives, initially told the researchers he could say six words. Stoeger and her team put the elephant’s lexicon to the test by playing his “speech” to human Koreans and asking them to write down what they heard. The researchers didn’t tell them what words Koshik was imitating. After analyzing the data, they concluded the big-eared beast had only picked up five words. The study didn’t say which word was axed from Koshik’s official verbal repertoire.
Mimicry of human speech among other mammals is not very common, but it’s not unheard of. The researchers mention Hoover the seal, who was raised by a fisherman and could speak a few words of English, a beluga who could say its name, and another male Asian elephant who could mutter some Russian and Kazakh.
Koshik may understand what the words he's "speaking" mean, but he probably doesn't intend them as commands or feedback, wrote Tecumseh Fitch, one of the study's authors, in an e-mail to Wired. "Or at least when he says 'lie down,' he doesn't seem to get upset if you don't."
Read more at Wired Science
Who Didn't Have Sex with Neanderthals?
The only modern humans whose ancestors did not interbreed with Neanderthals are apparently sub-Saharan Africans, researchers say.
New findings suggest modern North Africans carry genetic traces from Neanderthals, modern humanity's closest known extinct relatives.
Although modern humans are the only surviving members of the human lineage, others once roamed the Earth, including the Neanderthals. Genetic analysis of these extinct lineages' fossils has revealed they once interbred with our ancestors, with recent estimates suggesting that Neanderthal DNA made up 1 percent to 4 percent of modern Eurasian genomes. Although this sex apparently only rarely produced offspring, this mixing was enough to endow some people with the robust immune systems they enjoy today.
The Neanderthal genome revealed that people outside Africa share more genetic mutations with Neanderthals than Africans do. One possible explanation is that modern humans interbred with Neanderthals mostly after the modern lineage began appearing outside Africa at least 100,000 years ago. Another, more complex scenario is that an African group ancestral to both Neanderthals and certain modern human populations genetically split from other Africans beginning about 230,000 years ago. This group then stayed genetically distinct until it eventually left Africa.
To shed light on why Neanderthals appear most closely related to people outside Africa, scientists analyzed North Africans. Some researchers had suggested these groups were the sources of the out-of-Africa migrations that ultimately spread humans around the globe.
The researchers focused on 780,000 genetic variants in 125 people representing seven different North African locations. They found North Africans had dramatically more genetic variants linked with Neanderthals than sub-Saharan Africans did. The level of genetic variants that North Africans share with Neanderthals is on par with that seen in modern Eurasians.
The scientists also found this Neanderthal genetic signal was higher in North African populations whose ancestors had relatively little recent interbreeding with modern Near Eastern or European peoples. That suggests the signal came directly from ancient mixing with Neanderthals, and not recent interbreeding with other modern humans whose ancestors might have interbred with Neanderthals.
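One standard statistic for this kind of excess sharing is the ABBA-BABA D-statistic, sketched below on made-up allele frequencies (the paper's exact estimator may differ): D measures whether a test population shares derived alleles with Neanderthals more often than a reference population does, with D > 0 indicating excess sharing.

```python
# D-statistic sketch on fabricated data. P1 = reference population
# (e.g., sub-Saharan Africans), P2 = test population (e.g., North
# Africans), P3 = Neanderthal; the outgroup is taken as ancestral.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 100_000
p1 = rng.beta(0.5, 5, n_sites)                               # derived-allele freqs, P1
p2 = np.clip(p1 + rng.normal(0.002, 0.01, n_sites), 0, 1)    # P2, slight excess sharing
p3 = rng.beta(0.5, 5, n_sites)                               # Neanderthal

# ABBA: P2 shares the derived allele with P3; BABA: P1 does.
abba = (1 - p1) * p2 * p3
baba = p1 * (1 - p2) * p3
d = (abba.sum() - baba.sum()) / (abba.sum() + baba.sum())
print(f"D = {d:+.4f}  (D > 0 suggests excess P2-Neanderthal sharing)")
```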
Read more at Discovery News
Nov 1, 2012
'Island of Blue Dolphins' Cave Possibly Found
Archaeologists might have finally found the cave of the Lone Woman of San Nicolas Island, whose solitary 18-year stay on a tiny island off the California coast inspired the children's classic "Island of the Blue Dolphins."
"The cave had been completely buried under several meters of sand. It is quite large and would have made a very comfortable home, especially in inclement weather," Navy archaeologist Steven Schwartz said at the California Islands Symposium last week in Ventura.
One of the most famous people associated with the Channel Islands, the Lone Woman belonged to the Nicoleño, a Native American tribe that lived on the remote, wind-blasted island of San Nicolas off the Southern California coast.
The tribe was decimated in 1814 by sea otter hunters from Alaska. By 1835, fewer than a dozen Nicoleños lived on the island. At that time, the Santa Barbara Mission arranged a rescue operation that brought all the Nicoleños but the Lone Woman to the mainland.
The most likely explanation for her abandonment is that a panicked crew, caught by a storm, turned the rescue schooner, named Peor es Nada ("Better Than Nothing"), toward the mainland without much of a head count.
The woman lived alone on the island until a fisherman and sea otter hunter found her in 1853 and brought her to the Santa Barbara Mission.
"She was found in a brush enclosure on the west end of the island, but she is believed to have lived in a cave during most of her 18 years of isolation," Schwartz, who has been investigating the island for more than 20 years, said.
Since there is no known habitation cave on the tiny island -- which is now a Navy base -- the archaeologist concluded that the cavern must have collapsed and been buried.
The search took a new twist recently, when Schwartz obtained a unique document: a government survey map of the island, dated 1879, which pointed to an "Indian Cave" on the southwest coast.
Preliminary excavation revealed a cave which is at least 75 feet long and 10 feet high.
According to Schwartz, ground-penetrating radar might show a layer of relics from the Lone Woman's era -- "perhaps even the markings she was said to have made on the walls," he told the Los Angeles Times.
Further evidence of a Robinson Crusoe-style existence emerged two years ago from a steep cliff on the north coast of the island, in the form of two redwood boxes.
They contained more than 200 objects, including shells, bone tools, harpoon points, bone fishhooks and even a smoking pipe.
It may never be known just who left the boxes, but "it's at least a reasonable hypothesis" that it was the Lone Woman, University of Oregon archaeologist Jon M. Erlandson told the Los Angeles Times.
Although the Lone Woman managed to live alone for 18 years on the wild, tiny island, she did not last long after her return to "civilization" at an estimated age of 50.
She died of dysentery only seven weeks after she arrived at the Santa Barbara Mission, unable to communicate but utterly fascinated by the new life she was discovering.
Read more at Discovery News
"The cave had been completely buried under several meters of sand. It is quite large and would have made a very comfortable home, especially in inclement weather," Navy archaeologist Steven Schwartz said at the California Islands Symposium last week in Ventura.
One of the most famous people associated with the Channel Islands, the Lone Woman belonged to the Nicoleno, a Native American tribe who lived on the remote wind-blasted island of San Nicolas off the Southern California coast.
The tribe was decimated in 1814 by sea otter hunters from Alaska. By 1835, less than a dozen Nicolenos lived on the island. At that time, the Santa Barbara Mission arranged a rescue operation which brought to the mainland all Nicoleños but the Lone Woman.
The most likely explanation for the abandonment is that a panicked crew, caught by a storm, turned the rescue schooner, named Peor es Nada ("Better Than nothing"), toward the mainland without much head counting.
The woman lived alone on the island until a fisherman and sea otter hunter found her in 1853 and brought her to the Santa Barbara Mission.
"She was found in a brush enclosure on the west end of the island, but she is believed to have lived in a cave during most of her 18 years of isolation," Schwartz, who has been investigating the island for more than 20 years, said.
Since there is no known habitation cave on the tiny island -- which is now a Navy base -- the archaeologist concluded that the cavern must have collapsed and been buried.
The search took a new twist recently, when Schwartz obtained a unique document: a government survey map of the island dated from 1879 which pointed to an "Indian Cave" on the southwest coast.
Preliminary excavation revealed a cave which is at least 75 feet long and 10 feet high.
According to Schwartz, ground-penetrating radar might show a layer of relics from the Lone Woman's era -- "perhaps even the markings she was said to have made on the walls," he told the Los Angeles Times.
Further evidence for some sort of Robinson Crusoe struggling existence emerged two years ago from a steep cliff on the north coast of the island in the form of two redwood boxes.
They contained more than 200 objects, including shells, bone tools, harpoon points, bone fishhooks and even a smoking pipe.
It may never be known just who left the boxes, but "it's at least a reasonable hypothesis" that it was the Lone Woman, University of Oregon archaeologist Jon M. Erlandson told the Los Angeles Times.
Although the Lone Woman managed to live alone for 18 years on the wild, tiny island, she did not last long when she came back to "civilization" at a possible age of 50.
She died from dysentery only seven weeks after she arrived to the Santa Barbara Mission, unable to communicate, but totally fascinated by the new life she was discovering.
Read more at Discovery News
Sistine Chapel at 500 Years: Threatened by Tourism
Michelangelo's Sistine Chapel frescoes are threatened by the effects of too many visitors, experts warned Wednesday, as the masterfully painted ceiling celebrated its 500th anniversary.
"The anthropic pressure with dust, the humidity of bodies, carbon dioxide produced by perspiration can cause discomfort for the visitors and, in the long run, damage to the paintings," Antonio Paolucci, director of the Vatican Museums, wrote in the Vatican newspaper L'Osservatore Romano.
"We might limit access, putting a cap on the number of visitors. We will do this, if the pressure from tourism were to increase beyond a reasonable level and if we were to fail in resolving the problem efficiently," Paolucci said.
Some 20,000 people a day -- 5 million a year -- make the Sistine Chapel the most visited room in the world.
"It has a fatal attraction; it is an object of desire, that essential point of arrival for international museum people, for migrants of the so-called cultural tourism," Paolucci said.
Many visitors just stare, transfixed, at one of the most notable artworks ever created. Indeed, Pope Julius II and 17 cardinals reacted the same way when the vaulted ceiling was revealed in all its blue glory on the Eve of All Saints, 31 October 1512, during a vesper Mass.
But others are "drunken tourist herds" disrespectful of the unique setting they are visiting, according to leading literary critic Pietro Citati. The "herds," he said, might soon undo the efforts made during the 14-year restoration project of the 1990s.
"I believe it will soon be necessary to restore the Sistine Chapel again; and this will go on and on, as long as the vault is filled by heavy human breath," Citati wrote in the daily Corriere della Sera.
At a vespers service which commemorated the unveiling of the frescoed ceiling exactly 500 years ago, Pope Benedict XVI appeared to agree with Citati.
"When contemplated in prayer, the Sistine Chapel is even more beautiful, more authentic. It reveals itself in all its richness," he said.
A 135-by-44-foot rectangle ringed by 12 windows, the fifteenth-century chapel takes its name from Pope Sixtus IV, who commissioned it in 1475-83 to give the Vatican a place for solemn ceremonies.
Indeed, the Pope engaged notable artists such as Botticelli, Ghirlandaio, and Perugino to decorate its walls.
However, the ceiling featured only a blue sky dotted by golden stars. Pope Julius II decided to change the rather dull scene and commissioned the work from a reluctant Michelangelo.
The artist worked on the colorful ceiling frescoes between 1508 and 1512, producing over 300 figures to tell the story of the book of Genesis.
The scenes, including the iconic image of the Creation of Adam in which the fingers of man and God are just inches apart, "radically changed the art world," Paolucci said.
"The ceiling was to become a beacon destined to light the styles of many future generations of artists," he added.
Read more at Discovery News
"The anthropic pressure with dust, the humidity of bodies, carbon dioxide produced by perspiration can cause discomfort for the visitors and, in the long run, damage to the paintings," Antonio Paolucci, director of the Vatican Museums, wrote in the Vatican newspaper L'Osservatore Romano.
"We might limit access, putting a cap on the number of visitors. We will do this, if the pressure from tourism were to increase beyond a reasonable level and if we were to fail in resolving the problem efficiently," Paolucci said.
Some 20,000 people a day –- 5 million a year –- make the Sistine Chapel the most visited room in the world.
"It has a fatal attraction; it is an object of desire, that essential point of arrival for international museum people, for migrants of the so-called cultural tourism," Paolucci said.
Many visitors just stare, tranfixed, at one of the most notable artwork ever created. Indeed, Pope Julius II and 17 cardinals reacted in the same way when the vaulted ceiling was revealed in all its blue glory on the Eve of All Saints, 31 October, 1512, during a vesper Mass.
But others are "drunken tourist herds" disrespectful of the unique setting they are visiting, according to leading literary critic Pietro Citati. The "herds" might soon verify the efforts made during a 14-year-long restoration project in the 1990s, he said.
"I believe it will soon be necessary to restore the Sistine Chapel again; and this will go on and on, as long as the vault is filled by heavy human breath," Citati wrote in the daily Corriere della Sera.
At a vespers service which commemorated the unveiling of the frescoed ceiling exactly 500 years ago, Pope Benedict XVI appeared to agree with Citati.
"When contemplated in prayer, the Sistine Chapel is even more beautiful, more authentic. It reveals itself in all its richness," he said.
A 135 by 44 foot rectangle ringed by 12 windows, the fifteenth-century chapel takes its name from Pope Sixtus IV who commissioned it in 1475-83, to give the Vatican a place for solemn ceremonies.
Indeed, the Pope engaged notable artists such as Botticelli, Ghirlandaio, and Perugino to decorate its walls.
However, the ceiling featured only a blue sky dotted by golden stars. Pope Julius II decided to change the rather dull scene and commissioned the work to a reluctant Michelangelo.
The artist worked on the colorful ceiling frescoes between 1508 and 1512, producing over 300 figures to tell the story of the book of Genesis.
The scenes, including the iconic image of the Creation of Adam in which the fingers of man and God are just inches apart, "radically changed the art world," Paolucci said.
"The ceiling was to become a beacon destined to light the styles of many future generations of artists," he added.
Read more at Discovery News
Lab Gamifies Einstein's Relativity
Relativity is a hard concept to grasp. So to make it easier to understand, researchers from the Massachusetts Institute of Technology's Game Lab decided to turn it into a game. They developed "A Slower Speed of Light." The game itself is really simple: run around a landscape and collect multicolored orbs until you acquire 100 of them.
But it's the world of the game that gets interesting: as you collect more orbs, the speed of light slows down. The player sees more extreme effects of this as she walks around.
What are some of these effects? For one thing, you'd be able to see beyond visible light into the infrared and ultraviolet spectrum. This is because as an observer moves forward, the light waves she sees coming toward her from other objects get compressed -- they get shorter. Anything producing infrared light will become visible. Eventually it would be possible to see radio waves.
Meanwhile, the light waves from any objects she is passing by will stretch out, making the objects look redder. As she looks behind her, the visible light will all eventually move to the infrared. Eventually, even gamma and X-rays would become visible. The phenomenon is known as the Doppler effect.
Another consequence of relativity is that light from objects in front of you -- and to the sides -- becomes concentrated. It's called relativistic aberration. Objects on either side start to enter the field of view in front, and the world starts to look like it does through a fish-eye lens. It also means anything in front of you looks brighter. Move backwards, and the world seems to go darker as the light waves you can see come from a narrower and narrower field.
The game also adds in time dilation and the contraction of lengths in the direction of motion. As one approaches the speed of light, time slows down -- the observer's clock moves more slowly than a stationary one. This is the source of the "twin paradox," in which one twin who travels near the speed of light for years ages more slowly than her sister on Earth, though the slowly aging twin doesn't notice until she returns to her now-older sister. (It's also a common plot device in science fiction novels, notably Joe Haldeman's The Forever War.)
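All three effects follow from a handful of textbook formulas. The sketch below evaluates them for a few speeds; it is standard special relativity, not code from the MIT Game Lab project.

```python
# Relativistic Doppler shift, aberration, and time dilation for a given
# speed (as a fraction of c). Standard formulas, illustrative values.
import math

def relativistic_effects(beta, wavelength_nm=650.0, angle_deg=30.0):
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # Lorentz factor (time dilation)
    # Doppler: light from straight ahead is blueshifted by this factor.
    doppler = math.sqrt((1 + beta) / (1 - beta))
    shifted = wavelength_nm / doppler
    # Aberration: a source at angle_deg appears squeezed toward the front.
    theta = math.radians(angle_deg)
    apparent = math.degrees(math.acos((math.cos(theta) + beta)
                                      / (1 + beta * math.cos(theta))))
    return gamma, shifted, apparent

for beta in (0.5, 0.9, 0.99):
    gamma, shifted, apparent = relativistic_effects(beta)
    print(f"v = {beta:.2f}c: clocks run {gamma:.2f}x slower, "
          f"650 nm red light ahead looks like {shifted:.0f} nm, "
          f"a source at 30 deg appears at {apparent:.1f} deg")
```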
Read more at Discovery News
Curiosity Finds Some Aloha Spirit in Mars Soil
A fascinating thing to come from NASA's Mars Science Laboratory mission since it landed inside Gale Crater on Aug. 5 is how familiar this region of Mars looks. The mountains look like they belong in Arizona, and the surrounding plain appears to have once been the bed of a flowing river.
And now, mission managers have announced that even the mineralogy of soil samples bears a striking resemblance to the volcanic soil you'd find in one particular location on Earth.
Over the past few weeks, Curiosity has been scooping samples of soil from a sandy patch in a location known as "Rocknest." The rover's scooper -- called the Collection and Handling for In-Situ Martian Rock Analysis (CHIMRA) -- has been undergoing the long process of giving itself a Martian 'dust bath' (collecting a sample, shaking and then dumping the soil to remove any Earthly contaminants) and most recently deposited a small sample into one of its onboard laboratories.
The Chemistry and Mineralogy (CheMin) instrument was then used to carry out X-ray diffraction on the sample -- the first time X-ray diffraction had ever been performed on another planet.
The technique works by firing a beam of X-rays through a sample that has been previously sieved to remove any large lumps. The largest grain allowed into the CheMin instrument is no bigger than twice the width of a human hair -- 150 micrometres.
As the X-rays blast through the grains of Mars soil, they are scattered, or diffracted, at varying angles depending on the minerals present. The technique can also detect the alignment of atoms inside the particles, giving scientists an idea as to whether crystalline structures are present.
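The relation underlying the measurement is Bragg's law, n*lambda = 2*d*sin(theta): each mineral's set of lattice spacings d produces diffraction peaks at characteristic angles. The sketch below inverts that relation for a few illustrative peak positions; the angles are hypothetical, and the wavelength assumes CheMin's reported cobalt X-ray source.

```python
# Bragg's law sketch: convert diffraction peak angles into lattice
# d-spacings. Peak positions are made up; they are not Curiosity data.
import math

WAVELENGTH = 1.7902  # Co K-alpha X-ray wavelength in angstroms (assumed source)

def d_spacing(two_theta_deg, order=1):
    """Lattice spacing (angstroms) for a diffraction peak at 2-theta degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * WAVELENGTH / (2.0 * math.sin(theta))

for peak in (24.5, 32.0, 41.3):  # hypothetical peak positions
    print(f"peak at 2-theta = {peak:5.1f} deg -> d = {d_spacing(peak):.3f} angstroms")
```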
"We had many previous inferences and discussions about the mineralogy of Martian soil," said David Blake of NASA Ames Research Center in Moffett Field, Calif., the principal investigator for CheMin, during Tuesday's press briefing. "Our quantitative results provide refined and in some cases new identifications of the minerals in this first X-ray diffraction analysis on Mars."
In the diffraction "fingerprint" from the X-ray analysis (shown top), the scientists detected feldspar, olivine and pyroxene -- all of which are basaltic minerals that originate from volcanic activity. These types of minerals are typically found on the Hawaiian islands.
"So far, the materials Curiosity has analyzed are consistent with our initial ideas of the deposits in Gale Crater, recording a transition through time from a wet to dry environment," added David Bish of Indiana University in Bloomington, co-investigator on the CheMin experiment. "The ancient rocks, such as the conglomerates, suggest flowing water, while the minerals in the younger soil are consistent with limited interaction with water."
Read more at Discovery News
Vesta Peppered with Carbon from 'Dark' Asteroids
The bright surface of Vesta -- an asteroid so huge that researchers consider it a dwarf planet or protoplanet -- is peppered with carbon materials that are likely from "dark" asteroids that gently hit the surface, according to a new study.
It's the first time researchers have found such extensive evidence of this type of asteroid material across a large body's surface.
The study seeks to explain a curious pattern of materials that researchers saw in observations from the Dawn spacecraft, which orbited Vesta between July 2011 and September 2012.
"The earliest images we had of the surface -- shortly after going into orbit -- were sometimes spectacular examples of very bright and very dark material on the surface," said researcher Tom McCord of the Bear Fight Institute, a science research facility in Washington state. McCord is the lead author of a study reporting the findings that will be published in the Nov. 1 issue of the journal Nature.
Researchers looked at three scenarios -- that the dark stuff on Vesta was volcanic basalts, that it came from dark asteroids made up of carbon and primitive organic materials, or that it was "shock-melted and darkened" material melted on the surface from the heat of asteroid impacts, McCord said.
The light spectrum reflected from the materials gives a strong indication that the dark stuff came from asteroids, McCord said. The scientists found lots of hydrogen and hydroxyl in the materials, which tend to be present in carbonaceous asteroids.
"All of that is consistent, but it doesn't (definitively) prove carbonaceous chondrite material," he said. "There are pieces of material, and there is no evidence of any other source that we can think of, at least."
A Pioneering Find
At 325 miles (523 km) in diameter, Vesta is big enough to have experienced some of the stages of planetary evolution. For example, when Vesta formed, it melted and heavier materials sank toward its center, much as Earth's dense core formed. By contrast, most asteroids are loosely held collections of rubble.
An asteroid slamming into Earth's moon typically has most of its material stripped away as it crashes into the surface. But Vesta's weaker gravity, and the lower relative velocity of the asteroids hitting it, make impacts gentler.
The dark asteroid materials we see scattered on Vesta's bright basalt surface could have implications for how life got started on the Earth. McCord cited a long-standing theory that the Earth's water and organic material could have come from asteroids or comets elsewhere in the solar system.
"We have, apparently, a dramatic example of the surface of an object being contaminated by material from other objects," McCord said of Vesta. "It forces one to (suppose) most objects are contaminated this way, and this is the way the Earth got its water and organic material. It not only has implications for the surface of Vesta, but for most other airless inter-solar system objects."
No Space Weathering Found
In a separate paper published in the same issue of Nature, researchers examined why "space weathering" from solar and cosmic radiation, as well as micrometeoroid impacts, is not seen on the surface of Vesta. McCord was a co-author of the study, which was led by Brown University's Carle Pieters.
Read more at Discovery News
Oct 31, 2012
Stars Ancient and Modern?
A colourful new view of the globular star cluster NGC 6362 was captured by the Wide Field Imager attached to the MPG/ESO 2.2-metre telescope at the European Southern Observatory's La Silla Observatory in Chile. This new picture, along with a new image of the central region from the NASA/ESA Hubble Space Telescope, provides the best view of this little-known cluster ever obtained. Globular clusters are mainly composed of tens of thousands of very ancient stars, but they also contain some stars that look suspiciously young.
Globular star clusters are among the oldest objects in the Universe, and NGC 6362 cannot hide its age in this picture. The many yellowish stars in the cluster have already run through much of their lives and become red giant stars. But globular clusters are not static relics from the past -- some curious stellar activities are still going on in these dense star cities.
For instance, NGC 6362 is home to many blue stragglers -- old stars that really do succeed in passing for a younger age. All of the stars in a globular cluster formed from the same material at roughly the same time (typically, about 10 billion years ago for most globulars). Yet blue stragglers are bluer and more luminous -- and hence more massive -- than they should be after ten billion years of stellar evolution. Blue stars are hot and consume their fuel quickly, so if these stars had formed about ten billion years ago, then they should have fizzled out long ago. How did they survive?
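A back-of-the-envelope scaling makes the puzzle concrete: main-sequence lifetime falls steeply with stellar mass, roughly as 10 billion years times (M/Msun)^-2.5 (a common approximation; the exponent varies with mass range), so a star massive and blue enough to stand out should have died long before the cluster reached ten billion years. A quick check:

```python
# Rough main-sequence lifetime scaling, t ~ 10 Gyr * (M/Msun)**-2.5.
# A standard approximation used for intuition, not a precise stellar model.
def ms_lifetime_gyr(mass_msun):
    return 10.0 * mass_msun ** -2.5

for m in (0.8, 1.0, 1.5):
    print(f"{m:.1f} Msun star: ~{ms_lifetime_gyr(m):.1f} Gyr on the main sequence")
# A 1.5-Msun star burns out in ~3.6 Gyr, far less than a ~10-Gyr cluster
# age, so a blue straggler that massive cannot simply have formed with
# the cluster and survived unchanged.
```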
Astronomers are keen to understand the secret of the youthful appearance of blue stragglers. Currently, there are two main theories: stars colliding and merging, and a transfer of material between two companion stars. The basic idea behind both of these options is that the stars were not born as big as we see them today, but that they received an injection of extra material at some point during their lifetimes and this then gave them a new lease of life.
Although less well known than some brighter globular clusters, NGC 6362 holds much that is of interest to astronomers and has been well studied over the years. It was selected as one of the 160 stellar fields for the Pre-FLAMES Survey -- a preliminary survey conducted between 1999 and 2002 using the 2.2-metre telescope at La Silla to find suitable stars for follow-up observations with the VLT's spectroscopic instrument FLAMES. The picture here comes from data collected as part of this survey.
The new image shows the entire cluster against the rich carpet of background stars in the Milky Way. The central parts of NGC 6362 have also been studied in detail by the NASA/ESA Hubble Space Telescope. The Hubble view shows a much smaller area of sky in much greater detail. The two views -- one wide-angle and one zoomed in -- complement each other perfectly.
Read more at Science Daily
Exhaustive Family Tree for Birds Shows Recent, Rapid Diversification
A Yale-led scientific team has produced the most comprehensive family tree for birds to date, connecting all living bird species -- nearly 10,000 in total -- and revealing surprising new details about their evolutionary history and its geographic context.
Analysis of the family tree shows when and where birds diversified -- and that birds' diversification rate has increased over the last 50 million years, challenging the conventional wisdom of biodiversity experts.
"It's the first time that we have -- for such a large group of species and with such a high degree of confidence -- the full global picture of diversification in time and space," said biologist Walter Jetz of Yale, lead author of the team's research paper, published Oct. 31 online in the journal Nature.
He continued: "The research highlights how heterogeneously fast diversifying species groups are distributed throughout the family tree and over geographic space. Many parts of the globe have seen a variety of species groups diversify rapidly and recently. All this leads to a diversification rate in birds that has been increasing over the past 50 million years."
The researchers relied heavily on fossil and DNA data, combining them with geographical information to produce the exhaustive family tree, which includes 9,993 species known to be alive now.
"The current zeitgeist in biodiversity science is that the world can fill up quickly," says biologist and co-author Arne Mooers of Simon Fraser University in Canada. "A new distinctive group, like bumblebees or tunafish, first evolves, and, if conditions are right, it quickly radiates to produce a large number of species. These species fill up all the available niches, and then there is nowhere to go. Extinction catches up, and things begin to slow down or stall. For birds the pattern is the opposite: Speciation is actually speeding up, not slowing down."
The researchers attribute the growing rate of avian diversity to an abundance of group-specific adaptations. They hypothesize that the evolution of physical or behavioral innovations in certain groups, combined with the opening of new habitats, has enabled repeated bursts of diversification. Another likely factor has been birds' exceptional mobility, researchers said, which time and again has allowed them to colonize new regions and exploit novel ecological opportunities.
In their analysis, the researchers also expose significant geographic differences in diversification rates. They are higher in the Western Hemisphere than in the Eastern, and higher on islands than on mainlands. But surprisingly, they said, there is little difference in rates between the tropics and high latitudes. Regions of especially intense recent diversification include northern North America and Eurasia and southern South America.
"This was one of the big surprises," Jetz said. "For a long time biologists have thought that the vast diversity of tropical species must at least partly be due to greater rates of net species production there. For birds we find no support for this, and groups with fast and slow diversification appear to occur there as much as in the high latitudes. Instead, the answer may lie in the tropics' older age, leading to a greater accumulation of species over time. Global phylogenies like ours will allow further tests of this and other basic hypotheses about life on Earth."
Read more at Science Daily
Help Discover Dark Matter in the Universe, Win Money
Why not spend Halloween looking for something truly ghostly and mysterious? Astronomers have created a new competition that asks the general public to generate algorithms that can help spot dark matter in Hubble images.
The contest, called Observing Dark Worlds, aims to capitalize on the ever-growing field of citizen science, where non-experts are asked to sift through data to help make discoveries. While most citizen science projects rely on people’s free time, this contest is looking to give out cash prizes, to the tune of $20,000.
Dark matter is thought to be a strange form of matter that doesn’t interact with electromagnetism and light. It is all-pervasive in the universe, accounting for roughly 85 percent of the matter in the cosmos. While astronomers can’t directly see dark matter, its effects can still be observed.
When a large amount of dark matter gathers in one place, such as the halo around a galaxy, it will exert a massive gravitational force. This field is so strong it can bend the path of light beams passing near it. Stars behind this dark-matter clump will appear distorted, their light stretched and warped like in a funhouse mirror reflection.
Astronomers have been able to infer the presence of many dark-matter agglomerations by looking for this distorted light, but they haven’t been able to find a technique that can unfailingly uncover dark matter in telescope images. This contest is looking to throw more brain power at this problem in order to solve it once and for all.
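As a flavor of what a baseline entry might do (everything below, from field size to galaxy counts, is made up for illustration): a massive halo shears background galaxies tangentially about its center, so one can score each candidate position by the mean tangential ellipticity of the galaxies around it and report the highest-scoring point.

```python
# Naive halo-finder sketch: grid-search for the point about which the
# galaxies' tangential ellipticity is largest. All data are fabricated.
import numpy as np

rng = np.random.default_rng(2)
gal_xy = rng.uniform(0, 4200, size=(300, 2))   # hypothetical galaxy positions
e1, e2 = rng.normal(0, 0.2, (2, 300))          # hypothetical ellipticity components

def tangential_score(cx, cy):
    dx, dy = gal_xy[:, 0] - cx, gal_xy[:, 1] - cy
    phi = np.arctan2(dy, dx)
    # Tangential component of ellipticity about the candidate center (cx, cy).
    e_tan = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))
    return e_tan.mean()

grid = np.linspace(0, 4200, 50)
scores = [(tangential_score(x, y), x, y) for x in grid for y in grid]
best_score, bx, by = max(scores)
print(f"best halo-center guess: ({bx:.0f}, {by:.0f}), mean e_tan = {best_score:.3f}")
```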
“We challenge YOU to detect the most elusive, mysterious and yet most abundant matter in all existence,” write the contest’s organizers on their website.
The organizers are looking for people with backgrounds in science, data engineering, and statistics, but anyone is free to try coming up with a good dark-matter search technique. People or groups whose code best predicts the center of 120 test dark-matter halos will receive the $20,000 in prize money, split three ways: First place will get $12,000, second will get $5,000 and third place will receive $3,000.
Read more at Wired Science
Monsters Are People Too
Humans are hard-wired to process monsters, humanoids and other such fictional creatures much as they process fellow humans, a new study has found.
The research demonstrates that we look for social, behaviorally relevant information in the eyes of others, even if those individuals are not like any actual species.
The paper was co-authored by 14-year-old Julian Levy, son of co-author Alan Kingstone. Levy was 12 when he first came up with the idea.
"This is a project that we would never have done in my lab if he hadn't suggested it to me," Kingstone told Discovery News.
Kingstone, a professor in the Department of Psychology at the University of British Columbia, explained that Julian "suggested it over dinner one day while I was commenting that people believed that it might be impossible to discriminate whether people look at the eyes or just the center of the face."
The answer is important for understanding what brain regions handle the processing, he continued, with implications for training kids with social deficits, as well as theoretical and computational implications, since many scientific models assume that the head, rather than the eyes, is being targeted.
Levy, Kingstone and co-author Tom Foulsham presented observers with images of people and characters from the popular fantasy game "Dungeons and Dragons." The latter consisted of "humanoids" (non-human creatures with eyes in the middle of their faces) and "monsters" (bizarre-looking fabrications with eyes positioned elsewhere).
There was a tremendous bias toward looking early and often at the eyes of humans, humanoids and even the monsters. People, in other words, immediately pay attention to the eyes of others, not just the middle of their heads.
"The eyes for the humans and humanoids were in the face of the images, and so people would target the middle of the image and then quickly make an eye movement left, right, diagonally, up or down, or wherever the eye or eyes happened to be," Kingstone explained. "So spatially the looks for eyes for humans and humanoids versus monsters were different, but what they looked at, eyes, was the same."
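The scoring behind a result like this is simple to picture. Here is a minimal sketch in Python -- the trial format and regions of interest are invented for illustration, not taken from the paper's actual pipeline. It asks what fraction of first free fixations land inside the eye region versus a box around the image's center:

    def inside(box, point):
        # box = (left, top, right, bottom) in screen pixels.
        left, top, right, bottom = box
        x, y = point
        return left <= x <= right and top <= y <= bottom

    def first_fixation_rates(trials):
        # Each trial: ordered fixations plus eye and center regions.
        eye_hits = center_hits = 0
        for t in trials:
            # Index 1: the first free fixation after the forced start
            # at the middle of the screen.
            target = t["fixations"][1]
            eye_hits += inside(t["eye_roi"], target)
            center_hits += inside(t["center_roi"], target)
        n = len(trials)
        return eye_hits / n, center_hits / n

    # Toy trial: a monster whose eyes sit on its shoulder, far from
    # the center of its body.
    trials = [{"fixations": [(400, 300), (525, 190)],
               "eye_roi": (500, 160, 560, 220),
               "center_roi": (370, 270, 430, 330)}]
    print(first_fixation_rates(trials))  # -> (1.0, 0.0)

If observers targeted heads rather than eyes, the second rate would dominate; the study found the opposite.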
Other primates are believed to do this too, so the hard wiring could predate even our earliest primate ancestors.
Outside of primates, other animals have also been found to follow an individual's gaze. These include birds, dogs, seals, goats and dolphins.
"Eyes seem to capture one's attention, and also be able to act as a cue to direct another's attention. Understanding the neurological underpinnings of gaze behavior is important, but to date this has been a very difficult question to address," explained Scott Sinnett of the University of Hawaii.
The new paper, he continued, "provides a simple and clever approach to disentangling this by using 'monsters' that have eyes in different areas of the body."
He was also impressed that Levy was so young when he came up with the concept.
Read more at Discovery News
Oct 30, 2012
The Monsters Among Us
With Halloween approaching, people turn their attention to the spooky and the scary, reveling in stories and images of ghosts, ghouls and witches for the holiday. But while some monstrous characters only come out to play in October, others enjoy attention year round.
For example, in recent years, vampire media has gained popularity, from Stephenie Meyer's "Twilight" series of books and films to HBO's "True Blood," which finished its fifth season this summer. Zombies have recently seen a resurgence in popularity as well, evidenced by new takes on the genre, such as Zack Snyder's 2004 remake of "Dawn of the Dead," Danny Boyle's "28 Days Later" and Edgar Wright's "Shaun of the Dead." Zombies have even shambled onto the television screen with AMC's "The Walking Dead."
Hollywood is quick to cash in on what's popular, but why do themes gain popularity in the first place? Does the prevalence of a certain monster reflect what's going on in our society today?
"I would argue that monsters in literature, in general, are almost always indicative of things we fear in a sort of collective sense," says Cajsa Baldini, a senior lecturer in the English Department of the College of Liberal Arts and Sciences.
Baldini is well-versed in classic monsters and their cultural significance. She teaches a course on 19th century fiction, which covers monstrous tales such as Mary Shelley's "Frankenstein" and "The Island of Doctor Moreau" by H.G. Wells. Both novels are steeped in themes of technology out of control and the ethical implications of science.
"Jurassic Park" is a great example of the "technology out of control" trope. It's a modern-day Frankenstein story, says principal lecturer Paul Cook, who teaches and writes science fiction in the English Department.
In the original "Frankenstein," after Victor Frankenstein creates his monster, he abandons it to be persecuted and ostracized. Once the monster understands what his creator did to him, he seeks out the doctor.
"I think that's what it's about -- to be confronted with our creations," says Baldini of the novel. "What responsibilities do we have to what we create? It essentially posits the question, do scientists have ethical responsibilities, or is the only responsibility towards further discovery? And I think that's the reason we read that novel today."
Baldini points to Ridley Scott's "Blade Runner," in which one man hunts down rogue human-looking androids, as a more modern interpretation of these ideas.
"The android turns around and says: 'Hey, I know you built in a flaw in me, I'm going to die, I need to know when' -- a question most of us ask, as does Victor Frankenstein in Shelley's novel," says Baldini.
Just as 19th century fiction reflected common fears and anxieties, science fiction in the 1950s served the same purpose. Films such as "The Day the Earth Stood Still" or "Invasion of the Body Snatchers" reflected Americans' fear of communism.
"Science fiction of the 50s not only reflected the culture, but criticized it as well," says Cook.
Cook believes that some monsters in fiction are simply manifestations of the worst parts of us, or a trait that is out of control.
"When ideas get out of control, you get monsters," says Cook. "Monsters, as an archetype, are simply a reflection of some aspect of our human nature greatly magnified to the level of destruction. That is where you get the werewolf, Dr. Jekyll and Mr. Hyde, or the Hulk -- something that's inside of us that comes out."
Baldini thinks that the theme of the embattled force within us points to humanity's desire to rise above the forces of nature.
"I think the werewolf is more of a psychological monster," she says. "Like any monster, it has to be reflective of us to be interesting. I think it's about the animal within, the aspects of us we think we've grown away from or that we don't want to acknowledge we ever had. We're not in control of nature, even if we like to think we are. Just look at an Ebola outbreak, or tsunamis. We think we can control nature, but we don't. We're subject to it like any other species on Earth."
While some monsters are reflections of humanity's struggle with internal, natural forces, others, such as the vampire, express a fear of external influences. Baldini's course explores the first appearance of the modern vampire, in the 1819 novella "The Vampyre" by John Polidori.
"Today, it's almost ridiculous because it's so stereotypical -- it's about a vampire that's aristocratic and evil, but he's also strangely mesmerizing and attractive to people. But of course, everyone who associates with him ends badly," says Baldini of Polidori's story.
Even though not all modern interpretations of vampires pose them as aristocrats, Baldini sees these creatures as always being the elite.
"If you look at Polidori and Stoker's vampires, they are aristocratic and evil," says Baldini. "They are themselves special and set apart -- not everyone can be them. And also whoever they seek out as their victim, even though it's violent and it's deadly, there's a sense of being the elect -- vampires don't just go for anyone. I think this is part of the attraction, the erotic appeal of the vampire."
Baldini cites that attraction to the elite nature of the vampire as part of their popularity in the 1980s, when Anne Rice's novels and films like "The Lost Boys" portrayed vampires as evil but also glamorous and cool.
"That was the time period of glam and the early yuppies and Gordon Gekko saying 'greed is good.' It was okay to be selfish, to prioritize number one, to strive toward an elite status," says Baldini.
Popular vampires today still have that elitism and admiration, but they are also tragic figures.
"It's okay to want to be elite to the point were we start valorizing such characters, such as Edward Cullen," says Baldini. "It's actually a good thing to want to be like them and to be elected by them, and now there's a humility trope in there too."
While vampires represent the upper crust, a monster that is anything but has recently become incredibly popular: the zombie.
"The zombie is the underdog of the monsters, sort of the underachiever of monsters as well," Baldini says of the stumbling, rotting creatures. "You don't have to do much to become a zombie. You're bitten by one and you become one. There's minimal grooming involved. It's the blue-collar monster."
And being a zombie is cool today. Hundreds, sometimes thousands of people turn out for zombie walks, or zombie pub crawls. Hordes of people dress up as the living dead and shuffle through cities across the world, sometimes to promote a cause, give to charity or just for fun. But what does the popularity of zombies say about society today?
"We're looking at a monster that's a collective body that consumes everything," says Baldini. "That's western culture, that's what we are. We have over-consumed throughout the 1990s. We over-borrowed on credit, we took all the equity out of our homes and then some, we consumed indiscriminately, we didn't think, because like zombies, we don't think. We just followed the herd in consumption. I don't think people sit around and think about this, but I think on some level, the zombie is relatable in this particular time in history."
As Baldini points out, the cultural significance of monsters probably isn't something most people consider on a conscious level. But that doesn't make the themes embedded in monster stories any less important.
Read more at Science Daily
How Silver Turns People Blue
Ingesting silver -- in antimicrobial health tonics or through extensive medical treatments involving silver -- can cause argyria, a condition in which the skin turns grayish-blue. Brown researchers have discovered how that happens. The process is similar to developing black-and-white photographs, and it's not the silver particles you swallow that ultimately do it.
Researchers from Brown University have shown for the first time how ingesting too much silver can cause argyria, a rare condition in which patients' skin turns a striking shade of grayish blue.
"It's the first conceptual model giving the whole picture of how one develops this condition," said Robert Hurt, professor of engineering at Brown and part of the research team. "What's interesting here is that the particles someone ingests aren't the particles that ultimately cause the disorder."
Scientists have known for years that argyria had something to do with silver. The condition has been documented in people who (ill-advisedly) drink antimicrobial health tonics containing silver nanoparticles and in people who have had extensive medical treatments involving silver. Tissue samples from patients showed silver particles actually lodged deep in the skin, but it wasn't clear how they got there.
As it turns out, argyria is caused by a complex series of chemical reactions, Hurt said. His paper on the subject, authored with Brown colleagues Jingyu Liu, Zhongying Wang, Frances Liu, and Agnes Kane, is published in the journal ACS Nano.
"The particles someone ingests aren't the particles that ultimately cause the disorder."Hurt and his team show that nanosilver is broken down in the stomach, absorbed into the bloodstream as a salt and finally deposited in the skin, where exposure to light turns the salt back into elemental silver and creates the telltale bluish hue. That final stage, oddly, involves the same photochemical reaction used to develop black-and-white photographs.
From silver to salt and back again
Hurt and his team have been studying the environmental impact of silver, specifically silver nanoparticles, for years. They've found that nanosilver tends to corrode in acidic environments, giving off charged ions -- silver salts -- that can be toxic in large amounts. Hurt's graduate student, Jingyu Liu (now a postdoctoral fellow at the National Institute of Standards and Technology), thought those same toxic ions might also be produced when silver enters the body, and could play a role in argyria.
To find out, the researchers mixed a series of chemical treatments that could simulate what might happen to silver inside the body. One treatment simulated the acidic environment in the gastrointestinal tract; one mimicked the protein content of the bloodstream; and a collagen gel replicated the base membranes of the skin.
They found that nanosilver corrodes in stomach acid in much the same way it does in other acidic environments. Corrosion strips silver atoms of electrons, forming positively charged silver salt ions. Those ions can easily be taken into the bloodstream through channels that absorb other types of salt. That's a crucial step, Hurt said. Silver metal particles themselves aren't terribly likely to make it from the GI tract to the blood, but when they're transformed into a salt, they're ushered right through.
From there, Hurt and his team showed that silver ions bind easily with sulfur present in blood proteins, which would give them a free ride through the bloodstream. Some of those ions would eventually end up in the skin, where they'd be exposed to light.
To re-create this end stage, the researchers shined ultraviolet light on collagen gel containing silver ions. The light caused electrons from the surrounding materials to jump onto the unstable ions, returning them to their original state -- elemental silver. This final reaction is ultimately what turns patients' skin blue. The photoreaction is similar to the way silver is used in black and white photography. When exposed to light, silver salts on a photographic film reduce to elemental silver and darken, creating an image.
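The full chain can be written as three schematic reactions. The LaTeX snippet below is a plausible reading of the description above, not equations reproduced from the ACS Nano paper; R-SH stands in for a generic sulfur-bearing blood protein.

    \documentclass{article}
    \usepackage[version=4]{mhchem}
    \begin{document}
    % 1. Oxidative corrosion of nanosilver in acidic stomach fluid:
    \[ \ce{4Ag(s) + O2 + 4H+ -> 4Ag+ + 2H2O} \]
    % 2. Silver ions hitch a ride on sulfur in blood proteins:
    \[ \ce{Ag+ + R-SH -> R-S-Ag + H+} \]
    % 3. Photoreduction in light-exposed skin, the same reaction that
    %    darkens silver salts on black-and-white photographic film:
    \[ \ce{Ag+ + e- ->[$h\nu$] Ag(s)} \]
    \end{document}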
Implications for nanosilver
Despite its potential toxicity, silver has been valued for centuries for its ability to kill germs, which is why silver nanoparticles are used today in everything from food packaging to bandages. There are concerns, however, that this nanoparticle form of silver might pose a unique health threat all its own.
This research, however, "would be one piece of evidence that you could treat nanoparticles in the same way as other forms of silver," Hurt says.
That's because the bioavailable form of silver -- the form that is absorbed into the bloodstream -- is the silver salt that's made in the stomach. Any elemental silver that's ingested is just the raw material to make that bioavailable salt. So ingesting silver in any form, be it nano or not, would have basically the same effect, Hurt said.
Read more at Science Daily
First Direct Detection Sheds Light On Dark Galaxies
Most people think of galaxies as huge islands of stars, gas and dust that populate the universe in visual splendor. Theory, however, has predicted there are other types of galaxies that are devoid of stars and made predominantly of dense gas. These "dark" galaxies would be unseen against the black backdrop of the universe.
Now, an international team of astronomers has detected several dark galaxies by observing the fluorescent glow of their hydrogen gas, illuminated by the ultraviolet light of a nearby quasar. But what exactly are dark galaxies, and what role do they play in the evolution of our universe?
"Dark galaxies are composed of dark matter and gas, but for some reason they have not been able to form stars," said Martin Haehnelt, Kavli Institute for Cosmology at the University of Cambridge. "Some theoretical models have predicted that dark galaxies were common in the early universe when galaxies had more difficulty forming stars -- partly because their density of gas was not sufficient to form stars -- and only later did galaxies begin to ignite stars, becoming like the galaxies we see today."
Haehnelt is a member of the scientific team that detected these galaxies. According to Haehnelt, one can begin to understand the importance of dark galaxies by looking at our own Milky Way. "We expect the precursor to the Milky Way was a smaller bright galaxy that merged with dark galaxies nearby. They all came together to form our Milky Way that we see today."
Another member of the team, Sebastiano Cantalupo of the University of California, Santa Cruz, agreed that dark galaxies are the building blocks of modern galaxies. "In our current theory of galaxy formation, we believe that big galaxies form from the merger of smaller galaxies. Dark galaxies bring to big galaxies a lot of gas, which then accelerates star formation in the bigger galaxies."
The techniques used for detecting dark galaxies also may provide a new way to learn about other phenomena in the universe, including what some call the "cosmic web" -- unseen filaments of gas and dark matter believed to permeate the universe, feeding and building galaxies and galaxy clusters where the filaments intersect.
Read more at Science Daily
How Animals Deal with Downpours
With the rainy season now in full force across much of the United States, animals both in the wild and at zoos are coming up with some unique ways to cope.
How animals react can depend on three basic things: the species, the individual's personality and the animal's access to human-made shelter and goods.
Outside of humans, orangutans have come up with some of the primate world's best ways of dealing with rain. Orangutans live in the rainforests of Borneo and Sumatra, where dampness is part of the landscape.
In the forest during storms, orangutans will make protective canopies and "hats" out of leaves, but in zoos, they have found a more attractive material.
"Here in Seattle, rain is virtually an everyday occurrence," Gigi Allianic, spokesperson for Seattle's Woodland Park Zoo, told Discovery News. "Our orangutans wrap burlap bags around themselves and often just sit out storms. Like many of our other animals, they also have the option to retreat to an enclosure."
Most terrestrial animals do seek shelter. In nature, that can happen in tree or log holes, under rocks or leaves, or underground. Smaller animals like squirrels and mice will huddle together in such shelters, attempting to stay warm.
Rain seems to annoy most species, however, even aquatic animals. During torrential downpours, animals such as frogs, turtles and fish may retreat to lower levels of lakes and ponds, with some seeking added shelter under things like fallen rocks or driftwood.
Numerous animals, however, often remain out in the open and try to tolerate the wet.
"Grizzly bears at our zoo will often do that, even though they can go into an enclosure," Allianic said, adding that bears are very good swimmers.
Reptiles possess scaly skin composed of a protein called keratin. This allows reptile skin to have waterproof qualities while still remaining properly hydrated.
"Crocodiles are pretty good at dealing with inclement weather," Nick Hanna, assistant curator at Audubon Nature Institute in New Orleans, told Discovery News. "They remain calm and cool and never freak out, even during hard rains."
Some animals do lose their cool, but not necessarily because of rain.
"We've learned that, for certain animals, it's better and safer to leave them out instead of in," Hanna said. "When confined, ostriches tend to run into walls. African antelopes sometimes get so spooked that they will also run into walls."
These species, though, still have access to off-exhibit areas and canopies to shield them from sun and rain.
In certain cases, how an animal reacts has more to do with its individual personality. Hanna and Allianic indicated that some primates and elephants fare better than others during rainstorms, particularly when thunder and lightning are involved.
Read more at Discovery News
Oct 29, 2012
Huge Deposit of Jurassic Turtle Remains Found in China
“Bones upon bones, we couldn’t believe our eyes,” says Oliver Wings, paleontologist and guest researcher at the Museum für Naturkunde in Berlin. He was describing the spectacular find of some 1,800 fossilized turtles from the Jurassic era in China’s northwest province of Xinjiang. Wings and the University of Tübingen’s fossil turtle specialist, Dr. Walter Joyce, were working with Chinese paleontologists there in 2008.
The results of their further work in 2009 and 2011 have just been published in the German journal Naturwissenschaften.
“This site has probably more than doubled the known number of individual turtles from the Jurassic,” says Walter Joyce. “Some of the shells were stacked up on top of one another in the rock.” It is what paleontologists call a “bone bed” – in this case consisting only of turtle remains.
Wings, Joyce and their team have made several expeditions to the arid region since 2007, finding fossil sharks, crocodiles, mammals and several dinosaur skeletons. Today one of the world’s driest regions, 160 million years ago Xinjiang was a green place of lakes and rivers, bursting with life. Yet the scientists have shown that even then, conditions were not always ideal, with climate change leading to seasonal drought – and this remarkable fossil find.
The turtles had gathered in one of the remaining waterholes during a very dry period, awaiting rain. Today’s turtles in Australia for instance do the same thing. But for the Xinjiang turtles, the rain came too late. Many of the turtles were already dead and their bodies rotting. When the water arrived, it came with a vengeance: a river of mud, washing the turtles and sediments along with it and dumping them in one place, as the paleontologists read the site and its layers of stone.
Read more at Science Daily
Mass Extinction Study Provides Lessons for Modern World
The Cretaceous Period of Earth history ended with a mass extinction that wiped out numerous species, most famously the dinosaurs. A new study now finds that the structure of North American ecosystems made the extinction worse than it might have been. Researchers at the University of Chicago, the California Academy of Sciences and the Field Museum of Natural History will publish their findings Oct. 29 online in the Proceedings of the National Academy of Sciences.
The mountain-sized asteroid that left the now-buried Chicxulub impact crater on the coast of Mexico's Yucatan Peninsula is almost certainly the ultimate cause of the end-Cretaceous mass extinction, which occurred 65 million years ago. Nevertheless, "Our study suggests that the severity of the mass extinction in North America was greater because of the ecological structure of communities at the time," noted lead author Jonathan Mitchell, a Ph.D. student of UChicago's Committee on Evolutionary Biology.
Mitchell and his co-authors, Peter Roopnarine of the California Academy of Sciences and Kenneth Angielczyk of the Field Museum, reconstructed terrestrial food webs for 17 Cretaceous ecological communities. Seven of these food webs existed within two million years of the Chicxulub impact and 10 came from the preceding 13 million years.
The findings are based on a computer model showing how disturbances spread through the food web. Roopnarine developed the simulation to predict how many animal species would become extinct from a plant die-off, a likely consequence of the impact.
"Our analyses show that more species became extinct for a given plant die-off in the youngest communities," Mitchell said. "We can trace this difference in response to changes in a number of key ecological groups such as plant-eating dinosaurs like Triceratops and small mammals."
The results of Mitchell and his colleagues paint a picture of late Cretaceous North America in which pre-extinction changes to food webs -- likely driven by a combination of environmental and biological factors -- resulted in communities that were more fragile when faced with large disturbances.
"Besides shedding light on this ancient extinction, our findings imply that seemingly innocuous changes to ecosystems caused by humans might reduce the ecosystems' abilities to withstand unexpected disturbances," Roopnarine said.
The team's computer model describes all plausible diets for the animals under study. In one run, Tyrannosaurus might eat only Triceratops, while in another it eats only duck-billed dinosaurs, and in a third it might eat a more varied diet. This stems from the uncertainty regarding exactly what Cretaceous animals ate, but this uncertainty actually worked to the study's benefit.
"Using modern food webs as guides, what we have discovered is that this uncertainty is far less important to understanding ecosystem functioning than is our general knowledge of the diets and the number of different species that would have had a particular diet," Angielczyk said.
Data derived from modern food webs helped the simulations account for such phenomena as how specialized animals tend to be, or how body size relates to population size and thus their probability of extinction.
The researchers also sampled a large number of specific food webs from all those possible within their general framework and evaluated how that sample responds to a perturbation, such as the death of plants. They used the same relationships and assumptions to create food webs across all of the different sites, which means the differences between sites stem from differences in the data rather than from the simulation itself. This makes the simulation a fundamentally comparative method, Roopnarine noted.
"We aren't trying to say that a given ecosystem was fragile, but instead that a given ecosystem was more or less fragile than another," he said.
The computer models showed that if the asteroid had hit during the 13 million years preceding the latest Cretaceous communities, there almost certainly would still have been a mass extinction, but one that likely would have been less severe in North America.
Most likely a combination of changing climate and other environmental factors caused some types of animals to become more or less diverse in the Cretaceous, the researchers concluded. In their paper they suggest that the drying up of a shallow sea that covered part of North America may have been one of the main factors leading to the observed changes in diversity.
The study provides no evidence that the latest Cretaceous communities were on the verge of collapse before the asteroid hit. "The ecosystems collapsed because of the asteroid impact, and nothing in our study suggests that they would not have otherwise continued on successfully," Mitchell said. "Unusual circumstances, such as the after-effects of the asteroid impact, were needed for the vulnerability of the communities to become important."
The study has implications for modern conservation efforts, Angielczyk observed.
"Our study shows that the robustness or fragility of an ecosystem under duress depends very much on both the number of species present, as well as the types of species," he said, referring to their ecological function. The study also shows that more is not necessarily better, because simply having many species does not insure against ecosystem collapse.
Read more at Science Daily
Mummy Unwrapping Brought Egyptology to the Public
Mummies have been objects of horror in popular culture since the early 1800s -- more than a century before Boris Karloff portrayed an ancient Egyptian searching for his lost love in the 1932 film "The Mummy." Public "unwrappings" of real mummified human remains performed by both showmen and scientists heightened the fascination, but also helped develop the growing science of Egyptology, says a Missouri University of Science and Technology historian.
Dr. Kathleen Sheppard, an expert in the history of science, particularly archaeology and Egyptology, and an assistant professor of history and political science at Missouri S&T, says that while mummy unwrappings served as public spectacles that objectified exotic artifacts, they were also scientific investigations that sought to reveal medical and historical information about ancient life.
Sheppard wrote about this intersection between science and showmanship in an article titled "Between Spectacle and Science: Margaret Murray and the Tomb of the Two Brothers." It will be published in the December issue of the journal Science in Context.
Sheppard says 20th century Egyptologist Margaret Murray, the first woman to publicly unwrap a mummy, sought to unravel the mysteries of ancient Egypt by exposing mummified human remains. She says Murray's work is culturally significant because it is "poised between spectacle and science, drawing morbid public interest while also producing ground-breaking scientific work that continues to this day."
Public spectacles that displayed mummified remains as objects of curiosity date back to the 16th century, Sheppard says. "These types of spectacles were highly engaging shows in which people were, to a certain degree, educated about different aspects of science both by showmen and scientists."
Many Egyptologists drew a distinction between "Egyptomania," the fascination with all things Egypt, and "Egyptology," the scientific study of Egyptian life, Sheppard says, but Murray had a different goal -- involving the public in scientific inquiry with a goal of correcting popular misconceptions.
Read more at Science Daily
New Study Sheds Light On How and When Vision Evolved
Opsins, the light-sensitive proteins key to vision, may have evolved earlier and undergone fewer genetic changes than previously believed, according to a new study from the National University of Ireland Maynooth and the University of Bristol published October 29 in Proceedings of the National Academy of Sciences (PNAS).
The study, which used computer modelling to provide a detailed picture of how and when opsins evolved, sheds light on the origin of sight in animals, including humans. The evolutionary origins of vision remain hotly debated, partly due to inconsistent reports of phylogenetic relationships among the earliest opsin-possessing animals.
Dr Davide Pisani of Bristol's School of Earth Sciences and colleagues at NUI Maynooth performed a computational analysis to test every hypothesis of opsin evolution proposed to date. The analysis incorporated all available genomic information from all relevant animal lineages, including the newly sequenced sponge Oscarella carmela and the cnidarians, a group of animals thought to have possessed the world's earliest eyes.
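The study's Bayesian machinery runs on genome-scale data, but the basic idea -- score competing evolutionary hypotheses against the same sequence data and see which explains it more economically -- can be sketched in a few lines. Below is a toy Python illustration using Fitch parsimony; the four taxa, the three-site "alignment" and the two candidate trees are all hypothetical stand-ins, not the paper's data or method.

# Toy stand-in for phylogenetic hypothesis testing: count the minimum
# number of character changes each candidate tree requires (Fitch
# parsimony). All data below are hypothetical illustration values.

def fitch(tree, column):
    """Return (possible ancestral states, minimum changes) for one site."""
    if isinstance(tree, str):                       # a leaf: state is known
        return {column[tree]}, 0
    (ls, lc), (rs, rc) = (fitch(child, column) for child in tree)
    shared = ls & rs
    if shared:                                      # children can agree: no change needed
        return shared, lc + rc
    return ls | rs, lc + rc + 1                     # disagreement costs one change

def parsimony_score(tree, alignment):
    return sum(fitch(tree, column)[1] for column in alignment)

# Three made-up sites for four lineages.
alignment = [
    {"sponge": "A", "cnidarian": "A", "fly": "G", "human": "G"},
    {"sponge": "C", "cnidarian": "C", "fly": "T", "human": "T"},
    {"sponge": "G", "cnidarian": "A", "fly": "A", "human": "A"},
]

# Two competing (hypothetical) branching hypotheses.
tree1 = (("sponge", "cnidarian"), ("fly", "human"))
tree2 = (("sponge", "fly"), ("cnidarian", "human"))

for name, tree in [("tree1", tree1), ("tree2", tree2)]:
    print(name, "needs", parsimony_score(tree, alignment), "changes")
# -> tree1 needs 3 changes, tree2 needs 5: tree1 fits this toy data better.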
Using this information, the researchers developed a timeline in which an opsin ancestor common to all groups appeared some 700 million years ago. This ancestral opsin was "blind," yet over the span of 11 million years it underwent key genetic changes that conferred the ability to detect light.
Read more at Science Daily
Oct 28, 2012
Primates' Brains Make Visual Maps Using Triangular Grids
Primates' brains see the world through triangular grids, according to a new study published online October 28 in the journal Nature.
Scientists at Yerkes National Primate Research Center, Emory University, have identified grid cells, neurons that fire in repeating triangular patterns as the eyes explore visual scenes, in the brains of rhesus monkeys.
The finding has implications for understanding how humans form and remember mental maps of the world, as well as how neurodegenerative diseases such as Alzheimer's erode those abilities. This is the first time grid cells have been detected directly in primates. Grid cells were identified in rats in 2005, and their existence in humans has been indirectly inferred through magnetic resonance imaging.
Grid cells' electrical activities were recorded by introducing electrodes into monkeys' entorhinal cortex, a region of the brain in the medial temporal lobe. At the same time, the monkeys viewed a variety of images on a computer screen and explored those images with their eyes. Infrared eye-tracking allowed the scientists to follow which part of the image the monkey's eyes were focusing on. A single grid cell fires when the eyes focus on multiple discrete locations forming a grid pattern.
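To get a feel for what such a firing map looks like, here is a minimal Python sketch of the textbook idealization of a grid cell: a sum of three cosine gratings whose wave vectors sit 60 degrees apart, which peaks on a triangular lattice. The spacing, orientation and phase below are arbitrary illustration values, not parameters measured in this study.

import numpy as np

# Idealized grid-cell firing map: three cosine gratings 60 degrees apart
# sum to a pattern that peaks on a triangular (hexagonal) lattice.
def grid_cell_rate(x, y, spacing=1.0, orientation=0.0, phase=(0.0, 0.0)):
    """Firing rate (arbitrary units) at position (x, y)."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the chosen spacing
    rate = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3  # 0, 60 and 120 degrees
        rate += np.cos(k * ((x - phase[0]) * np.cos(theta) +
                            (y - phase[1]) * np.sin(theta)))
    return max(rate, 0.0)                    # rectify: firing rates are non-negative

# Sample a small patch of the map: the peaks trace out a triangular grid.
coords = np.linspace(0.0, 3.0, 7)
for y in coords:
    print(" ".join(f"{grid_cell_rate(x, y):4.1f}" for x in coords))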
"The entorhinal cortex is one of the first brain regions to degenerate in Alzheimer's disease, so our results may help to explain why disorientation is one of the first behavioral signs of Alzheimer's," says senior author Elizabeth Buffalo, PhD, associate professor of neurology at Emory University School of Medicine and Yerkes National Primate Research Center. "We think these neurons help provide a context or structure for visual experiences to be stored in memory."
"Our discovery of grid cells in primates is a big step toward understanding how our brains form memories of visual information," says first author Nathan Killian, a graduate student in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. "This is an exciting way of thinking about memory that may lead to novel treatments for neurodegenerative diseases."
In the experiments in which rats' grid cells were identified, the cells fired whenever the rats crossed lines on an invisible triangular grid.
"The surprising thing was that we could identify cells that behaved in the same way when the monkeys were simply moving their eyes," Buffalo says. "It suggests that primates don't have to actually visit a place to construct the same kind of mental map."
Another aspect of grid cells not previously seen in rodents is that the cells' responses change when monkeys see an image for the second time. Specifically, the grid cells reduce their firing rate when a repeat image is seen. Moving from the posterior (rear) toward the anterior (front) of the entorhinal cortex, more neurons show memory responses.
"These results demonstrate that grid cells are involved in memory, not just mapping the visual field," Killian says.
Consistent with previous reports on grid cells in rats, Killian and Buffalo observed "theta-band" oscillations, where grid cells fire in a rhythmic way, from 3 to 12 times per second. Some scientists have proposed that theta oscillations are important for grid cell networks to be generated in development, and also for the brain to put together information from the grid cells. In the monkeys, populations of neurons exhibited theta oscillations that occurred in intermittent bouts, but these bouts did not appear to be critical for formation of the spatial representation.
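As a rough illustration of what a 3-to-12-times-per-second rhythm means, the hypothetical Python sketch below simulates a firing rate modulated at 8 Hz and recovers that frequency as the peak of the power spectrum inside the theta band; none of these numbers come from the study.

import numpy as np

# Simulate 10 seconds of a firing rate modulated at 8 Hz (inside the
# 3-12 Hz theta band) plus noise, then locate the spectral peak.
fs = 1000                                    # samples per second
t = np.arange(0, 10, 1 / fs)
rate = 5 + 4 * np.sin(2 * np.pi * 8 * t)     # 8 Hz rhythmic modulation
rate += np.random.default_rng(0).normal(0, 1, t.size)

power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
theta = (freqs >= 3) & (freqs <= 12)         # restrict to the theta band
print("peak in theta band: %.1f Hz" % freqs[theta][np.argmax(power[theta])])
# -> peak in theta band: 8.0 Hz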
Vision is thought to be a more prominent sense for primates (monkeys and humans) compared with rodents, for whom touch and smell are more important. While grid cells in rodents and primates were detected in different types of experiments, Buffalo says that it doesn't mean grid cells necessarily have a different nature in primates.
"We are now training a monkey to move through a virtual 3-D space. My guess is that we'll find grid cells that fire in similar patterns as the monkey navigates through that space," she says.
Read more at Science Daily
Mayans Pissed Off Over Doomsday 'Deceit'
The 'golden age' of the Mayan civilization may have occurred over 1,000 years ago, but more than half the population of the Central American nation of Guatemala are of Mayan descent and many still celebrate ancient customs. So, as we approach Dec. 21, 2012, it's little wonder they're pissed that one of their calendars has been hijacked and misinterpreted as a prophet of doom.
But this time the anger isn't directed at the West's "messianic thinking": Maya leaders have accused the Guatemalan government of perpetuating, for financial gain, the myth that the Mayan Long Count calendar predicts the end of the world.
"We are speaking out against deceit, lies and twisting of the truth, and turning us into folklore-for-profit. They are not telling the truth about time cycles," Felipe Gomez, leader of the Maya alliance Oxlaljuj Ajpop, told the AFP news agency.
The entire "Mayan doomsday" nonsense centers around the mistaken belief that the ancient civilization had some magical ability to foretell the Apocalypse. But they didn't. In actuality, it focuses on a calendar cycle that runs out this year. This calendar, the Long Count, is a wonderfully complex system that spans around 5,200 years and is of huge spiritual significance to the Maya people. The final cycle of the calendar -- the 13th b'aktun -- will complete on Dec. 21.
Obviously this means the world will come to an end... Right?
Wrong.
According to a statement released by Oxlaljuj Ajpop, the end of the cycle simply "means there will be big changes on the personal, family and community level, so that there is harmony and balance between mankind and nature."
To some crazed doomsayers desperately trying to make money from selling their nonsensical books of doom, this "harmony and balance" means the Universe is going to char-broil the planet, killing everyone who isn't "prepared" (i.e., those poor unfortunate souls who ignored their warnings).
The Mayan descendants don't quite see "harmony and balance" in the same way.
Seeing an opportunity to profit from the West's insatiable appetite for second-rate doomsday flicks and an Indiana Jones version of archaeology, the Guatemalan government has embraced the inevitable surge of tourism December will bring. But the way it is being handled has frustrated groups like Oxlaljuj Ajpop.
The Culture Ministry is hosting a massive "doomsday" event in Guatemala City, and many tourism groups have seen the opportunity to create "doomsday tours." Gomez has criticized this "show," pointing out that it is disrespectful to Mayan culture.
Read more at Discovery News