The remains of three colorful dragon heads made of clay have been discovered in a huge palace in Xanadu, a city constructed by the grandsons of Genghis Khan.
The palace sprawls over 9,000 square meters (about 100,000 square feet), nearly twice the floor space of the modern-day White House. Archaeologists have been excavating the palace, learning how it was designed and decorated.
Made of fine red baked clay, the dragon heads would have been attached to the ends of beams and used as decoration. They “are lifelike and dynamic” and “have yellow, blue, white and black coloring” glazed on them, researchers wrote in a report published recently in the journal Chinese Cultural Relics.
The construction of Xanadu, known in China as Shangdu, started in 1256 at a time when the Mongol Empire, led by Möngke Khan (grandson of Genghis Khan), was in the process of taking over China. After Möngke Khan’s death in 1259, his successor, Kublai Khan (also a grandson of Genghis), finished the conquest of China. Kublai had helped design Xanadu, and when he became ruler, he used the city as China’s capital during the summer months.
“The site is composed of a palatial district, an imperial city and an outer city, containing remains of three layers of city walls, and occupies an area of 484,000 square meters [about 120 acres],” the archaeologists wrote in their report.
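Those area figures are easy to sanity-check. Here is a quick sketch in Python (the conversion factors are standard; the areas are the ones quoted above):

```python
# Quick sanity check on the area figures quoted above.
SQFT_PER_SQM = 10.7639   # square feet per square meter
SQM_PER_ACRE = 4046.86   # square meters per acre

palace_sqm = 9_000
site_sqm = 484_000

print(f"{palace_sqm * SQFT_PER_SQM:,.0f} sq ft")  # ~96,900 -> "about 100,000 square feet"
print(f"{site_sqm / SQM_PER_ACRE:.0f} acres")     # ~120 acres, as quoted
```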
While Xanadu was occupied only briefly, being destroyed in 1368, it became a place of legend, its name romanticized in popular culture as a wondrous, exotic place where one of the most powerful rulers in the world held court. The discovery of the dragon heads, and other remains from Xanadu, paints a picture of what the site looked like.
While the dragon heads are some of the most eye-catching finds at the palace, archaeologists also discovered a type of ramp called a “mandao,” meaning “path for the horses” in Chinese, which allowed horses and vehicles access to the palace.
These ramps “would have been strongly connected to the pastoral way of life of the Mongols,” the archaeologists wrote.
The ramps were important because horses and pastoral animals were an essential part of Mongolian life. Recent research suggests that an unusually wet climate in Mongolia helped these animals flourish in Genghis Khan’s time, helping him and his successors conquer a vast amount of territory.
Archaeologists also found artifacts showing more of the rich colors that would have been seen by those who set foot in Xanadu at the time. These artifacts include the remains of a clay fish head whose body “is glazed yellow and green” with “bright and lifelike” scales, the archaeologists wrote.
Read more at Discovery News
May 30, 2015
Cassini to Get Final View of Saturn's Weird 'Spongy' Moon
As NASA’s Cassini spacecraft enters the final phase of its epic mission around Saturn, mission scientists are planning the probe’s final flyby of one of the strangest moons in the solar system.
So far, images of the irregularly shaped, 168-mile-wide (270 kilometers) Hyperion have captured roughly the same side, showing the moon’s signature “spongy” appearance. During the final flyby, expected on Sunday (May 31) at 6:36 a.m. PDT (9:36 a.m. EDT), mission scientists hope that Cassini will have the opportunity to photograph the other side of the moon, giving us a better picture of its composition.
But predicting which side of Hyperion will be facing Cassini at flyby is a nigh-on impossible task; the moon is tumbling chaotically as it orbits Saturn, denying orbital dynamics experts the chance to accurately calculate the object’s spin.
Hyperion has an unusually low density (and therefore weak gravity) for an object of its size, a factor that scientists believe may explain the moon’s porous appearance. Over Hyperion’s history, rocky impactors have compressed the surface rather than excavating it, and the moon’s feeble gravity lets any material blasted off the surface escape to space, leaving the weird pockmarked terrain we see today.
Unfortunately, Sunday’s flyby won’t be Cassini’s closest approach to Hyperion. In 2005, the spacecraft cruised past at a distance of just 314 miles (505 kilometers); this final flyby will pass 21,000 miles (34,000 kilometers) from the moon.
Read more at Discovery News
May 29, 2015
Shark Files: Shark Skeletons Had Bony Start
The discovery of bone in the jaw of an ancient shark indicates modern sharks are far more advanced than previously thought, say researchers.
The finding, reported in the journal PLOS ONE, challenges evolutionary models suggesting animals with skeletons built of cartilage evolved into those with more advanced bony skeletons.
“Fish-like sharks were thought to be primitive because they only had cartilage, and never evolved to develop the bones,” says the study’s lead author, palaeontologist John Long of Flinders University.
“So now we’re turning that idea on its head by saying that early fossil sharks actually had bone in their skeleton and now they’ve lost it.”
The 380-million-year-old fossil was unearthed in the Western Australian Kimberley region’s Gogo formation in 2005 by Long and colleagues.
The fossil, named Gogoselachus lynnbeazleyae, is the first shark to be found in this area.
Its remains include both sides of the lower jaw, parts of the shoulder girdle that supported the pectoral fins, several gill arch bones, about 80 teeth, and several hundred scales.
Evolution of Tissue
While modern sharks have a minuscule amount of bone in the roots of their teeth, they do not have any bone in their main skeleton.
Instead, their skeletons are built of cartilage, a soft, rubbery tissue that is the precursor to bone in other animals, organized into tiny structures called tesserae.
However, when the authors recently examined the microstructure of the cartilage of Gogoselachus using micro CT scanning, they discovered the matrix holding the cartilage together in the jaw contained bone cells.
“Our fossil shark is the first to show a true cellular matrix made of bone binding the tiny cartilage units together,” says Long.
“We’re looking at a shark which has actually evolved from something that previously had a lot more bone in the skeleton, and eventually modern sharks would lose that bone and just become entirely cartilaginous.”
“So our fossil has given us a window into the evolution of tissues, and the reason sharks are so successful today is because they’ve reduced the bone in their skeleton and become more lightweight with an entirely cartilaginous skeleton,” he says.
Unique in the World
Long says the revolutionary new insights into the early evolution of shark cartilage were only possible because of the exceptional degree of preservation in the Gogo formation.
Most fossil fishes from the Devonian period, between 360 and 400 million years ago, are squashed flat in rock.
However, fossils in the Gogo formation occur in limestone nodules that preserve them in three-dimensional form.
“Gogo is an ancient tropical reef, not a coral reef because they’re relatively recent, but a reef made of algae and sponge-like creatures,” says Long.
“It had a great abundance of life, many kinds of fishes, armoured placoderms, which are an extinct group, but also early bony fishes like lungfishes and ray-finned fishes, which are the dominant fishes today.
“We expect there should be more sharks there, except for some reason they’re not common on this particular reef where other kinds of fishes dominate.”
Read more at Discovery News
Colliding Galaxies May Erupt With Mega Jets
Powerful jets of material spewing from the edge of monster black holes may be more likely to arise where two galaxies have merged together, a new study suggests.
Like a cosmic version of Old Faithful (the famous Yellowstone geyser), some black holes at the center of galaxies will spew jets of material into space that stretch for thousands of light-years. You can see an illustration of what these gushing pillars look like in a video of the galaxy crash discovery.
Using data from the Hubble Space Telescope, new research suggests these jets are more likely to be found in galaxies that are the product of galaxy mergers. However, the authors of the research say merging two galaxies isn't always a recipe for creating galactic jets.
In the study, scientists used the Hubble Space Telescope to look for the radio waves emitted by the massive jets that spew particles into space at nearly the speed of light. These jets are thought to be created by activity taking place near the edge of a supermassive black hole (and astronomers think most, if not all, galaxies in the universe have a supermassive black hole at their center).
When a black hole is gobbling up material, the friction and movement of the particles may generate light. If the black hole is particularly gluttonous, surrounded by a buffet of matter, it may create enough light to outshine all the stars in the galaxy. These bright regions around a galactic center are called active galactic nuclei, or AGNs.
The researchers compared AGNs that produce jets and those that don't, as well as non-AGN galaxies with no jets. From there, they looked into the history of these galaxies, looking for evidence that the current galaxy was the product of a merger.
What they found was that more than 90 percent of the surveyed AGNs with jets were also the product of galaxy mergers. But not all galaxy mergers necessarily created jets.
"We found that most merger events in themselves do not actually result in the creation of AGNs with powerful radio emission," said Roberto Gilli, of the Osservatorio Astronomico di Bologna, Italy, and an author on the paper. "About 40 percent of the other galaxies we looked at had also experienced a merger and yet had failed to produce the spectacular radio emissions and jets of their counterparts."
In a statement, the European Space Agency said: "Although it is now clear that a galactic merger is almost certainly necessary for a galaxy to host a supermassive black hole with relativistic jets, the team deduced that there must be additional conditions which need to be met."
It's possible that the merger of two galaxies shovels more gas toward the black hole at the new galaxy's center, which could increase the amount of food for the black hole to feast on, thus increasing the likelihood of jets forming.
Read more at Discovery News
What Is 'Manhattanhenge'?
Today through Saturday marks a biannual solar event called Manhattanhenge, in which the rising or setting sun aligns with the east-west streets of Manhattan's grid.
Astrophysicist Neil deGrasse Tyson coined the term back in 2002, inspired by the famous Stonehenge site in the United Kingdom, where the sun sets in alignment with the stones every summer solstice.
Technically, "Manhattanhenge" occurs around the summer solstice, not on the solstice itself. That's because of the orientation of Manhattan's famous grid pattern — established by the the Commissioners’ Plan of 1811 — is not perfectly aligned with the geographic north-south line; it's rotated 29 degrees east, shifting the dates of alignment.
If that alignment had been perfect, Manhattanhenge would occur on the equinoxes every year: the first days of spring and autumn.
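To make the geometry concrete, here is a rough back-of-envelope sketch (my own approximation, not Tyson's calculation; the latitude and the 29-degree grid tilt are the only inputs) that scans the year for days when the geometric sunset azimuth lines up with the cross streets:

```python
from datetime import date, timedelta
import math

LATITUDE_DEG = 40.78           # approximate latitude of Manhattan
GRID_SUNSET_AZ = 270.0 + 29.0  # streets point ~29 degrees north of due west

def solar_declination_deg(day_of_year):
    """Cooper's approximation for the sun's declination, in degrees."""
    return 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def sunset_azimuth_deg(day_of_year, lat_deg=LATITUDE_DEG):
    """Azimuth of geometric sunset, in degrees clockwise from north."""
    dec = math.radians(solar_declination_deg(day_of_year))
    # At zero altitude, cos(azimuth from north) = sin(declination) / cos(latitude).
    east_of_north = math.degrees(
        math.acos(math.sin(dec) / math.cos(math.radians(lat_deg))))
    return 360.0 - east_of_north  # mirror to the western (sunset) side

for day in range(1, 366):
    if abs(sunset_azimuth_deg(day) - GRID_SUNSET_AZ) < 0.3:
        print(date(2015, 1, 1) + timedelta(days=day - 1))
```

This prints one cluster of dates in late May and another in mid-July, consistent with the dates quoted later in the piece; atmospheric refraction and the sun's finite disk shift the observed dates slightly.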
(Historical side note: The goal of the 1811 plan was "a free and abundant circulation of air" to stave off disease. The right angles were also favored because "straight-sided and right-angled houses are the most cheap to build." The rigid Manhattan grid has been much-maligned over the last 200 years, but recently has come back into favor with city planners.)
This kind of alignment is not unique to Manhattan; any city with a uniform street grid will have dates where the sun aligns with those streets, including Chicago, Toronto and Montreal.
But Manhattan also boasts a clear view of the horizon, looking across the Hudson River toward New Jersey. Plus you've got all those tall buildings lining the streets, creating the perfect vertical frame to show the setting sun to best advantage.
Tyson has been outspoken in the past about astronomical inaccuracies in film and television. For instance, you can always spot a fake sunrise onscreen, because the sun will move up and to the left as it rises. In reality, the sun rises up and to the right, at least as seen from northern mid-latitudes. Directors tend to film a sunset for such scenes and then just run it backwards to portray a sunrise, thinking nobody will notice. (And they'd probably get away with it, too, if it weren't for those meddling astrophysicists!)
The term Manhattanhenge technically applies to the setting sun phenomenon that flanks the summer solstice, usually around May 28 and July 12, although the precise dates vary slightly year to year. A similar alignment occurs with the rising sun around the winter solstice, usually Dec. 5 and Jan. 8.
From Discovery News
This Isn’t a Spider, But It Does Have Genitals in Its Legs
The 1,300 known species of sea spider are truly ancient animals that, as far as scientists can tell, aren’t closely related to any extant species, spiders or otherwise. At the moment, though, they’re lumped in the group that holds spiders and horseshoe crabs. They have such tiny abdomens that their guts extend into their legs. Their genitals are there in the limbs too, which makes mating … interesting. And like sea horses, it’s the males that carry the young.
Sea spiders live in both deep and shallow seas around the world, but all are carnivores, through and through. They have claw-like mouthparts known as chelicerae, which spiders also have, suggesting they may belong to the same group (appropriately enough called Chelicerata). The actual feeding happens through a proboscis, a sort of tube that in some species can be longer than the rest of the sea spider’s body.
“They feed generally on things that don’t move, like sponges and corals, but also slow-moving things like worms or sea slugs,” says marine biologist Claudia Arango of the Queensland Museum in Australia. “What they do is they’ve got very sharp jaws at the tip of that tube, the proboscis, so they pierce the prey and start sucking out fluids.”
The deep and shallows are of course worlds apart as far as habitats go, so sea spider species have adapted accordingly. To find food in the blackness, the blind deep-sea varieties likely sniff out their prey’s chemical cues, while their shallow-water peers have four simple eyes. In the shallows, they also tend to be more colorful than in the deep sea, since in the darkness, flashy colors won’t do you no good nohow.
The sea spider Nymphon grossipes, which really got short-changed on the whole name thing.
Leg Genitals and Other Adventures in Sea Spider Sex
What the many species of sea spider can agree on, though, is how to have sex: namely, very acrobatically. Both males and females have genital pores in their legs, males on just their last two pairs and females on every single limb. When a couple comes together, the spindly male crawls on top of the spindly female. “So basically he climbs up and walks all over the female and then starts trying to go under the female so that both the pores come in contact,” says Arango. “The female would be standing totally normal while the male would be upside down, clinging on the female.”
Pseudopallene harrisi, from Australia. Note the claw-like chelicerae. It’s fashionably colorful not because it’s from Australia, but because that’s par for the course for shallow-water sea spiders.
Inevitably, though, the larvae must set out on their own, and some species won’t just float at the mercy of the currents. They’ll invade the bodies of other creatures on the seafloor, including bivalves, burrowing into their flesh and feeding on them and eventually killing them. Others invade the bodies of coral, stealing the nutrients that algae produce for them.
A sea spider male carries eggs with specialized limbs called ovigers.
File Sea Spiders Under: It’s Complicated
It should be clear by now that sea spiders aren’t like any other creature on Earth—not by a long shot. I mean, the body plan alone is out of control. The daddy longlegs is a lanky little thing, but the sea spider has so simplified its body plan that its abdomen is really little more than a joint for its legs, forcing its organs to flow into its limbs. Its heart is exceedingly simple, and because it lacks gills, it seems to absorb oxygen through its cuticle. And it has that bizarre proboscis, plus the males have those unique specialized arms used to hold eggs.
It all adds up to large-scale befuddlement for the folks studying them, and accordingly it’s a matter of controversy where exactly sea spiders fall in the tree of life. But Arango has an idea. “When you look at the morphology of the sea spiders, apart from a superficial resemblance to spiders, there’s lots of things that are unique,” says Arango. “In terms of giving them a place in the classification the best thing we can do, based on DNA mostly, is keep them at the base of the chelicerates.”
Anoplodactylus evansi may be Australian, but it looks a lot like a St. Louis Blues fan.
Read more at Wired Science
May 28, 2015
Dinosaurs May Have Been Warm-Blooded
Dinosaurs were warm-blooded animals that had many traits in common with mammals, finds controversial new research.
The study, published in the journal Science, counters other popular theories, which say that dinosaurs were either cold-blooded and reptile-like, or occupied a unique intermediate category of animals that were neither fully cold nor warm-blooded.
“Upon re-analysis, it was apparent that dinosaurs weren’t just somewhat like living mammals in their physiology — they fit right within our understanding of what it means to be a ‘warm-blooded’ mammal,” author Michael D’Emic, a Stony Brook University paleontologist, said in a press release.
The “re-analysis” refers to D’Emic’s work revisiting yet another Science paper, published last year, which compiled a huge dataset on growth and metabolism of hundreds of living animals. D’Emic admits that this dataset was “remarkable” and “unprecedented.”
Yet, that earlier study found that dinosaurs would have fit into the intermediate category between ectothermic (cold-blooded, with body temperature controlled largely by the surrounding environment) and endothermic (warm-blooded, like us).
D'Emic looked at that study again, focusing on two primary aspects.
First, the original study had scaled yearly growth rates to daily ones in order to standardize comparisons.
"This is problematic," D'Emic said, "because many animals do not grow continuously throughout the year, generally slowing or pausing growth during colder, drier, or otherwise more stressful seasons. Therefore, the previous study underestimated dinosaur growth rates by failing to account for their uneven growth. Like most animals, dinosaurs slowed or paused their growth annually, as shown by rings in their bones analogous to tree rings."
He added that very stressful or seasonal environments can also impact growth, and would have really affected dinosaurs. He believes that these factors were underestimated in the earlier study.
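A toy calculation (my numbers, not the paper's) shows why that averaging matters: an animal that puts on all of its annual growth during a six-month growing season has twice the daily rate that a naive year-long average suggests.

```python
# Illustrative only: seasonal growth vs. a naive year-round average.
annual_gain_kg = 10.0

naive_daily = annual_gain_kg / 365           # assumes growth all year
seasonal_daily = annual_gain_kg / (365 / 2)  # growth confined to half the year

print(round(naive_daily, 4), round(seasonal_daily, 4))  # 0.0274 vs. 0.0548 kg/day
```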
The second aspect of his re-analysis took into account the widely held view that not all dinosaurs became extinct: some evolved into birds. Today’s birds are warm-blooded, so he argues that dinosaurs must have been this way too.
"Separating what we commonly think of as 'dinosaurs' from birds in a statistical analysis is generally inappropriate, because birds are dinosaurs–they’re just the dinosaurs that haven't gone extinct," he explained.
If D'Emic's theory about warm-blooded dinosaurs holds, then this would lead to still more re-evaluation of other data on things like the big sails and head ornaments on some dinosaurs. Certain past studies have linked these to body temperature regulation. If they didn’t serve that function, then perhaps they were used more for flashy mating displays or other types of visual communication.
Holly Woodward, an assistant professor in the Center for Health Sciences at Oklahoma State University, supports re-examining prior studies, even those that appear seamless.
As she said, "D'Emic's study reveals how important access to the data behind published results is for hypothesis testing and advancing our understanding of dinosaur growth dynamics."
Read more at Discovery News
Mystery Deepens Over Rare Roman Tombstone
Mystery has deepened over a Roman tombstone unearthed earlier this year in western England, as new research revealed it had no link with the skeleton lying beneath it.
The inscribed stone was discovered during construction of a parking lot in Cirencester.
Made from Cotswold limestone, it was found lying face down in a grave — directly above an adult skeleton.
When it was turned over, the honey-colored stone revealed fine decorations and five lines of Latin inscription reading: “D.M. BODICACIA CONIUNX VIXIT ANNOS XXVII,” possibly meaning: “To the shades of the underworld, Bodicacia, spouse, lived 27 years.”
The discovery was hailed as unique, since the stone was believed to be the only tombstone from Roman Britain to record the person buried beneath it.
In fact, while the dedication on the tombstone is to a woman, the skeleton beneath it was that of a male.
It turns out the gravestone and skeleton were also laid at different times — the inscribed stone is early Roman, dating to the 2nd century A.D., while the burial was most certainly late Roman, from the 4th century A.D.
“We believe the tombstone to have been re-used as a grave cover perhaps as long as two centuries after it was first erected,” Ed McSloy, Cotswold Archaeology’s finds expert, told Discovery News.
Martin Henig and Roger Tomlin, leading experts in Roman sculpture and inscriptions at the University of Oxford, noted that the back of the stone is very roughly worked, almost unfinished, in strong contrast to the finely sculpted front.
Unlikely to have been a free-standing tombstone, the five-foot-long inscribed stone may instead have been set into a wall, possibly that of a mausoleum.
Who the grave belonged to remains a mystery.
“Reading the letters, the most plausible interpretation of the name is Bodicacia, a previously unknown Celtic name,” McSloy said.
Indeed, the name appears to be a variant of a Celtic name with the same root as Boudicca, the rebel queen of the Iceni, a British tribe, who led a failed revolt against the Romans.
Bodicacia’s tombstone was also unique. The pediment, the decorated triangular portion at the top of the stone, shows the Roman god Oceanus.
A divine personification of the sea in the classical world, the god was portrayed with a long mustache, stylized long hair, and crab-like pincers above the head.
Read more at Discovery News
Sacrificed Humans Discovered Among Prehistoric Tombs
A prehistoric cemetery containing hundreds of tombs, some of which held sacrificed humans, has been discovered near Mogou village in northwestern China.
The burials date back around 4,000 years, before writing was developed in the area. In just one archaeological field season — between August and November 2009 — almost 300 tombs were excavated, and hundreds more were found in other seasons conducted between 2008 and 2011.
The tombs were dug beneath the surface of the ground and were oriented toward the northwest. Some of the tombs had small chambers where finely crafted pottery was placed near the deceased. Archaeologists also found that mounds of sediment covered some of the tombs and may have marked their locations.
Within the tombs, archaeologists found entire families buried together, their heads also facing northwest. They were buried with a variety of goods, including necklaces, weapons and decorated pottery.
Human sacrifices were also evident in the burials. In one tomb, “the human sacrifice was placed on its side with limbs bent and its face toward the tomb chamber. The bones are relatively well preserved, and the individual’s age at death is estimated at around 13 years,” archaeologists wrote in a paper published recently in the journal Chinese Cultural Relics.
Predicting the future
The goods found in the tombs included pottery decorated with incised designs. In some cases, the potter made numerous incisions shaped like the letter “O,” with the O’s forming patterns on the vessel. Sometimes, instead of making O’s, the potter would incise wavy lines near the top of the pot.
The researchers also discovered artifacts that could have been used as weapons. Bronze sabers were found that researchers say could have been used for cutting. They also found stone mace heads. (A mace is a blunt weapon that can smash a person’s skull in.) Axes, daggers and knives were also found in the tombs.
Archaeologists also found what they call “bone divination lots,” or artifacts that could have been used in rituals aimed at predicting the future. Bone divination was practiced widely throughout the ancient world. In fact, when writing was developed in China centuries later, some of the earliest texts were written on bones used for divination.
Qijia culture
Most of the tombs belong to the Qijia culture, whose people used artifacts with similar designs and lived in the upper Yellow River valley.
“Qijia culture sites are found in a broad area along all of the upper Yellow River as well as its tributaries, the Huangshui, Daxia, Wei, Tao and western Hanshui rivers,” Chen Honghai, a professor at Northwestern University in China, wrote in a chapter of the book “A Companion to Chinese Archaeology” (Wiley, 2013).
Chen wrote that people from the Qijia culture lived in a somewhat arid area. To adjust to these conditions, the Qijia people grew millet, a cereal suited to a dry environment, and raised a variety of animals, including pigs, sheep and goats.
People from the Qijia culture lived in modest settlements (smaller than 20 acres), in houses that were often partially buried beneath the ground. “Remains of buildings are mainly square or rectangular, and they are usually semi-subterranean. The doors usually point south, identical to the current local custom of building houses, as rooms on the sunny side receive more light and warmth,” Chen wrote.
Read more at Discovery News
New Spacecraft Photos Hint at a Rich and Complex Pluto
As NASA’s New Horizons spacecraft blasts closer to Pluto at a pace of 750,000 miles per day, increasingly detailed images are beginning to come our way.
In the latest series of images beamed back to Earth from the mission’s Long-Range Reconnaissance Imager (LORRI) instrument, a complex small world is beginning to reveal itself — vast, dark regions are coming into focus. Now we are in a new regime of Plutonian discovery; these pixelated views are the highest resolution photos we have ever seen of the dwarf planet.
“As New Horizons closes in on Pluto, it’s transforming from a point of light to a planetary object of intense interest,” said NASA’s Director of Planetary Science Jim Green in a news release today (Wednesday). “We’re in for an exciting ride for the next seven weeks.”
“These new images show us that Pluto’s differing faces are each distinct; likely hinting at what may be very complex surface geology or variations in surface composition from place to place,” said New Horizons Principal Investigator Alan Stern of the Southwest Research Institute in Boulder, Colo. “These images also continue to support the hypothesis that Pluto has a polar cap whose extent varies with longitude; we’ll be able to make a definitive determination of the polar bright region’s iciness when we get compositional spectroscopy of that region in July.”
In less than two months, New Horizons will make its close encounter with Pluto, and as these gradually sharpening images show, Pluto seems likely to be a rich and complex place.
“By late June the image resolution will be four times better than the images made May 8-12, and by the time of closest approach, we expect to obtain images with more than 5,000 times the current resolution,” added Hal Weaver, New Horizons project scientist at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Md.
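That resolution estimate follows from simple geometry: for a fixed camera, the smallest resolvable surface feature scales linearly with range, so the improvement factor is just the ratio of distances. A rough check (the distances here are my assumptions, not figures from the article):

```python
# Back-of-envelope check of the quoted resolution gain.
range_mid_may_km = 75e6    # assumed Pluto range in mid-May 2015
range_closest_km = 12_500  # approximate planned closest-approach distance

print(f"{range_mid_may_km / range_closest_km:,.0f}x")
# ~6,000x -- consistent with "more than 5,000 times the current resolution"
```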
These latest photos are derived through a method known as “image deconvolution,” which allows us to see broad differences in surface morphology. The changes in surface albedo are likely real features, revealing the (possible) bright polar ice caps and vast regions that appear to absorb more light. However, some of the image processing can generate spurious details that will disappear in later observations.
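The article doesn't say which deconvolution algorithm the team used; Richardson-Lucy is one common choice, sketched here on synthetic data with scikit-image purely for illustration (this is not the New Horizons pipeline):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

truth = np.zeros((64, 64))
truth[24:40, 24:40] = 1.0                      # a bright "surface feature"
psf = np.ones((5, 5)) / 25.0                   # crude stand-in for the camera's point-spread function
blurred = convolve2d(truth, psf, mode="same")  # simulate a soft raw image

restored = richardson_lucy(blurred, psf)       # iteratively sharpen
```

Iterating this kind of sharpening too aggressively amplifies noise into spurious detail, matching the caveat above about features that may vanish in later, closer observations.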
Read more at Discovery News
May 27, 2015
CIA Stops Sharing Climate Change Info With Scientists
In a recent speech, President Obama proclaimed that climate change “constitutes a serious threat to global security (and) an immediate risk to our national security,” and warned that it actually could exacerbate other menaces, such as terrorism and political instability.
“Severe drought helped to create the instability in Nigeria that was exploited by the terrorist group Boko Haram,” Obama said. “It’s now believed that drought and crop failures and high food prices helped fuel the early unrest in Syria, which descended into civil war in the heart of the Middle East.”
But even as the White House is affirming its focus, the CIA reportedly is ending a key program that shared the agency’s climate change data — some of it gathered by surveillance satellites and other clandestine sources.
Investigative magazine Mother Jones broke the story last week that the intelligence agency is shutting down the Measurements of Earth Data for Environmental Analysis program. MEDEA allowed a select group of scientists access to classified information about climate change. Mother Jones said that the data included not only satellite observations, but also ocean temperature and tidal readings gathered by U.S. Navy submarines.
The CIA began gathering climate data for global security purposes during the Cold War, when it tracked the effect of climate change on Soviet grain harvests. According to one document mentioning MEDEA on the CIA website, the program was created in the early 1990s, in part through the efforts of then-U.S. Sen. Al Gore (D-Tenn.), as part of an effort to share intelligence related to environmental problems. It included about 60 scientists with security clearances. The researchers found, among other things, that “historical imagery from our early satellite systems could provide a more accurate picture of climate change over time.”
In a 1996 speech, then-CIA director John Deutch said that MEDEA “will give scientists an ongoing record of changes in the earth that will improve their understanding of environmental processes. More importantly, it will greatly enhance their ability to provide strategic warning of potentially catastrophic threats to the health and welfare of our citizens.”
In the early 2000s, MEDEA was shut down by President George W. Bush, who initially took the position, contrary to the great majority of climate scientists, that it was unclear whether human activity was driving global warming. (He eventually changed his view.) The program was revived in 2010 by Obama.
The New York Times reported in 2010 that the shared CIA data reportedly included a trove of images of Arctic sea ice, which enabled scientists to distinguish summer melts from longer-term climate trends.
University of Washington scientist Norbert Untersteiner, one of the scientists given access to the CIA’s climate data, told The New York Times that the intelligence data was “really useful.”
Though Republicans in Congress have long criticized the use of intelligence resources to study climate change, the reasons for shutting down MEDEA remain murky.
Read more at Discovery News
Modern Human Leg Mummified Using Ancient Egyptian Methods
The ancient Egyptians famously mummified the dead to preserve their loved ones in perpetuity, and now, scientists have mummified fresh tissue from a human corpse to gain insight into these ancient preservation techniques.
The team adhered to ancient Egyptian techniques to mummify part of the human body, which had been donated to science. They placed the tissue in a salt solution, and measured the progress of preservation using state-of-the-art microscopy and imaging techniques.
The findings, detailed Friday (May 22) in the journal The Anatomical Record, give researchers some fascinating new clues about the ancient Egyptian embalming process.
“We wanted to have an evidence-based methodology” for understanding what the mummification process looked like, said Christina Papageorgopoulou, one of the researchers on the new study and a physical anthropologist at the Democritus University of Thrace in Greece. “The only way you can do this is by the experiment yourself.”
Making a mummy
Most of what scientists know about ancient Egyptian mummification comes from the Greek historian Herodotus, who lived during the fifth century B.C. First, embalmers would have removed the dead individual’s organs — including the brain, which would be extracted through the nose. Then, they would sterilize the chest and abdominal cavities, before placing the body in a salty fluid containing natron — a mixture of soda ash and sodium bicarbonate — which would drain the bodily fluids and prevent the body from rotting. Finally, they would swaddle the body in strips of linen and bury it in a tomb or grave.
Some studies have attempted to use these techniques to mummify animals or human organs, and there have been one or two attempts to mummify a complete human body. But the process had never been studied using modern scientific techniques while the mummification was in progress.
In this new study, Papageorgopoulou and her colleagues used the Egyptian salt-based preservation method to mummify the leg of a female human body that had been donated to the University of Zurich in Switzerland, where the experiment was conducted. “If we used the whole body, we would have had to cut it up and take out the intestines [and other organs],” Papageorgopoulou told Live Science.
For comparison, they also attempted to mummify a limb “naturally,” using dry heat, but that attempt failed and was stopped after a week.
The researchers took samples of the tissue every two to three days and examined them using a variety of methods: the naked eye, microscopy, DNA analysis, and X-ray imaging.
Ancient process revealed
For the most part, the mummification was successful, but it took nearly seven months (208 days), which is much longer than the two months the ancient Egyptian method took, according to Herodotus. (Other accounts report that it took even less time.)
“We were not so quick like the ancient Egyptians,” Papageorgopoulou said. She suspects the cooler, damper conditions in the lab in Zurich, compared with the arid environment of ancient Egypt, may explain the discrepancy.
The salt solution effectively removed the water from the leg tissue, which prevented bacteria and fungi from degrading it. The microscopic analysis revealed good preservation of the skin and muscle tissue, as well.
Read more at Discovery News
Gory Remains of First Known Murdered Human Found
The first known murder in human history took place 430,000 years ago in Spain, suggests a new study that describes the mutilated remains of the victim.
While the study, published in the latest issue of the journal PLOS ONE, does not specify the species of human, the site and the time of the likely crime indicate that the first known murder victim was a proto-Neanderthal, meaning an early member of the Neanderthal lineage.
The victim’s skull appears to have been bashed twice, leading to his or her demise.
“The type of injuries, their location, the strong similarity of the fractures in shape and size, and the different orientations and implied trajectories of the two fractures suggest they were produced with the same object in face-to-face interpersonal conflict,” lead author Nohemi Sala and colleagues write.
“Given that either of the two traumatic events was likely lethal, the presence of multiple blows implies an intention to kill,” they added.
Sala, a researcher at Centro Mixto UCM-ISCIII de Evolución y Comportamiento Humanos in Madrid, and an international team came to this conclusion after studying the skull in detail. It was unearthed at a well-known Spanish site called Sima de los Huesos (SH) where at least 28 Neanderthals and proto-Neanderthals have been found.
During the time that bones began to accumulate at the site, the only possible access route to the place was “through a deep vertical chimney,” the authors said.
The origin of the accumulation has been hotly debated, with four different theories proposed: (1) non-human carnivores dragged their prey there; (2) geological activity somehow led to the accumulation; (3) individuals fell in accidentally; or (4) early humans intentionally deposited bodies there.
Could the victim described in the study have tripped and fallen down the shaft, hitting his or her head a couple of times on the way down?
The scientists reject that remote possibility.
They explained that “any scenario related to the free-fall would require the highly improbable occurrence of the same object striking the skull twice.”
The orientation of the blows, inferred from damage to the skull, further suggests that someone wielding an object struck the head twice with near-equal force, resulting in two fractures that would have penetrated the bone-brain barrier. Sala and colleagues therefore believe that the victim “did not survive these cranial traumatic events.”
Then there’s the question of how the victim’s remains ended up down the shaft. The researchers propose that the site was reserved for the disposal of the dead.
Read more at Discovery News
Big-Toothed Prehistoric Human Lived Alongside 'Lucy'
The human family tree has a new member: a big-toothed early human ancestor that lived in Ethiopia 3.3 to 3.5 million years ago, according to a paper in the latest issue of the journal Nature.
The newly discovered human, named Australopithecus deyiremeda, overlapped in both time and region with yet another prehistoric human, "Lucy" (Australopithecus afarensis). The find is strong evidence that more than one closely related early human ancestor species existed prior to 3 million years ago.
Because this particular species' teeth had super thick enamel and its jaws were built to last, "it probably engaged in heavy chewing and consumed harder, tougher, and more abrasive dietary resources," lead author Yohannes Haile-Selassie, curator of physical anthropology at The Cleveland Museum of Natural History, told Discovery News.
Haile-Selassie and his team analyzed the remains, which were collected from the central Afar region of Ethiopia. Combined results from three different established dating methods yielded the estimated age of the fossils.
The location where the fossils were found is just 21.8 miles south of where "Lucy" lived in Hadar, Ethiopia. The researchers think the species could have lived in even closer proximity. The two might have co-existed, which would be like Homo sapiens now sharing turf with another type of human.
While such a scenario today seems unfathomable for humans, it was likely common back in the day.
As for why, Haile-Selassie said, "There is some evidence that during the middle Pliocene time period (about 3.3 million years ago) when multiple hominins (early humans) were running around, there were rivers supporting gallery forests that laterally extend to woodlands and grasslands."
"It was an ecosystem that supported numerous and varied habitats," he continued. "With niche partitioning, a number of related taxa can co-exist in such an ecosystem."
Anthropologists debate which species is the stem of Homo sapiens, meaning the one positioned at the very bottom of the human family tree. Candidates now include two other early humans, Ardipithecus ramidus and Australopithecus anamensis, but the stem could also be an as-yet-undiscovered species.
Whichever it was, the stem species, via evolution and divergence, could have later resulted in more than 20 different types of humans, including Homo sapiens.
In terms of what happened to this crowded field of early relatives, Haile-Selassie said, "Some of them gradually gave rise to a new form and went extinct. Others went extinct with no descendants."
He added, "When the environment changes and their habitat diminished, or when they faced harsh competition with others for resources and failed to be good competitors, they had no alternative but extinction."
The fossil evidence for the earliest anatomically modern humans comes from Ethiopia, so it cannot be ruled out that A. deyiremeda and/or "Lucy" is our direct ancestor.
Read more at Discovery News
May 26, 2015
World Already Reaping Benefits From Ozone Treaty
The UN treaty to protect the ozone layer has prevented a likely surge in skin cancer in Australia, New Zealand and northern Europe, a study published on Tuesday said.
If the 1987 Montreal Protocol had never been signed, the ozone hole over Antarctica would have grown in size by 40 percent by 2013, it said.
Ultra-violet levels in Australia and New Zealand, which currently have the highest mortality rates from skin cancer, could have risen by between eight and 12 percent.
In northern Europe, depletion of the ozone layer over the Arctic could have boosted ultra-violet levels in Scandinavia and Britain by more than 14 percent, it said.
"Our research confirms the importance of the Montreal Protocol and shows that we have already had real benefits," said Martyn Chipperfield, a professor at Britain's University of Leeds who led the study.
"We knew that it would save us from large ozone loss 'in the future,' but in fact we are already past the point when things would have become noticeably worse."
The Protocol commits all UN members to scrapping a group of chlorine- and bromine-containing chemicals.
Used in aerosol sprays, solvents and refrigerants, these substances destroy ozone molecules in the stratosphere that filter out cancer-causing ultra-violet light.
The authors of the paper, published in the journal Nature Communications, built a 3-D computer model based on the latest data about the state of the stratosphere.
Concentrations of ozone-depleting gases are now about 10 percent below their peak of 1993, although it will take until around 2050 before the ozone hole over Antarctica shrinks to its 1980 state.
Translating rises in ultra-violet levels into increases in skin cancer is hard to quantify, but "changes as large as these would have had potentially serious consequences in the decades that followed," said the paper.
Previous research suggests every 5-percent rise in ultra-violet leads to increases of 15 percent and 8 percent, respectively, in the incidence of squamous and basal cell carcinoma, the two most common forms of skin cancer.
That calculation is based on the absence of additional measures to protect the public from damaging rays.
But the impact on melanoma, a rarer but deadlier type of skin cancer, has never been measured.
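As a rough illustration of how the quoted dose-response combines with the counterfactual UV rises above, here is a back-of-envelope sketch in Python. It assumes the response scales linearly with the size of the UV rise and ignores melanoma entirely, since that relationship has not been measured; the numbers are a reader's arithmetic, not the study's projections.

    # Back-of-envelope scaling of the dose-response figures quoted above;
    # assumes a linear response and no extra sun protection.

    def carcinoma_increase(uv_rise_pct: float) -> dict:
        """Estimated % increase in incidence for a given % rise in UV,
        using the article's figures: each 5% UV rise -> ~15% more squamous
        and ~8% more basal cell carcinoma."""
        per_5pct_rise = {"squamous": 15.0, "basal": 8.0}
        factor = uv_rise_pct / 5.0
        return {kind: rate * factor for kind, rate in per_5pct_rise.items()}

    # The counterfactual 8-12% UV rise projected for Australia/New Zealand:
    print(carcinoma_increase(8))   # {'squamous': 24.0, 'basal': 12.8}
    print(carcinoma_increase(12))  # {'squamous': 36.0, 'basal': 19.2}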
Read more at Discovery News
When Will We See an Actual Dino-Chicken?
Talk of a “chickenosaurus” lit up the science world last week when researchers announced they had modified the beak of a chicken embryo to resemble the snout of its dinosaur ancestors. But although some experts have lauded the feat, a beak is just one of many modifications needed to revert a chicken into a dinosaur.
Given these obstacles, how close are scientists to creating a dino-chicken?
“From a quantitative point of view, we’re 50 percent there,” said Jack Horner, a professor of paleontology at Montana State University and a curator of paleontology at the Museum of the Rockies.
Horner has long supported the idea of modifying a chicken to look like a dinosaur, and unlike the researchers on the latest study, he actually wants to raise a live one. And why stop there? By understanding how and when to modify certain molecular mechanisms, countless changes could be within reach. As Horner pointed out, a glow-in-the-dark unicorn is not out of the question.
There are four major modifications needed to make a so-called chickenosaurus, Horner said. To turn a chicken into a dinosaurlike beast, scientists would have to give it teeth and a long tail, and revert its wings back into arms and hands.
The creature would also need a modified mouth — a feat accomplished by the researchers who did this latest study, he said.
“This dino-chicken project — we can liken it to the moon project,” Horner told Live Science. “We know we can do it; it’s just there are … some huge hurdles.”
Challenges ahead
One of those “huge hurdles” was cleared in the latest study, published May 12 in the journal Evolution, in which researchers turned chicken beaks into dino snouts. But even that seemingly small step involved seven years of work. First, the researchers studied beak development in the embryos of chickens and emus, and snout development in the embryos of turtles, alligators and lizards.
It’s likely that millions of years ago, birds and reptiles had similar developmental pathways that gave them snouts, but over time, molecular changes led to the development of beaks in birds, the researchers said.
It’s difficult for scientists to get embryos of present-day animals, such as crocodiles, to compare because they have to find farms that raise them. And then, the molecular work — determining exactly which developmental pathways are different, how they’re different and what controls them — can take “countless hours and hundreds of experiments for a few successful ones,” said the study’s lead researcher, Bhart-Anjan Bhullar, a paleontologist and developmental biologist currently at the University of Chicago and cross-appointed at Yale University, where he will be starting as full-time faculty. “It’s kind of the same as fossil finding.”
For their “fossil finding,” the researchers needed an extensive fossil record of birds and their ancestors to see what birds looked like at different stages of their evolution.
“You have to understand what you’re tracing before you try to trace it,” Bhullar told Live Science.
Bhullar; his doctoral advisor Arkhat Abzhanov, a developmental biologist at Harvard University; and their teammates focused on two genes that are active in facial development. Each gene encodes a protein, and the proteins — which carry out the work of genes — showed different activities in modern-day chicken and reptile embryonic development, the researchers found. When the researchers blocked the activity of these two proteins in chickens, the birds developed structures that resembled snouts, not beaks.
Unexpected find
And then there’s the unexpected finding that revealed the complex task at hand: When the group transformed the beaks of chicken embryos into snouts, they also inadvertently changed the chicken’s palate, or roof of the mouth.
The palates of the altered embryos were broad and flat, and connected “to the rest of the skull in a way that ancestral reptiles’ palatines did, but bird palatines do not,” Bhullar said.
In normal birds, by contrast, “the palatine bone is really long and thin, and it’s not very connected with other bones of the skull,” Bhullar said.
In fact, birds can lift up their top jaw independently of their lower jaw — an ability not seen in most other vertebrates.
So, by changing the beak, the researchers also changed the palate. When the researchers went back to the fossil record, they found that the snout and palatine bone appeared to change together throughout evolution. For instance, an 85-million-year-old fossil of a birdlike creature that had teeth and a primitive beak also had a birdlike palate, they said.
However, in an even older fossil, the palatine was not transformed, and neither was the beak, Bhullar said.
“Part of that is verifying experimentally whether the molecular changes we see are actually able to change the anatomy in the ways we predicted,” Bhullar said. “In a way, that recapitulates the change we see in the fossil record.”
But his goal “is simply to understand, in as a deep a way as possible, the molecular mechanisms behind major evolutionary transitions,” he said. He’s not interested in making “a more nonavian, dinosaurlike bird.”
Will it work?
But Horner is interested in making a so-called chickenosaurus. His group is currently working on giving the chicken a long tail — arguably the most complex part of making a dino-chicken, he said. For instance, they just screened genes in mice to determine what types of genetic pathways block tail development. This knowledge could help them figure out how to switch on tail growth, he said.
But it remains to be seen how chickens would react to tails, arms, fingers and teeth, Bhullar said.
"Just because you changed one part doesn't mean that the animal will be able to use it or be able to use it correctly," he said. "You could perhaps give a chicken fingers, but if the fingers don't have the right muscles on them, or if the nervous system and the brain are not properly wired to deal with a hand that has separate digits, then you may have to do a considerable amount of additional engineering."
On the other hand, chickens may be resilient creatures. "People also sometimes underestimate plasticity of the body," Bhullar said. "It's amazing how much compensation goes on, and the nervous system, in particular, is very plastic."
Bhullar said that, if dinosaur-like features, such as a snout and teeth, were to be restored, he wonders "whether the brain wouldn't rewire itself in some way that would permit these animals to use these features."
Read more at Discovery News
Dark Side of Medieval Convent Life Revealed
British archaeologists excavating a church site in Oxford have brought to light the darker side of medieval convent life, revealing skeletons of nuns who died in disgrace after being accused of immoral behavior.
Discovered ahead of the construction of a new hotel, the burial ground stretches around what used to be Littlemore Priory, a nunnery founded in 1110 and dissolved in 1525.
Archaeologists led by Paul Murray, of John Moore Heritage Services, found 92 skeletons of women, men and children.
“Burials within the church are likely to represent wealthy or eminent individuals, nuns and prioresses,” Murray said in a statement.
“Those buried outside most likely represent the laity and a general desire to be buried as close to the religious heart of the church as possible,” he added.
Females made up the majority of the burials, at 35, with males accounting for 28; it was impossible to determine the sex of the remaining 29.
Among the burials, the archaeologists unearthed a female aged 45 or more who was likely one of the 20 women who held the position of prioress throughout the history of the priory.
She was interred at the exact center of the crossing in a well-constructed stone coffin with a head niche.
Some skeletons showed signs of debilitating ailments, such as two children who suffered from developmental dysplasia of the hip.
“This would have resulted in reduced length of the leg and therefore a severe limp and perhaps needing the use of a crutch,” Murray said.
One of the buried individuals possibly had leprosy, while another skeleton showed signs of blunt force trauma to the skull, likely the cause of death.
Other unusual burials included a stillborn baby in a well-made casket, and a woman buried in a face down position.
“This was perhaps a penitential act to atone for her sins,” Murray said.
The woman may have been one of the nuns Cardinal Wolsey accused of immoral behavior when he closed down the nunnery.
Indeed, the last prioress, Katherine Wells (who held the post from 1507 to 1518), was deposed as punishment for a number of misdeeds, such as giving birth to an illegitimate child fathered by a priest from Kent, and stealing things belonging to the monastery — pots, pens and candlesticks — to provide a dowry for her daughter.
According to accounts taken after bishop Atwater’s visitations in 1517 and 1518, another nun had an illegitimate child by a married man of Oxford.
Life at the nunnery could be severe, records show, with the prioress often putting the nuns into the stocks and beating them “with fists and feet.”
When the bishop visited the nunnery again in 1518, the prioress complained that one of the nuns “played and romped” with boys in the cloister and refused to be corrected.
The story goes that when the nun was put in the stocks, she was rescued by three other nuns, who broke down the door, burnt the stocks and broke a window to escape to friends, with whom they remained for two or three weeks.
Wells appears to have regained her position later in 1518 as no other prioresses are recorded after this date. It is likely she remained at the priory for a further seven years until its dissolution in 1525.
According to Murray, the bishop’s reports are certainly tainted to at least some degree and were used to justify Cardinal Wolsey’s desire to dissolve the nunnery and use its revenues to fund Cardinal College, now Christ Church, traditionally considered one of Oxford’s most aristocratic colleges.
“The complaints made about the nuns when they ‘played and romped’ with boys in the cloister, and their refusal to be corrected, perhaps reveal something about the nuns’ caring nature and an element of free spirit,” he added.
Read more at Discovery News
Red Dwarf Stars Probably Not Friendly for Earth 2.0
Red dwarf stars — the stellar runts of the galaxy — probably aren’t so great for nurturing Earth-like worlds, say scientists running new simulations of the formation of planets around a variety of stars.
Astronomers have discovered a huge variety of alien worlds orbiting all types of stars, but one type of star, the M dwarf, has stood out as a location where Earth-like exoplanets could be nurtured. Red dwarfs are plentiful in our galaxy, and many nearby red dwarfs are known to play host to small and (likely) rocky exoplanets. Red dwarfs are also long-lived, so should a suitable rocky world orbit within a star system’s habitable zone, surely there’s a good chance for life to evolve?
Unfortunately, there are some limitations to this thinking. Because the habitable zone around M dwarfs is extremely compact (these stars are smaller and therefore dimmer than our sun), any potentially habitable world would need to orbit a red dwarf very closely. In these cases, the rocky world will likely become “tidally locked” with the star, forcing one hemisphere to endure an eternal day while the other freezes in an eternal night — certainly not very “Earth-like” in the classical sense.
Also, many of these dwarfs are thought to be tumultuous little stars, erupting with powerful flares that could extinguish any form of biology (as we know it) before it can get a foothold. Already red dwarfs are looking a little unlikely as hosts for bona fide Earth-like worlds.
But since the vast majority of stars in our galaxy are red dwarfs, surely, by sheer weight of numbers, at least a few of these systems must have formed with the right balance of planetary material and well-placed orbits for worlds similar to Earth to coalesce?
According to new research from Tokyo Institute of Technology and Tsinghua University, not so much.
The researchers, Shigeru Ida (Tokyo Tech) and Feng Tian (Tsinghua), first defined what “Earth-like” means. To meet this classification, an alien world needs to be of approximately the same mass and physical size as Earth. It also needs to have a similar water:land mass ratio. Exoplanets with too little water are considered “dune worlds” — too dry for Earth-like biology to take hold — and those with too much water are known as “water worlds,” with chaotic climates and poor nutrient supply.
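That three-way classification is simple enough to state in code. The sketch below is a toy version in Python; the cutoff values are placeholders for illustration, not thresholds taken from the study.

    # Toy classifier for the three categories described above. The cutoffs
    # are illustrative placeholders, NOT thresholds from Ida and Tian's paper.

    def classify_world(water_ratio_vs_earth: float,
                       dune_cutoff: float = 0.1,
                       water_cutoff: float = 10.0) -> str:
        """Classify a rocky planet by its water:land mass ratio relative to Earth's."""
        if water_ratio_vs_earth < dune_cutoff:
            return "dune world"    # too dry for Earth-like biology
        if water_ratio_vs_earth > water_cutoff:
            return "water world"   # chaotic climate, poor nutrient supply
        return "Earth-like"

    print(classify_world(0.01))  # dune world
    print(classify_world(1.0))   # Earth-like
    print(classify_world(50.0))  # water world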
Ida and Tian found that, for an Earth-like exoplanet, with just the right water:land ratio to form around an M dwarf, there needs to be an extremely unlikely balance of planetary material very early in that world’s formative years.
Red dwarfs are extremely luminous during their pre-main-sequence phase, blasting out vast quantities of energy before quickly settling down into a more quiescent, cooler state. For any would-be Earth-like exoplanet forming in a habitable orbit, the early years of a red dwarf’s life are hard: due to their close proximity, these worlds would be cooked, with extreme implications for their future habitability.
After the researchers ran their simulation for 1,000 stars of 30 percent the mass of our sun (a typical M dwarf mass), 5,000 exoplanets with Earth-like masses formed. However, only 55 of those planets formed within the stars’ habitable zones, and only one formed with the “perfect” Earth-like water:land ratio; 31 of the habitable-zone planets turned into “dune worlds” and 23 turned into “water worlds.”
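For a sense of scale, the reported counts work out as follows (a reader's sanity check, not the authors' code):

    # Quick arithmetic on the simulation counts reported above.

    earth_mass_planets = 5000
    habitable_zone = 55
    perfect_water_land = 1
    dune_worlds = 31
    water_worlds = 23

    print(f"habitable-zone fraction: {habitable_zone / earth_mass_planets:.1%}")            # 1.1%
    print(f"Earth-like fraction: {perfect_water_land / earth_mass_planets:.2%}")            # 0.02%
    print(f"HZ planets that end up dune or water worlds: "
          f"{(dune_worlds + water_worlds) / habitable_zone:.0%}")                           # 98%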
So far, things are looking bleak for red dwarfs hosting Earth-like planets, but the story changes as the stars become more massive.
Read more at Discovery News
May 25, 2015
One step closer to a single-molecule device
Under the direction of Latha Venkataraman, associate professor of applied physics at Columbia Engineering, researchers have designed a new technique to create a single-molecule diode, and, in doing so, they have developed molecular diodes that perform 50 times better than all prior designs. Venkataraman's group is the first to develop a single-molecule diode that may have real-world technological applications for nanoscale devices. Their paper, "Single-Molecule Diodes with High On-Off Ratios through Environmental Control," is published May 25 in Nature Nanotechnology.
"Our new approach created a single-molecule diode that has a high (>250) rectification and a high "on" current (~ 0.1 micro Amps)," says Venkataraman. "Constructing a device where the active elements are only a single molecule has long been a tantalizing dream in nanoscience. This goal, which has been the 'holy grail' of molecular electronics ever since its inception with Aviram and Ratner's 1974 seminal paper, represents the ultimate in functional miniaturization that can be achieved for an electronic device."
With electronic devices becoming smaller every day, the field of molecular electronics has become ever more critical in solving the problem of further miniaturization, and single molecules represent the limit of miniaturization. The idea of creating a single-molecule diode was suggested by Arieh Aviram and Mark Ratner, who theorized in 1974 that a molecule could act as a rectifier, a one-way conductor of electric current. Researchers have since been exploring the charge-transport properties of molecules. They have shown that single molecules attached to metal electrodes (single-molecule junctions) can be made to act as a variety of circuit elements, including resistors, switches, transistors, and, indeed, diodes. They have learned that it is possible to see quantum mechanical effects, such as interference, manifest in the conductance properties of molecular junctions.
Since a diode acts as an electricity valve, its structure needs to be asymmetric so that electricity flowing in one direction experiences a different environment than electricity flowing in the other direction. In order to develop a single-molecule diode, researchers have simply designed molecules that have asymmetric structures.
"While such asymmetric molecules do indeed display some diode-like properties, they are not effective," explains Brian Capozzi, a PhD student working with Venkataraman and lead author of the paper. "A well-designed diode should only allow current to flow in one direction -- the 'on' direction -- and it should allow a lot of current to flow in that direction. Asymmetric molecular designs have typically suffered from very low current flow in both 'on' and 'off' directions, and the ratio of current flow in the two has typically been low. Ideally, the ratio of 'on' current to 'off' current, the rectification ratio, should be very high."
In order to overcome the issues associated with asymmetric molecular design, Venkataraman and her colleagues -- Chemistry Assistant Professor Luis Campos' group at Columbia and Jeffrey Neaton's group at the Molecular Foundry at UC Berkeley -- focused on developing an asymmetry in the environment around the molecular junction. They created an environmental asymmetry through a rather simple method -- they surrounded the active molecule with an ionic solution and used gold metal electrodes of different sizes to contact the molecule.
Their results achieved rectification ratios as high as 250 -- 50 times higher than earlier designs. The "on" current flow in their devices can be more than 0.1 microamps, which, Venkataraman notes, is a lot of current to be passing through a single molecule. And, because this new technique is so easily implemented, it can be applied to nanoscale devices of all types, including those made with graphene electrodes.
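To see how the headline figures fit together: a diode's rectification ratio is simply the 'on' current divided by the 'off' current at the same bias magnitude. A minimal sketch, pairing the reported ~0.1-microamp 'on' current with a hypothetical 'off' current chosen to give a ratio of 250:

    # Illustrative arithmetic only. The off-current below is a hypothetical
    # value chosen so the reported numbers are mutually consistent; it is
    # not a figure from the paper.

    def rectification_ratio(i_on_amps: float, i_off_amps: float) -> float:
        return abs(i_on_amps) / abs(i_off_amps)

    i_on = 0.1e-6   # ~0.1 microamps, as reported
    i_off = 0.4e-9  # hypothetical: ~0.4 nanoamps
    print(rectification_ratio(i_on, i_off))  # 250.0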
Read more at Science Daily
"Our new approach created a single-molecule diode that has a high (>250) rectification and a high "on" current (~ 0.1 micro Amps)," says Venkataraman. "Constructing a device where the active elements are only a single molecule has long been a tantalizing dream in nanoscience. This goal, which has been the 'holy grail' of molecular electronics ever since its inception with Aviram and Ratner's 1974 seminal paper, represents the ultimate in functional miniaturization that can be achieved for an electronic device."
With electronic devices becoming smaller every day, the field of molecular electronics has become ever more critical in solving the problem of further miniaturization, and single molecules represent the limit of miniaturization. The idea of creating a single-molecule diode was suggested by Arieh Aviram and Mark Ratner who theorized in 1974 that a molecule could act as a rectifier, a one-way conductor of electric current. Researchers have since been exploring the charge-transport properties of molecules. They have shown that single-molecules attached to metal electrodes (single-molecule junctions) can be made to act as a variety of circuit elements, including resistors, switches, transistors, and, indeed, diodes. They have learned that it is possible to see quantum mechanical effects, such as interference, manifest in the conductance properties of molecular junctions.
Since a diode acts as an electricity valve, its structure needs to be asymmetric so that electricity flowing in one direction experiences a different environment than electricity flowing in the other direction. In order to develop a single-molecule diode, researchers have simply designed molecules that have asymmetric structures.
"While such asymmetric molecules do indeed display some diode-like properties, they are not effective," explains Brian Capozzi, a PhD student working with Venkataraman and lead author of the paper. "A well-designed diode should only allow current to flow in one direction -- the 'on' direction -- and it should allow a lot of current to flow in that direction. Asymmetric molecular designs have typically suffered from very low current flow in both 'on' and 'off' directions, and the ratio of current flow in the two has typically been low. Ideally, the ratio of 'on' current to 'off' current, the rectification ratio, should be very high."
In order to overcome the issues associated with asymmetric molecular design, Venkataraman and her colleagues -- Chemistry Assistant Professor Luis Campos' group at Columbia and Jeffrey Neaton's group at the Molecular Foundry at UC Berkeley -- focused on developing an asymmetry in the environment around the molecular junction. They created an environmental asymmetry through a rather simple method -- they surrounded the active molecule with an ionic solution and used gold metal electrodes of different sizes to contact the molecule.
Their results achieved rectification ratios as high as 250: 50 times higher than earlier designs. The "on" current flow in their devices can be more than 0.1 microamps, which, Venkataraman notes, is a lot of current to be passing through a single-molecule. And, because this new technique is so easily implemented, it can be applied to all nanoscale devices of all types, including those that are made with graphene electrodes.
Read more at Science Daily
Researchers find the 'key' to quantum network solution
Scientists at the University of York's Centre for Quantum Technology have made an important step in establishing scalable and secure high rate quantum networks.
Working with colleagues at the Technical University of Denmark (DTU), the Massachusetts Institute of Technology (MIT) and the University of Toronto, they have developed a protocol that achieves key rates at metropolitan distances three orders of magnitude higher than previously demonstrated.
Standard protocols of Quantum Key Distribution (QKD) exploit random sequences of quantum bits (qubits) to distribute secret keys in a completely secure fashion. Once these keys are shared by two remote parties, they can communicate confidentially by encrypting and decrypting binary messages. The security of the scheme relies on one of the most fundamental laws of quantum physics, the uncertainty principle.
Today's classical communications by email or phone are vulnerable to eavesdroppers, but quantum communications encoded on single particles (photons) make eavesdropping easy to detect, because an eavesdropper invariably disrupts or perturbs the quantum signal. By making quantum measurements, two remote parties can estimate how much information an eavesdropper is stealing from the channel and can apply suitable privacy-amplification protocols to negate the effects of the information loss.
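The detection principle is easiest to see in the textbook single-photon scheme (BB84). The toy simulation below is not the continuous-variable protocol developed in this study, only the standard qubit-based idea it builds on: an intercept-resend eavesdropper pushes the error rate in the sifted key from roughly 0 percent to roughly 25 percent.

    import random

    # Toy intercept-resend simulation. An eavesdropper who measures each
    # photon and resends it unavoidably corrupts ~25% of the sifted key,
    # which the two parties can detect by comparing a sample of their bits.

    def sifted_error_rate(n: int, eavesdrop: bool) -> float:
        kept = errors = 0
        for _ in range(n):
            alice_bit = random.randint(0, 1)
            alice_basis = random.randint(0, 1)
            bit, basis = alice_bit, alice_basis   # state of the photon in flight
            if eavesdrop:
                eve_basis = random.randint(0, 1)
                if eve_basis != basis:
                    bit = random.randint(0, 1)    # wrong-basis measurement randomizes
                basis = eve_basis                 # Eve resends in her own basis
            bob_basis = random.randint(0, 1)
            if bob_basis != basis:
                bit = random.randint(0, 1)        # mismatched measurement randomizes
            if bob_basis == alice_basis:          # sifting: keep matching-basis rounds
                kept += 1
                errors += bit != alice_bit
        return errors / kept

    print(sifted_error_rate(100_000, eavesdrop=False))  # ~0.00
    print(sifted_error_rate(100_000, eavesdrop=True))   # ~0.25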
However, the problem with QKD protocols based on simple quantum systems, such as single-photon qubits, is their low key rate, despite their effectiveness over long distances. This makes them unsuitable for metropolitan networks.
The team, led by Dr Stefano Pirandola, of the Department of Computer Science at York, overcame this problem, both theoretically and experimentally, using continuous-variable quantum systems. These allow the parallel transmission of many qubits of information while retaining the quantum capability of detecting and defeating eavesdroppers. The research is published in Nature Photonics.
Dr Pirandola said: "You want a high rate and a fast connection particularly for systems that serve a metropolitan area. You have to transmit a lot of information in the fastest possible way; essentially you need a quantum equivalent of broadband."
Read more at Science Daily
Laser technique for low-cost self-assembly of nanostructures
Researchers from Swinburne University of Technology and the University of Science and Technology of China have developed a low-cost technique that holds promise for a range of scientific and technological applications.
They have combined laser printing and capillary force to build complex, self-assembling microstructures using a technique called laser printing capillary-assisted self-assembly (LPCS).
This type of self-assembly is seen in nature, such as in gecko feet and the Salvinia leaf, and scientists have been trying to mimic these multi-functional structures for decades.
The researchers have found they can control capillary force -- the tendency of a liquid to rise in narrow tubes or be drawn into small openings -- by changing the surface structure of a material.
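The classic textbook expression for this effect is Jurin's law, h = 2γ·cos(θ)/(ρgr), where γ is the liquid's surface tension, θ the contact angle, ρ the density, g gravitational acceleration and r the tube radius. The short sketch below plugs in generic values for water in a 50-micrometer tube (illustrative numbers, not figures from the study) to show how strong the effect becomes at small scales:

```python
# Worked example of the capillary effect described above, using Jurin's
# law h = 2*gamma*cos(theta) / (rho*g*r). Generic values for water in a
# narrow glass tube; not parameters from the LPCS study.
import math

gamma = 0.072      # surface tension of water, N/m
theta = 0.0        # contact angle, radians (perfectly wetting surface)
rho = 1000.0       # density of water, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
r = 50e-6          # tube radius, m (50 micrometers)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"capillary rise: {h * 100:.1f} cm")   # ~29.4 cm
```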
"Using laser printing techniques we can control the size, geometry, elasticity and distance between tiny pillars -- narrower than the width of a human hair -- to get the self-assembly that we want," lead author of a study published in the Proceedings of the National Academy of Science , Swinburne's Dr Yanlei Hu, said.
Ultrafast laser printing produces an array of vertical nanorods of varying heights. After the laser process, the material is washed in a development solvent using a method similar to traditional darkroom film processing. The gravity-governed capillary force difference creates pillars of unequal physical properties along different axes.
"A possible application of these structures is in on-chip micro-object trap-release systems which are in demand in chemical analysis and biomedical devices," co-author Dr Ben Cumming said.
The researchers demonstrated the ability of the LPCS structures to selectively capture and release micro-particles.
Read more at Science Daily
'Beautiful Mind' Mathematician John Nash Killed in Car Crash
Nobel Prize-winning US mathematician John Nash, who inspired the film “A Beautiful Mind,” was killed with his wife in a New Jersey car crash.
Nash, 86, and his 82-year-old wife Alicia were riding in a taxi on Saturday when the accident took place, State Police Sergeant Gregory Williams told AFP.
“The taxi passengers were ejected,” Williams said, adding that they were both killed.
The Princeton University and Massachusetts Institute of Technology (MIT) mathematician is best known for his contribution to game theory — the study of decision-making — which won him the Nobel economics prize in 1994.
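For readers unfamiliar with the field, the concept Nash formalized can be illustrated with the textbook prisoner's dilemma: a strategy profile is a Nash equilibrium when neither player can gain by changing strategy alone. The short sketch below (generic example payoffs, not tied to any specific work of Nash's) finds that equilibrium by brute force:

```python
# Illustrative sketch: finding the pure-strategy Nash equilibrium of the
# classic prisoner's dilemma by checking best responses.
# payoffs[(row, col)] = (row player's payoff, column player's payoff)
# C = cooperate, D = defect
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither player gains by
    deviating unilaterally."""
    row_payoff, col_payoff = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= row_payoff for r in strategies)
    col_ok = all(payoffs[(row, c)][1] <= col_payoff for c in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)  # [('D', 'D')] -- mutual defection is the unique equilibrium
```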
His life story formed the basis of the Oscar-winning 2001 film “A Beautiful Mind” in which actor Russell Crowe played the genius, who struggled with mental illness.
“Stunned… my heart goes out to John & Alicia & family. An amazing partnership. Beautiful minds, beautiful hearts,” Crowe said on Twitter.
Ron Howard, who won the Oscar for best director for the film, tweeted that “it was an honor telling part of their story.”
Nash’s life took a sharp turn in early 1959 when he began suffering from “mental disturbances” that caused him to resign his faculty position at MIT.
The schizophrenic delusions would end up affecting not only his career but also his marriage to Alicia, whom he wed in 1957.
The couple divorced in the early 1960s, but remained in contact and remarried decades later in 2001. By the time Nash received his Nobel Prize, his delusions had decreased.
“I am still making the effort and it is conceivable that with the gap period of about 25 years of partially deluded thinking providing a sort of vacation, my situation may be atypical,” Nash said in an autobiographical description, written at the time of his Nobel Prize award in 1994.
“It did happen that when I had been long enough hospitalized that I would finally renounce my delusional hypothesis and revert to thinking of myself as a human of more conventional circumstances and return to mathematical research.”
Earlier this month, Nash and mathematician Louis Nirenberg received Norway’s prestigious Abel Prize for their contributions to the theory of nonlinear partial differential equations (PDEs) and its applications to geometric analysis.
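For context, a partial differential equation is nonlinear when the unknown function or its derivatives enter the equation nonlinearly. A standard textbook illustration of the class (offered here purely as an example, not as one of the equations cited by the Abel committee) is the viscous Burgers equation, whose $u\,\partial_x u$ term makes it nonlinear:

$$\partial_t u + u\,\partial_x u = \nu\,\partial_{xx} u$$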
Nash’s ‘achievements inspired generations’
“John’s remarkable achievements inspired generations of mathematicians, economists and scientists who were influenced by his brilliant, groundbreaking work in game theory,” Princeton president Christopher Eisgruber said in a statement Sunday.
Read more at Discovery News
May 24, 2015
Curiosity rover adjusts route up Martian mountain
NASA's Curiosity Mars rover climbed a hill Thursday to approach an alternative site for investigating a geological boundary, after a comparable site proved hard to reach.
The drive of about 72 feet (22 meters) up slopes as steep as 21 degrees brought Curiosity close to a target area where two distinctive types of bedrock meet. The rover science team wants to examine an outcrop that contains the contact between the pale rock unit the mission analyzed lower on Mount Sharp and a darker, bedded rock unit that the mission has not yet examined up close.
Two weeks ago, Curiosity was headed for a comparable geological contact farther south. Foiled by slippery slopes on the way there, the team rerouted the vehicle and chose a westward path. The mission's strategic planning keeps multiple route options open to deal with such situations.
"Mars can be very deceptive," said Chris Roumeliotis, Curiosity's lead rover driver at NASA's Jet Propulsion Laboratory, Pasadena, California. "We knew that polygonal sand ripples have caused Curiosity a lot of drive slip in the past, but there appeared to be terrain with rockier, more consolidated characteristics directly adjacent to these ripples. So we drove around the sand ripples onto what we expected to be firmer terrain that would give Curiosity better traction. Unfortunately, this terrain turned out to be unconsolidated material too, which definitely surprised us and Curiosity."
In three out of four drives between May 7 and May 13, Curiosity experienced wheel slippage in excess of the limit set for the drive, and it stopped mid-drive for safety. The rover's onboard software determines the amount of slippage occurring by comparing the wheel-rotation tally to actual drive distance calculated from analysis of images taken during the drive.
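In pseudocode terms, the check works roughly like this (a hedged sketch of the comparison described above, not JPL's actual flight software, and the threshold shown is hypothetical):

```python
# Sketch of the slip check: compare the distance implied by wheel
# rotations with the distance measured from drive imagery (visual
# odometry), and stop the drive if slip exceeds a set limit.
def slip_fraction(wheel_distance_m: float, visual_distance_m: float) -> float:
    """Fraction of commanded motion lost to wheel slip."""
    return (wheel_distance_m - visual_distance_m) / wheel_distance_m

SLIP_LIMIT = 0.60   # hypothetical threshold; the real limit is set per-drive

def should_stop(wheel_distance_m: float, visual_distance_m: float) -> bool:
    return slip_fraction(wheel_distance_m, visual_distance_m) > SLIP_LIMIT

# Example: wheels turned enough for 10 m, but imagery shows only 3 m gained
print(slip_fraction(10.0, 3.0))   # 0.7 -> exceeds the limit
print(should_stop(10.0, 3.0))     # True: the rover stops mid-drive
```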
The rover was heading generally southward from near the base of a feature called "Jocko Butte" toward a geological contact in the eastern part of the "Logan Pass" area.
Routes to this contact site would have required driving across steeper slopes than Curiosity has yet experienced on Mars, and the rover had already experienced some sideways slipping on one slope in this area.
"We decided to go back to Jocko Butte, and, in parallel, work with the scientists to identify alternate routes," Roumeliotis said.
The team spent a few days analyzing images from the rover and from NASA's Mars Reconnaissance Orbiter to choose the best route for short-term and long-term objectives.
Read more at Science Daily
Genies Blamed For School Sickness, Possessions
Last week in Saudi Arabia nearly 200 elementary and middle school students “refused to attend classes after nine students claimed that genies — or jinn, as they are better known in the Arabic world — had made them sick,” according to ArabNews.com, which added that “the students had fainted and experienced spasms at the start of the second semester, prompting many parents to believe jinns were present at the school.”
Jinn are described in the Koran, the Muslim holy book, as creatures made by Allah of smokeless fire. Belief in jinn is widespread throughout the Arabic and Muslim world. Just as many Christians readily accept the literal reality of angels, many Muslims accept the existence of genies as self-evident. Both religions share the belief that spirits such as demons and jinn can take possession of humans. Jinn are believed, like ghosts, to haunt buildings, homes and other locations.
Such outbreaks of mass hysteria often begin with one or two people exhibiting symptoms; as others in the same location see the behavior, they unconsciously begin experiencing the same or similar symptoms. Episodes are most common in closed social units such as schools and factories, and among females — likely because they tend to have stronger social bonds than males. The symptoms are not serious and go away on their own, often within hours or days.
This is not the first time that jinn have been blamed for unexplained fits in Saudi schools. In his book “Legends of the Fire Spirits: Jinn and Genies from Arabia to Zanzibar” Robert Lebling notes that in 2000:
“Newspapers in Saudi Arabia reported the haunting of the al-Fikriyah Institute of Education, a functioning girls’ school in the Red Sea port of Jeddah, in an incident with unusual psycho-social overtones. A number of teachers at the school were reportedly subjected to fits and epileptic-like seizures, supposedly as a result of a (jinn) haunting.”
Though many doctors attributed that incident — like the one last week — to mass hysteria, some, including a cleric tasked with investigating the matter, insisted that jinn did in fact inhabit the school and were responsible for the symptoms.
As with many folk beliefs, the reality of jinn is less important than whether people believe in them; if a group of students or teachers believe that jinn can make them faint and there is no other ready explanation, then jinn will be blamed.
Though the school fits were harmless and soon passed, sometimes these beliefs can be dangerous; earlier this month a Moroccan woman who was believed to be possessed by a jinn died during an exorcism. According to a news story:
“To cast out the evil spirit from her body, the Fqih (exorcist) loudly recited incantations and passages from the Koran, the holy book of Islam. With the help of four of his assistants, the Fqih hit her with a stick all over her body in order to ‘force the evil spirit’ to leave her.”
In 2012 a Pakistani man and his family were convicted of killing his young wife, whom he believed to be possessed by a jinn, during an exorcism.
Read more at Discovery News