How do immune cells manage to sort through vast numbers of similar-looking proteins within the body to detect foreign invaders and fight infections?
"For immune cells, singling out foreign proteins is like looking for a needle in a haystack -- where the needle may look very much like a straw, and where some straws may also look very much like a needle," notes McGill University physics professor Paul François.
Understanding how immune cells tackle this formidable challenge is important because it could provide crucial insights into immune diseases, from AIDS to autoimmune disorders.
In a study published May 21 in the journal Physical Review Letters, François and McGill graduate student Jean-Benoît Lalanne used computational tools to examine what kind of solutions immune systems may use to detect small concentrations of foreign antigens (characteristic of potentially harmful infections) in a sea of "self-antigens" normally present at the surface of cells.
The researchers' computer simulations yielded a surprisingly simple solution related to the well-known phenomenon of biochemical adaptation -- a general biochemical mechanism that enables organisms to cope with varying environmental conditions.
To find solutions, the computer uses an algorithm inspired by Darwinian evolution. This algorithm, designed previously within the François research group, randomly generates mathematical models of biochemical networks, then scores them by comparing the properties of these networks to predefined properties of the immune system. Networks with the best scores are duplicated and mutated in the next generation, and the process is iterated over many simulated "generations" until the networks reach a perfect score.
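The evolutionary loop described above can be sketched in a few lines of code. This is a toy illustration, not the François group's actual software: the "networks" here are plain parameter vectors, and the target is invented, standing in for the predefined immune-system properties.

```python
import random

# Toy sketch of the evolutionary algorithm described above -- not the
# actual research code. "Networks" are plain parameter vectors and the
# target is invented, standing in for the predefined immune properties.

TARGET = [0.1, 0.5, 0.9]

def score(network):
    # Perfect score is 1.0 when the network matches the target exactly.
    err = sum((a - b) ** 2 for a, b in zip(network, TARGET))
    return 1.0 / (1.0 + err)

def mutate(network, rate=0.1):
    # Jiggle every parameter by a small Gaussian step.
    return [g + random.gauss(0, rate) for g in network]

def evolve(pop_size=20, generations=200, seed=0):
    random.seed(seed)
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        survivors = population[: pop_size // 2]                  # best scores survive
        population = survivors + [mutate(n) for n in survivors]  # duplicate and mutate
    return max(population, key=score)

best = evolve()
print(score(best))  # approaches 1.0 as the population converges on the target
```

The essential ingredients -- random generation, scoring against predefined properties, duplication of the best, and mutation -- are the same whether the genome is three numbers or a full biochemical network model.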
In this case, almost all solutions found were very similar, sharing a common core structure or motif.
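Biochemical adaptation, the phenomenon the evolved motif relates to, can be illustrated with a minimal model. The sketch below uses a standard adaptive circuit (an incoherent feedforward-type loop) with invented parameters, not the specific network from the paper: the output spikes when the input steps up, then returns to its original baseline.

```python
# Minimal sketch of biochemical adaptation (illustrative parameters,
# not the paper's model): input I drives output O directly but also
# drives a repressor R that divides O back down, so O spikes after a
# step in I and then returns to its baseline.

def simulate(I=2.0, steps=20000, dt=0.001):
    R, O = 1.0, 1.0            # steady state for the pre-step input I = 1
    peak = O
    for _ in range(steps):
        dR = I - R             # repressor tracks the input
        dO = I / R - O         # output: activated by I, divided down by R
        R += dR * dt
        O += dO * dt
        peak = max(peak, O)
    return peak, O

peak, final = simulate()       # step the input from 1 to 2
print(round(final, 2))         # 1.0 -- the output adapts back to baseline
```

Because the steady state of O is I / R = I / I = 1 regardless of the input level, the circuit responds to *changes* in its environment rather than to absolute levels -- the hallmark of adaptation.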
"Our approach provides a simpler theoretical framework and understanding of what happens" as immune cells sort through the "haystack" to detect foreign antigens and trigger the immune response, François says. "Our model shares many similarities with real immune networks. Strikingly, the simplest evolved solution we found has both similar characteristics and some of the blind spots of real immune cells we studied in a previous collaborative study with the groups of Grégoire Altan-Bonnet (Memorial Sloane Kettering, New York), Eric Siggia (Rockefeller University, New York) and Massimo Vergassola (Pasteur Institute, Paris)."
Read more at Science Daily
Jun 8, 2013
Map Shows Antarctica Without Ice and Snow
What does Antarctica look like in its underwear? A team at the British Antarctic Survey working with NASA pulled together decades of data to show us a virtual map without all the ice and snow. For the first time, the continent’s bare topography is revealed.
The Bedmap2 is a new virtual map created from substantial amounts of data that included recent measurements from airborne missions as well as satellites. The project, led by British Antarctic Survey scientist Peter Fretwell, relied on NASA’s Operation IceBridge, which has recorded Antarctica’s surface elevations, ice shelf limits and ice thickness. The new map led to some unexpected discoveries about the southernmost continent.
Not only is the volume of ice in Antarctica 4.6 percent greater than previously thought, but the deepest point turns out to be under Byrd Glacier — about 1,300 feet deeper than the spot that had been called the deepest, according to research Fretwell and his colleagues recently published in the scientific journal The Cryosphere.
The Bedmap2 could also help humanity in the future. Study co-author Hamish Pritchard pointed out that understanding the actual height and thickness of the ice as well as the landscape underneath will be fundamental to modelling the ice sheet. "Knowing how much the sea will rise is of global importance, and these maps are a step towards that goal," he told the British Antarctic Survey.
Over at NASA, interactive images show how the continent currently appears, and using a slider you can see the Bedmap2 topography below. There’s also a feature comparing the original Bedmap from 10 years ago with the newest one. Visualizing what’s below the frozen landscape is impressive, as long as it doesn’t end up being a snapshot of our planet’s shirtless future.
From Discovery News
Jun 7, 2013
Apes and Human Babies Use Similar Gestures
Ape and human infants at comparable stages of development use similar gestures, such as pointing or lifting their arms to be picked up, new research suggests.
Chimpanzee, bonobo and human babies rely mainly on gestures at about a year old, and gradually develop symbolic language (words, for human babies; and signs, for apes) as they get older.
The findings suggest that “gesture plays an important role in the evolution of language, because it preceded language use across the species," said study co-author Kristen Gillespie-Lynch, a developmental psychologist at the College of Staten Island in New York.
The gesturing behavior was described June 6 in the journal Frontiers in Psychology.
Language precursor
The idea that language arose from gesture and a primitive sign language has a long history. French philosopher Étienne Bonnot de Condillac proposed the idea in 1746, and other scientists have noted that walking on two legs, which frees up the hands for gesturing, occurred earlier in human evolution than changes to the vocal tract that enabled speaking.
But although apes in captivity can learn some language from humans, in the wild they don't gesture nearly as much as human infants, making it difficult to tease out commonalities in language development that have biological versus environmental roots.
To do so, Gillespie-Lynch and her colleagues compared detailed video of an American baby girl in everyday life with two apes of the same age that were trained to communicate. Panpanzee, a chimpanzee, and Panbanisha, a bonobo, were living at the Language Research Center in Atlanta, where they received interactive training in sign language, gesturing and vocalizations; they also went through a daily testing session.
The researchers compared the young apes’ behavior from about 12 to 26 months of age to that of the human baby from 11 months to almost 2 years old.
Common language
Both the apes and the human baby started out gesturing more than using words, and they used similar gestures, such as pointing at or reaching for things they wanted, or lifting their arms when they wanted to be picked up.
“The 'up' gesture looks just like if you find a human child asking to pick them up," Gillespie-Lynch told LiveScience.
The baby girl used more gestures overall and developed gestures — such as waving bye-bye, shaking the head and nodding — that the apes did not demonstrate.
The girl tended to use more gestures for showing things to caretakers, whereas the apes relied more on reaching gestures. Together, the findings suggest the human child was more focused on sharing her experience with others, whereas the apes were using gestures more instrumentally to get what they wanted.
As they grew older, the species' trajectories diverged. All the infants gradually shifted to using more symbolic words, but the child's shift was much more dramatic than the apes'. And from the start, the little girl vocalized more than the apes did.
Read more at Discovery News
New Lizard Species Named After Jim Morrison
Jim Morrison of The Doors, who famously slithered around in tight pants on stage, was known as “The Lizard King.” Now scientists have named a newly discovered prehistoric enormous lizard after the late great rock star.
The lizard, Barbaturex morrisoni, is described in the latest Proceedings of the Royal Society B. It weighed 60 pounds and grew to six feet in length. About 40 million years ago, it was the “king” of land-dwelling lizards because of its power and imposing size, project leader Jason Head of the University of Nebraska-Lincoln and his colleagues believe.
As for the lizard’s name, Head explained via a press release, “I was listening to The Doors quite a bit during the research. Some of their musical imagery includes reptiles and ancient places, and Jim Morrison was of course ‘The Lizard King,’ so it all kind of came together.”
The lizard was a plant-eater, like present-day iguanas. It lived in the jungles of Southeast Asia.
When the lizard was alive, the climate in its environment was up to 9 degrees Fahrenheit warmer than it is today. A warmer and moister environment would have encouraged the growth and evolution of subtropical vegetation, which would have provided resources allowing for larger reptiles and mammals.
What goes around comes around, unfortunately, so it was probably climate change and cooler temperatures that altered the food supply and led to the eventual extinction of Barbaturex morrisoni.
Barbaturex combines the Latin words for “bearded” (barbatus) and “king” (rex). The “beard” refers to beard-like ridges along the underside of the reptile’s lower jaw.
When Head first examined the fossils for the lizard, he noticed its bones were characteristic of a group of modern lizards that includes bearded dragons, chameleons and plant-eaters such as spiny-tailed lizards.
Read more at Discovery News
Mystery Feature Surrounds Star: It's a (Dust) Trap!
Dust. It’s insignificant to us, or at most, a nuisance. Dust is that little bit of nothing in our daily lives. Dust collects on things that are old or forgotten. Dust has a bad name on Earth, while elsewhere in the Universe, it turns out to be a key ingredient of the life cycles of stars and planets.
The proliferation of exoplanet discoveries over the last two decades has also been accompanied by a greater understanding of how planets form around stars. Truly, planets come from star dust, the leftover material from the gravitational collapse of a gas cloud into a hot, nuclear-powered furnace. The actual details of how this occurs, however, are still literally shrouded in mystery.
One often needs a radio or infrared telescope to peer through the dark, dusty regions around forming stars and planetary systems. Sometimes, these instruments can be used to view the dust itself, which holds valuable information about the physics involved.
Above is a new image from that famous new telescope in the Chilean desert, the Atacama Large Millimeter/Submillimeter Array, or ALMA. While still in its commissioning phase, it was used to resolve a “cashew-shaped” feature of dust around the star Oph IRS 48, 390 light-years away from Earth.
The lead author of the paper is Nienke van der Marel, a Ph.D. student at Leiden Observatory in the Netherlands. She expressed initial skepticism at the strange feature, but the sharpness and sensitivity of ALMA, even without its full complement of antennas, made it clear that the “dust trap” was real.
Such a trap, or vortex or bump, of material has been theorized as a way of solving the problem of how tiny dust grains clump together to form larger dust grains, and eventually larger and larger structures on the way to planets. Somehow, over just a few million years, these need to go from dust to pebbles to boulders to worlds without self-destructing first. The gravitational influence of a large gas giant in the system could create such a dust trap, providing a safe space for the clumps to form. Previous observations of the forming planetary system show a gap in the disk of gas and dust, indicative of a large planet that has already begun to clear out its orbit.
The cartoon model of Oph IRS 48 gives a clear picture of the current working model. The blue of the disk represents the gas seen in observations of the carbon monoxide molecule. The brown indicates everything from small dust grains to larger pebbles, shown to be larger within the dust trap. These are “herded” together in a sense by the young, still-forming planet believed to be in the disk’s gap.
Read more at Discovery News
Our Sun Lives in a Glitzy Galactic Boulevard
Throughout human history philosophers, theologians, and scientists assumed Earth was the center of the universe. Ptolemy’s geocentric model of the universe was used to prepare astrological charts for over 1,500 years. (No wonder the astrologers couldn’t make any successful predictions!)
The Copernican model of a sun-centered universe took hold only 400 years ago. And, less than 100 years ago, many astronomers thought that the sun was in the center of our Milky Way galaxy.
Fast-forward to today, and astronomy textbooks all show our sun and solar system residing in a ho-hum back-alley neighborhood of the galaxy. We live about halfway between the galactic center and the galaxy’s edge, in a region called the Orion Spur that is nestled between two major spiral arms.
But this week a cutesy press release from the National Radio Astronomy Observatory in Charlottesville, Va., announced that “our Solar System’s Milky Way neighborhood just went upscale.”
A high-resolution radio probe of our galaxy shows that the sun presently lives in the middle of a much larger structure the radio astronomers have dubbed the Local Arm. We’re still nestled between the inner Sagittarius Arm and beefy outer Perseus Arm, but our stellar byway has more muscle now.
This has been a tricky bit of interstellar cartography because we live inside the pancake-shaped galaxy. Dust clouds and star clouds block the view of most of the galaxy in visible light. But radio and infrared light can penetrate the dusty smog and see clear across the galaxy.
In 2008, NASA’s Spitzer Space Telescope observations led to a remapping of the Milky Way that showed two major arms, Scutum-Centaurus and Perseus, attached to the ends of a thick central bar. Two other spiral arms, Norma and Sagittarius, were demoted to minor arms because they are less distinct. The major arms consist of the highest densities of both young and old stars. The minor arms are primarily filled with gas and pockets of star-forming activity.
From 2008 to 2012, radio astronomers mapped our galactic neighborhood using the precision Very Long Baseline Array, ten radio telescopes spanning over 5,000 miles. The array yields images as sharp as those that would be provided by a single continent-sized dish antenna.
With this capability astronomers were able to use straightforward trigonometric parallax (using Earth’s orbit as the baseline) to measure distances to nearby star-forming regions. The radio telescopes didn’t look directly at stars but picked up emissions from water and methanol molecules, which act as natural amplifiers of microwave radiation.
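The parallax geometry itself reduces to a one-line formula: a source that appears to shift by an angle of p arcseconds as Earth moves across its orbit lies at a distance of 1/p parsecs. A brief sketch with illustrative numbers (the parallax value below is made up, not a measurement from the survey):

```python
# Trigonometric parallax: with Earth's orbit (1 AU baseline), a source
# shifting by p arcseconds lies at d = 1 / p parsecs. The parallax value
# below is invented for illustration, not a survey measurement.

LY_PER_PARSEC = 3.2616               # light-years per parsec

def parallax_to_distance(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

p = 0.5e-3                           # 0.5 milliarcseconds
d_pc = parallax_to_distance(p)
print(d_pc, d_pc * LY_PER_PARSEC)    # 2000.0 parsecs, about 6523 light-years
```

The angles involved are tiny -- fractions of a milliarcsecond at galactic distances -- which is why a continent-spanning radio array is needed to measure them.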
The sun is just passing through the Local Arm along a 250 million year orbit about the galactic center. Still, we need a better name than Local Arm. That’s as mundane as the Local Group — the name for the backwater neighborhood of galaxies that we inhabit.
It’s a little too early and presumptuous to call it the Federation Arm, as Trekkies might like. The easiest name with some gravitas would be the Orion Arm, which is already being used. Because it looks like our stellar arm does a dog-leg off of the Perseus Arm, how about the name Canis-Cruris Arm? (Latin for dog-leg.) The constellation Canis Major, the Great Dog, is alongside Orion in the winter sky.
Read more at Discovery News
Jun 6, 2013
New North America Viking Voyage Discovered
Some 1,000 years ago, the Vikings set off on a voyage to Notre Dame Bay in modern-day Newfoundland, Canada, new evidence suggests.
The journey would have taken the Vikings, also called the Norse, from L'Anse aux Meadows on the northern tip of the same island to a densely populated part of Newfoundland and may have led to the first contact between Europeans and the indigenous people of the New World.
"This area of Notre Dame Bay was as good a candidate as any for that first contact between the Old World and the New World, and that's kind of an exciting thing," said Kevin Smith, deputy director and chief curator of the Haffenreffer Museum of Anthropology at Brown University.
Evidence of the voyage was discovered by a combination of archaeological excavation and chemical analysis of two jasper artifacts that the Norse used to light fires. The analysis, presented at the annual meeting of the Society for American Archaeology in Honolulu, suggests the jasper used in the artifacts came from the area of Notre Dame Bay.
The jasper artifacts were found at L'Anse aux Meadows, and the Norse explorers likely set out from that outpost. They would've headed due south, traveling some 143 miles (230 kilometers) to Notre Dame Bay. When they reached their destination, the Norse would have set foot in an area of Newfoundland that modern-day researchers know was well inhabited.
"This area of Notre Dame Bay [was] archaeologically the area of densest settlement on Newfoundland, at that time, of indigenous people, the ancestors of the Beothuk," a people who, at the time, lived as hunter-gatherers, Smith told LiveScience.
Aside from likely encountering the ancestral Beothuk, the Norse would probably have been impressed by the landscape itself. The coastline had fjords, inlets and offshore islands, with lots of forests. Birds, sea mammals and fish also would have been plentiful.
"For anyone coming from the nearly treeless islands of the North Atlantic, this would have potentially been a very interesting zone," Smith said. "There are a lot of trees; there's a lot of opportunities for cutting things down; it's a bit warmer; it's an interesting mix of resources."
For any Norse voyagers who had been to Norway, it would have been familiar. It still would have made an impression though, since the lands the Norse had occupied in their journey across the North Atlantic tended to be more barren.
Researchers don't know the specifics about the contact between the Norse and the ancestral Beothuk on this voyage, presuming it actually happened. It could have been a peaceful encounter, although the Norse sagas also tell of hostile meetings with people in the New World. Also, while the possible meeting likely would have been one of the earliest Old World-New World encounters, researchers don't know if it was the very first.
Norse matches
The two jasper artifacts were key pieces of evidence that helped the researchers unravel the existence of the voyage.
The larger, and more recently excavated of the two, was found in 2008, only 33 feet (10 meters) away from an ancient Norse hall. The discovery was made by Priscilla Renouf, a professor at Memorial University in Newfoundland, and Todd Kristensen, who is now a graduate student at the University of Alberta.
"You can think of these almost as the matches of the Vikings," Smith said. The Norse would have struck them against a steel fire starter to make sparks to start a fire, he explained. As time passed, and after being struck against steel repeatedly, the jasper fire starters wore down and were thrown out.
The chemical composition of jasper varies depending on where it was obtained. To figure out where the larger jasper fire starter came from, Smith, Thomas Urban of Oxford University, and Susan Herringer of Brown University's Joukowsky Institute for Archaeology and the Ancient World looked for outcrops in the New (or Old) World that chemically matched it. They compared the fire starter with geological samples using a handheld X-ray fluorescence device that can detect the chemical signature of jasper.
The results suggested the jasper originated from the area of Notre Dame Bay, somewhere along a 44-mile-long (71 km) stretch of the coast. The closest chemical match was to a geological sample from modern-day Fortune Harbor.
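The matching step amounts to a nearest-neighbor search over composition vectors: each sample is a list of element concentrations, and the best candidate source is the one closest to the artifact. A rough sketch, with element names and concentrations invented for illustration (only the outcrop name echoes the article):

```python
import math

# Provenance matching as nearest-neighbor search: each sample is a vector
# of element concentrations, and the best candidate source minimizes the
# distance to the artifact. Element names and concentrations are invented.

def distance(a, b):
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

artifact = {"Fe": 4.1, "Ti": 0.32, "Mn": 0.07}

outcrops = {
    "Fortune Harbor": {"Fe": 4.0, "Ti": 0.30, "Mn": 0.08},
    "Outcrop B":      {"Fe": 2.5, "Ti": 0.50, "Mn": 0.20},
    "Outcrop C":      {"Fe": 6.0, "Ti": 0.10, "Mn": 0.01},
}

closest = min(outcrops, key=lambda name: distance(artifact, outcrops[name]))
print(closest)  # Fortune Harbor -- the nearest chemical match
```

In real provenance work the comparison involves many trace elements and careful calibration of the instrument, but the core idea is this simple: the source whose signature sits nearest the artifact's wins.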
The second, smaller jasper piece was unearthed in the 1960s in excavations carried out by Helge and Anne Stine Ingstad, who discovered L'Anse aux Meadows. Different tests run on this piece suggested in 1999 that it also came from the Notre Dame Bay area. At the time Smith couldn’t prove it was used as a fire starter, but he now believes it likely was.
Exploring the New World
Ever since the discovery of L'Anse aux Meadows nearly 50 years ago, archaeologists and historians have been trying to uncover the story of Norse exploration in the New World.
Previous research has revealed the presence of butternut seeds at L'Anse aux Meadows, indicating the Norse made a trip to the Gulf of St. Lawrence or possibly even a bit beyond. Additionally, Norse artifacts (and possibly a structure) have been discovered in the Canadian Arctic, indicating a trading relationship with the indigenous people there that might have lasted for centuries.
Read more at Discovery News
The journey would have taken the Vikings, also called the Norse, from L'Anse aux Meadows on the northern tip of the same island to a densely populated part of Newfoundland and may have led to the first contact between Europeans and the indigenous people of the New World.
"This area of Notre Dame Bay was as good a candidate as any for that first contact between the Old World and the New World, and that's kind of an exciting thing," said Kevin Smith, deputy director and chief curator of the Haffenreffer Museum of Anthropology at Brown University.
Evidence of the voyage was discovered by a combination of archaeological excavation and chemical analysis of two jasper artifacts that the Norse used to light fires. The analysis, presented at the annual meeting of the Society for American Archaeology in Honolulu, suggests the jasper used in the artifacts came from the area of Notre Dame Bay.
The jasper artifacts were found at L'Anse aux Meadows, and the Norse explorers likely set out from that outpost. They would've headed due south, traveling some 143 miles (230 kilometers) to Notre Dame Bay. When they reached their destination, the Norse would have set foot in an area of Newfoundland that modern-day researchers know was well inhabited.
"This area of Notre Dame Bay [was] archaeologically the area of densest settlement on Newfoundland, at that time, of indigenous people, the ancestors of the Beothuk," a people who lived as hunter-gatherers, Smith told LiveScience.
Aside from likely encountering the ancestral Beothuk, the Norse would probably have been impressed by the landscape itself. The coastline had fjords, inlets and offshore islands, with lots of forests. Birds, sea mammals and fish also would have been plentiful.
"For anyone coming from the nearly treeless islands of the North Atlantic, this would have potentially been a very interesting zone," Smith said. "There are a lot of trees; there's a lot of opportunities for cutting things down; it's a bit warmer; it's an interesting mix of resources."
For any Norse voyagers who had been to Norway, it would have been familiar. It still would have made an impression though, since the lands the Norse had occupied in their journey across the North Atlantic tended to be more barren.
Researchers don't know the specifics about the contact between the Norse and the ancestral Beothuk on this voyage, presuming it actually happened. It could have been a peaceful encounter, although the Norse sagas also tell of hostile meetings with people in the New World. Also, while the possible meeting likely would have been one of the earliest Old World-New World encounters, researchers don't know if it was the very first.
Norse matches
The two jasper artifacts were key pieces of evidence that helped the researchers unravel the existence of the voyage.
The larger, and more recently excavated of the two, was found in 2008, only 33 feet (10 meters) away from an ancient Norse hall. The discovery was made by Priscilla Renouf, a professor at Memorial University in Newfoundland, and Todd Kristensen, who is now a graduate student at the University of Alberta.
"You can think of these almost as the matches of the Vikings," Smith said. The Norse would have struck them against a steel fire starter to make sparks to start a fire, he explained. As time passed, and after being struck against steel repeatedly, the jasper fire starters wore down and were thrown out.
The chemical composition of jasper varies depending on where it was obtained. To figure out where the larger jasper fire starter came from, Smith, Thomas Urban of Oxford University, and Susan Herringer of Brown University's Joukowsky Institute for Archaeology and the Ancient World looked for the outcrops in the New (or Old) World that chemically matched it. They compared the fire starter with geological samples using a handheld X-ray fluorescence device that can detect the chemical signature of jasper.
The results suggested the jasper originated from the area of Notre Dame Bay, somewhere along a 44-mile-long (71 km) stretch of the coast. The closest chemical match was to a geological sample from modern-day Fortune Harbor.
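The matching step can be illustrated with a toy sketch (this is not the study's actual statistics, and all concentrations below are hypothetical placeholders): pick the outcrop whose trace-element signature lies closest to the artifact's, here using a simple Euclidean distance.

```python
import math

def distance(sig_a: dict, sig_b: dict) -> float:
    """Euclidean distance between two element-concentration signatures."""
    return math.sqrt(sum((sig_a[e] - sig_b[e]) ** 2 for e in sig_a))

# Hypothetical wt-% concentrations for a few marker elements.
artifact = {"Fe": 4.1, "Mn": 0.20, "Ti": 0.08}

outcrops = {
    "Fortune Harbor": {"Fe": 4.0, "Mn": 0.21, "Ti": 0.07},
    "Outcrop A":      {"Fe": 2.5, "Mn": 0.05, "Ti": 0.30},
    "Outcrop B":      {"Fe": 6.8, "Mn": 0.40, "Ti": 0.02},
}

# The closest chemical match wins.
best = min(outcrops, key=lambda name: distance(artifact, outcrops[name]))
print(f"closest chemical match: {best}")
```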
The second, smaller jasper piece was unearthed in the 1960s in excavations carried out by Helge and Anne Stine Ingstad, who discovered L'Anse aux Meadows. Separate tests run on this piece in 1999 suggested that it, too, came from the Notre Dame Bay area. At the time, Smith couldn't prove it was used as a fire starter, but he now believes it likely was.
Exploring the New World
Ever since the discovery of L'Anse aux Meadows nearly 50 years ago, archaeologists and historians have been trying to uncover the story of Norse exploration in the New World.
Previous research has revealed the presence of butternut seeds at L'Anse aux Meadows, indicating the Norse made a trip to the Gulf of St. Lawrence or possibly even a bit beyond. Additionally, Norse artifacts (and possibly a structure) have been discovered in the Canadian Arctic, indicating a trading relationship with the indigenous people there that might have lasted for centuries.
Read more at Discovery News
Labels:
Archeology,
Geology,
History,
Human,
Science
Boom! Super Seismo-Sonic Earthquakes Are Real
The inner workings of bizarre and potentially dangerous earthquakes that break the seismic sound barrier have now for the first time been confirmed in laboratory experiments with real rocks, report scientists in today’s issue of the journal Science.
What are called supershear earthquakes are strange events in which the rupturing fault breaks faster than certain seismic waves can travel, creating a sort of seismic mach cone that fires out the end of a fault’s rupture zone -- the part of the fault that breaks loose allowing two rock surfaces to jerk past each other. That cone and the waves that follow can cause inordinately severe shaking, out of proportion to the earthquake's magnitude.
“It’s like the (seismic) waves are propagating along and all of a sudden it steps on the accelerator,” explained Eric Dunham, an assistant professor and seismological researcher at Stanford University who has done modeling work on supershear waves.
The waves behind the weird phenomenon are called shear waves, which normally are relatively slow seismic waves that move over the surface in a manner similar to ocean waves. These are the waves that are felt as rolling and shaking motions after the initial shock of normal earthquakes. The initial shock is another, much faster, kind of wave that behaves more like pressure waves that make sound in the air.
In a supershear earthquake, however, the shear waves are created very quickly when a long fault, like the San Andreas, breaks loose faster than the speed shear waves normally travel. When this happens, a shear-wave mach cone is created that can reach the same speed as the pressure waves, explained the paper's lead author, François Passelègue of the Geology Laboratory at École Normale Supérieure in Paris, France.
“The main additional hazard due to supershear earthquakes is that there are two big wave arrivals,” Passelègue told Discovery News. To someone riding out such a quake, the first thing to arrive would be the sharp pressure wave, but instead of the rolling shear waves following it, the powerful supershear mach cone would arrive and shake the ground in a direction parallel to the fault zone that created it. Then, soon after, a second shear wave would hit with ground motions at right angles to the fault zone. “This sudden change in the direction of the dominant ground motions is dramatic for buildings.”
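The geometry behind the mach cone can be sketched numerically. Using representative crustal wave speeds (assumed values for illustration, not figures from the paper), the cone's half-angle follows from the ratio of shear-wave speed to rupture speed, just as a sonic boom's angle follows from the speed of sound over the aircraft's speed:

```python
import math

# Representative crustal seismic speeds (assumed, km/s):
VP = 6.0   # pressure (P) waves -- the fast initial shock
VS = 3.5   # shear (S) waves -- the slower rolling waves

def mach_angle_deg(rupture_speed: float, wave_speed: float = VS) -> float:
    """Half-angle of the shear-wave mach cone for a supershear rupture.

    A cone only forms when the rupture outruns the shear waves.
    """
    if rupture_speed <= wave_speed:
        raise ValueError("rupture is sub-shear: no mach cone forms")
    return math.degrees(math.asin(wave_speed / rupture_speed))

# A rupture racing along at the P-wave speed:
print(f"mach cone half-angle: {mach_angle_deg(VP):.1f} degrees")
```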
Read more at Discovery News
Ireland's Ancient Link to Volcanism Found
Sláinte (a Gaelic toast) to the Irish monks! They not only preserved the knowledge of ancient Greece in western Europe after the fall of the Roman Empire, they also logged a 1,200-year climate record of the Emerald Isle, from 431 to 1649 CE, in the Irish Annals. The writings of those meteorology-minded monks were recently used to correlate volcanic activity with intense cold snaps in Ireland.
The monks wrote the Irish Annals as a record of religious feast days and major events, but the clerics also noted extreme cold weather events, such as heavy snow or prolonged ice cover on lakes.
For example, in the Annals of Connacht from 1465 CE: “Exceeding great frost and snow and stormy weather this year, so that no herb grew in the ground and no leaf budded on a tree until the feast of St. Brendan [May 16].”
The monks kept up their observations through the Black Plague and Viking raids, but stopped after English invaders suppressed the traditional culture of Ireland during the Tudor conquest in the 1600s.
“It’s clear that the scribes of the Irish Annals were diligent reporters of severe cold weather, most probably because of the negative impacts this had on society and the biosphere,” Francis Ludlow of Harvard University said in a press release.
Ludlow was lead author of a study that paired Irish weather observations with volcanic eruptions. The scientists dated historic volcanic activity using info from the Greenland Ice Sheet Project about volcanic residues trapped in Greenland’s glaciers. The study was published in Environmental Research Letters.
For example, Ludlow’s team found that the eruption of the Peruvian volcano Huaynaputina in 1600 was associated with a few years of hard winters in Ireland. Chinese records also showed a cold winter after that eruption, according to a study published in the International Journal of Climatology.
Read more at Discovery News
Oldest Human Tumor Found in Neanderthal Bone
The oldest human tumor ever found — by more than 100,000 years — has been discovered in the rib of a Neanderthal.
The bone, excavated more than 100 years ago in Croatia, has been hollowed out by a tumor still seen in humans today, known as fibrous dysplasia. These tumors are not cancerous (they don't spread to other tissues), but they replace the weblike inner structure of a bone with a soft, fibrous mass.
"They range all the way from being totally benign, where you wouldn’t recognize them, to being extremely painful," said David Frayer, an anthropologist at the University of Kansas who reported the finding along with his colleagues today (June 5) in the journal PLOS ONE. "The size of this one, and the bulging of it, probably caused the individual pain."
Unusual bone
The Neanderthal rib fragment measures just more than an inch long (30 millimeters). It was first unearthed between 1899 and 1905 in a cave known as the Krapina rock shelter in Croatia. This site held more than 900 Neanderthal bones dating back 120,000 to 130,000 years ago. Many of the bones display signs of trauma, and quite a few show post-mortem cutting marks, perhaps indicating cannibalism or some sort of ritual reburial.
Neanderthals (Homo neanderthalensis) were a human species closely related to modern humans (Homo sapiens). They died out approximately 30,000 years ago, though not without apparently interbreeding with Homo sapiens: Many modern-day humans carry Neanderthal DNA, suggesting the two species had sex.
In the 1980s, University of Pennsylvania researchers X-rayed the entire collection of bones found at Krapina, and they published a book in 1999 showing each radiograph. Most of those X-rays were quite high-quality, said Janet Monge, the keeper of physical anthropology at the University of Pennsylvania Museum, who participated in that project and the current study.
But there was one exception: One little rib fragment appeared "burned out" in the X-ray image, an overexposure that turned out to be due to the loss of inner bone in the specimen.
Now, the study researchers have returned to the rib, subjecting it to higher-quality X-rays and to microCT (computed tomography) scanning, which is similar to -- but higher-resolution than -- the types of scans doctors use to detect bone trauma in living patients.
Ancient tumor
The new images reveal a hollow shell, with an empty cavity where a network of inner "spongy bone" should be. (This spongy bone is so named because it's full of holes where blood vessels sneak through.)
"We do see it in human patients today," Monge told LiveScience. "It's exactly the same kind of process and in the same place."
Fibrous dysplasia is caused by a spontaneous genetic mutation in the cells that produce bone, according to the Mayo Clinic. In some cases, the tumors are small and asymptomatic. In other cases, they cause pain and weakness. Because the researchers have only an isolated rib from this particular Neanderthal, they can't say whether his or her other bones would have been affected.
Previously, the oldest known tumors came from Egyptian mummies and dated back only 4,000 years or so. (A 1,600-year-old tumor containing teeth was found in the pelvis of an ancient Roman corpse.) That makes the Neanderthal tumor, at about 120,000 years old, the most ancient "by a lot!" Monge said.
In many ways, the Neanderthal tumor is a needle-in-a-haystack find, Frayer said.
"People of that time didn't live as long as they did today; plus, there weren't very many of them compared to the Egyptians and people today," he told LiveScience. "So finding evidence of tumors and evidence of cancers, is -- I don't know if I want to say ‘lucky’ -- but there isn't a lot of evidence for it."
Read more at Discovery News
Jun 5, 2013
Early Human Ancestor Primate Tiny, Scrappy
The oldest known fossil primate skeleton, dating to 55 million years ago, reveals that one of our earliest ancestors was a scrappy tree dweller with an unusual combination of features.
The discovery, made in central China's Hubei Province and reported in the journal Nature, strengthens the theory that Asia was the center of primate evolution. The new species, Archicebus achilles, also suggests that our earliest ancestors were very small.
"Archicebus was a tiny primate weighing less than 1 ounce," co-author Daniel Gebo of Northern Illinois University told Discovery News. "It would easily fit in the palm of your hand. Its eye orbits were not large, suggesting it was active during the daytime."
He added, "Archicebus likely bounced and climbed around the canopy, being entirely arboreal, looking for food items out on the terminal branches of trees. It had incredibly long legs and was an adept leaper. Think of little lemurs moving through the branches of trees within a rainforest setting."
Analysis, including state-of-the-art Synchrotron CT scanning, determined that the skeleton of Archicebus is about 7 million years older than the oldest fossil primate skeletons known previously, which include Darwinius from Germany and Notharctus from Wyoming.
The tiny primate lived close to the evolutionary divergence between the lineage leading to modern monkeys, apes and humans (collectively known as anthropoids) and the lineage leading to living tarsiers.
Gebo thinks the split might have happened as "each lineage tried to make themselves anatomically and ecologically different to avoid direct competition with each other, since this leads to extinction."
Given its root placement on the primate family tree, Archicebus had a mish-mash of characteristics.
"Archicebus is a quite odd creature," lead author Xijun Ni of the Institute of Vertebrate Paleontology and Paleoanthropology at the Chinese Academy of Sciences, told Discovery News. "It has many features that support its tarsiform (like tarsiers) affinity, but also has many features typically seen in anthropoids."
It had the feet of a small monkey, but the arms, legs, skull and teeth of a very primitive primate. The researchers were surprised that it had such small eyes. Modern tarsiers have some of the largest eyes, relative to body size, in the animal kingdom. They allow the tiny primates to see well at night.
Although Archicebus hailed from Asia, the earliest known humans came from Africa.
"This suggests that a primitive anthropoid colonized Africa from Asia, and from these early African anthropoids all later catarrhines (monkeys, apes and humans) evolved," Gebo said.
As for the small size of Archicebus, other mammal lineages often started small and evolved to be bigger over time, a phenomenon known as "Cope's Rule." No one is entirely sure why this happened among mammals, but climate, food sources and other environmental factors presumably limited how large early members of a lineage could grow.
Read more at Discovery News
The Battle for Earth's Early Oceans
Stromatolites ruled the fossil record for 2 billion years. The squishy, sticky mounds of communal-living microbes dominated shallow-water environments everywhere on Earth during life's early days. Then, long before algae-munching animals appeared 550 million years ago, stromatolites mysteriously plummeted in number.
Now scientists think they've found a possible culprit: another microbe called foraminifera. A billion years ago, these two single-celled species battled for supremacy in the world's oceans, and stromatolites lost, according to a study published May 27 in the journal Proceedings of the National Academy of Sciences.
"We'll never be able to prove what happened back in the Proterozoic, but we've at least shown there's a potential explanation," said Joan Bernhard, lead study author and a scientist at Woods Hole Oceanographic Institution in Woods Hole, Mass.
Early ocean throw-down
Stromatolite mounds grow tall when waves cover the top layer of algae with mud or sand and a new algae layer covers the sunlight-choking sediment. The trapped sediments turn into distinctive rippled fossils. But the wavy layers disappear beginning about a billion years ago, replaced by thrombolites — clumpy, jumbled microbial mats.
Researchers suspect the decline is due to a change in ocean chemistry or the sudden appearance of a creature that found the stromatolites especially tasty — though there's no fossil evidence for this.
Bernhard said DNA evidence triggered her suspicion that forams (short for foraminifera) were guilty of turning stromatolites into thrombolites. Foraminifera are tiny organisms, usually the size of a sand grain, that grow hard shells. Their shells don't show up in the fossil record until just before the Cambrian period, about 550 million years ago. However, a DNA-based dating technique called a molecular clock suggests the first forams were shell-free and evolved 400 million years earlier. (Without their shells, evidence of these early forams was less likely to survive in the fossil record.)
The rise of the forams thus neatly coincides with the demise of stromatolites, but a real-world test was needed to back up this idea. Bernhard and her colleagues collected modern stromatolites from the Bahamas, one of the few remaining spots where the microbial mounds survive today, and threw them in the ring with forams to see who came out the victor.
In this corner we have…
In a lab, the researchers seeded the stromatolites with forams from the same Bahamas bay. Ever so slowly, the foraminifera fanned out their hairlike pseudopods into the algae layers. Pseudopods help forams eat, move and explore their environment. After six months, the effect was devastating for the stromatolites. Their layers were scrambled. But during a control experiment, in which forams were treated with a chemical that kept them from using their pseudopods, the stromatolites were still pristinely layered at the end of the test. The team also found forams living in thrombolites from the Bahamas, supporting their hypothesis that forams turn stromatolites into thrombolites.
Read more at Discovery News
Kepler Stars (And Planets) Are Bigger Than Thought
A team checking up on results from NASA's planet-hunting Kepler space telescope finds that most of Kepler's target stars -- and therefore any orbiting planets -- are bigger than expected. The discovery makes the search for small, Earth-like worlds more difficult.
The team used a relatively large ground-based telescope at Kitt Peak National Observatory to probe 268 stars of the nearly 3,000 target stars Kepler was watching.
Kepler, which was launched in 2009, lost the use of its pointing system last month and is currently not operating. Troubleshooting efforts remain under way.
The telescope works by measuring slight changes in the amount of light coming from selected sun-like stars. The idea is that some planets passing by will temporarily blot out a smidgen of light, relative to the telescope's line of sight. The percentage of light blocked relates directly to the size of a transiting planet or planets.
For example, a telescope positioned to peer into our solar system would see about 1 percent of the sun's light dimmed during Jupiter's transits.
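The relation described above is geometric: the fraction of light blocked is the ratio of the planet's disk area to the star's, so inverting it means a revised stellar radius rescales the inferred planet radius by the same factor. A minimal sketch, using approximate radii for the sun and Jupiter:

```python
import math

def transit_depth(r_planet: float, r_star: float) -> float:
    """Fraction of starlight blocked during a transit: (R_planet / R_star)**2."""
    return (r_planet / r_star) ** 2

def planet_radius(depth: float, r_star: float) -> float:
    """Invert the depth to recover the planet's radius; a bigger star
    implies a proportionally bigger planet for the same measured depth."""
    return r_star * math.sqrt(depth)

R_SUN = 695_700.0     # km, approximate
R_JUPITER = 69_911.0  # km, approximate

depth = transit_depth(R_JUPITER, R_SUN)
print(f"Jupiter dims the sun by about {depth:.1%}")  # about 1.0%
```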
"If we want to know the radius of the planet very accurately, we need to know the radius of the star that that planet transits. It's as simple as that," astronomer Steve Howell, with NASA's Ames Research Center, said at the American Astronomical Society meeting in Indianapolis this week.
Howell and colleagues found that most of the Kepler stars they studied were slightly larger than original estimates and about one-quarter of the stars were at least 35 percent larger than expected.
"That means the exoplanets are larger than we thought," Howell said, adding that Kepler's confirmed planets, which currently number 132, would have had follow-up work done to accurately pinpoint the host stars' sizes.
The analysis, however, is relevant to the 2,740 planet candidates still awaiting confirmation.
By implication, the new results reduce the number of potential Earth-size planets found by Kepler, noted astronomer Mark Everett, with the National Optical Astronomy Observatory, which operates the Kitt Peak telescopes, among others.
Another implication of the research is that bigger stars' so-called "habitable zones" -- the regions where orbiting planets could have surface temperatures suitable for liquid water, a condition believed to be necessary for life -- are farther away than original estimates because bigger stars are brighter and radiate more heat.
"You would need to have planets that are in slightly longer period orbits to stay in the habitable zone," of stars that are slightly bigger than expected, Howell said.
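The habitable-zone shift follows from the inverse-square law: flux at distance d goes as L / d², so the distance receiving a given flux scales as the square root of luminosity. The sketch below adds two simplifying assumptions not stated in the article: luminosity scaling with surface area (L ∝ R²) at fixed temperature, and Kepler's third law (P ∝ d^1.5) for the longer orbital periods Howell mentions.

```python
import math

def habitable_zone_scale(luminosity_ratio: float) -> float:
    """Flux falls off as L / d**2, so the distance at which a planet
    receives a given flux scales as sqrt(L)."""
    return math.sqrt(luminosity_ratio)

# A star 35% larger than first thought, with L ∝ R**2 at fixed
# temperature (an assumption for illustration only):
radius_ratio = 1.35
luminosity_ratio = radius_ratio ** 2

d_scale = habitable_zone_scale(luminosity_ratio)  # habitable zone moves outward
period_scale = d_scale ** 1.5                     # Kepler's third law: P ∝ d**1.5
print(f"habitable zone {d_scale:.2f}x farther out, orbital period {period_scale:.2f}x longer")
```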
It also means that some planets believed to be rocky worlds, based on how close they are to their parent stars, may actually be icy, gas bodies.
Read more at Discovery News
Hubble Captures Huge Explosion on Faraway Star
NASA's Hubble Space Telescope has given astronomers a rare look at an enormous stellar eruption, allowing them to map out the aftermath of such blasts in unprecedented detail.
Hubble photographed an April 2011 explosion in the double-star system T Pyxidis (T Pyx for short), which goes off every 12 to 50 years. The new images reveal that material ejected by previous T Pyx outbursts did not escape into space, instead sticking around to form a debris disk about 1 light-year wide around the system.
This information came as a surprise to the research team.
"We fully expected this to be a spherical shell," study co-author Arlin Crotts of Columbia University said in a statement. "This observation shows it is a disk, and it is populated with fast-moving ejecta from previous outbursts."
The erupting T Pyx star is a white dwarf, the burned-out core of a star much like our own sun. White dwarfs are small but incredibly dense, often packing the mass of the sun into a volume the size of Earth.
T Pyx's white dwarf has a companion star, from which it siphons off hydrogen fuel. When enough of this hydrogen builds up on the white dwarf's surface, it detonates like a gigantic hydrogen bomb, increasing the white dwarf's brightness by a factor of 10,000 over a single day or so.
This happens again and again. T Pyx is known to have erupted in 1890, 1902, 1920, 1944, and 1966, in addition to the 2011 event.
Such recurrent outbursts are known as nova explosions. (Nova is Latin for "new," referring to how suddenly novas appear in the sky.) Novas are distinct from supernovas, even more dramatic blasts that involve the destruction of an entire star.
The new study clarifies just what happens to the material ejected by such outbursts.
"We've all seen how light from fireworks shells during the grand finale will light up the smoke and soot from shells earlier in the show," co-author Stephen Lawrence of Hofstra University said in a statement. "In an analogous way, we're using light from T Pyx's latest outburst and its propagation at the speed of light to dissect its fireworks displays from decades past."
The study represents the first time the area around an erupting star has been mapped in three dimensions, researchers said.
The new Hubble Space Telescope observations also help refine the distance to T Pyx, pegging it at 15,600 light-years from Earth. (Past estimates have ranged between 6,500 and 16,000 light-years.)
Read more at Discovery News
Jun 4, 2013
Animal Origin Stories: Myth vs. Science
How the Turtle Got Its Shell
In the book "Myths and Legends of the Australian Aborigines", there's a story about how the first turtle fashioned a shallow water dish -- known as a coolamon to indigenous Australians -- from a tree, and tied the coolamon to his back for protection along with a strip of bark on his stomach.
Scientists writing in the latest issue of Current Biology, however, have a different story to tell.
The shells, the researchers found, are composed of 50 bones held together in a structure that evolved over millions of years, with its origin reaching back to before the dinosaurs. More than 45 fossils belonging to a 260-million-year-old reptile from South Africa known as Eunotosaurus show that the turtles' ancestors developed a shell as their ribs broadened and then fused together.
While there are countless myths and legends from different cultures around the globe about how the turtle and other animals acquired their unique traits, scientists are ultimately poring over the available evidence in the fossil record and in the genes of animals living today to learn the real story.
How the Zebra Got Its Stripes
The zebra developed its stripes as a means of evading the bites of voracious, disease-carrying horse flies, according to a study published last year. Pest prevention might not be the only function of the zebra's stripes, which could also help with regulating heat and escaping large predators.
Myths surrounding the zebra's patterned pelage tell a different story. According to African bushman legend, back in the days when the Earth was young, water was scarce. A watering hole could be an important resource worth guarding, as a baboon once did, chasing off other animals who came near and building a fire to get through the nights.
One day, a zebra, which was all white at the time, confronted the baboon, and in the confrontation, got burned by the still-burning sticks from the baboon's fire. After being injured, the zebra ran into the savannah, no longer a single color but striped instead.
How the Baboon Got Its Bottom
The story of how the zebra got its stripes is also the tale of how the baboon got its bright red bottom. The zebra didn't merely lose the fight and run away. Instead, it kicked the baboon as hard as it could, sending the primate flying into the air and crashing to the ground right on its butt. The legend is meant to explain not only the baboon's anatomy but also its tempestuous demeanor.
Baboon behinds, of course, aren't the result of injury but rather evolution, according to scientists. Baboons spend a lot of their time sitting. Given that their buttocks are composed of nerveless callouses, they have evolved to do so comfortably for hours on end.
When a female is fertile, she alerts male baboons of her readiness with her swollen, red behind. The larger the swelling, the younger and more often the female tends to breed, according to a 2001 study. So the baboon's red buttocks is not just a built-in seat, but also a signal to other primates.
How the Leopard Got Its Spots
Certainly the most famous story of how the leopard got its spots comes from Rudyard Kipling. According to the British author and adventurer, the leopard first lived on the sandy High Veldt, where the cat looked much like its environment. Eventually, its prey left the High Veldt, grew stripes, spots and blotches, and headed into the forest where they could hide. Advised by a wise baboon, the leopard was told to "go into other spots" and soon realized how the other animals were evading his detection.
Kipling's story -- setting aside the racist overtones omitted from this retelling -- wasn't too far off from how the leopard actually did evolve its spots.
Habitat and behavior, such as moving through trees or being active at night, can determine a cat coat's color and pattern, according to a study published last year in the Proceedings of the Royal Society B. Leopards, jaguars and cats with dark-colored coats typically are active day and night, and roam a variety of habitats. Cats with solid-colored coats tend to be active during the daytime and in open environments.
Read more at Discovery News
If there's one thing that turtle tall tales seem to have in common, it's that we shouldn't underestimate these slow, steady creatures.
Ancient Ball Player Statue Found in Mexico
An ancient granite statue representing a decapitated Mesoamerican ball player has been discovered during repair work on a water pipeline at the pre-Hispanic site of Piedra Labrada, in the southeast of the Mexican state of Guerrero.
The 5-foot-4 inch tall sculpture dates to at least 1,000 years ago and portrays a bow-legged individual with his arms crossed.
“We can say it is a ball player because of the attributes that this statue has,” Juan Pablo Sereno Uribe, an archaeologist at the National Institute of Anthropology and History (INAH), told Discovery News.
“A helmet is carved on the head, while the waist features a yugo. This is like a belt but stronger to protect this part of the body during the ball game,” Sereno Uribe said.
Extending for about 1.24 square miles, Piedra Labrada has so far revealed 50 buildings, five ball game courts and more than 20 sculptures of various sizes depicting anthropomorphic figures, snake heads and snails.
The pre-Columbian ball player was unearthed in the biggest ball game platform, an “I” shaped court about 131 feet long.
“In three of the courts we found sculptures of snake heads. No other court had a ball player statue,” Sereno Uribe said.
Little is known of the game played at the courts.
“The only thing we know, is that they used a very heavy ball made with rubber, and they threw the ball to each other from one side to the other of the court,” Sereno Uribe said.
“In some games they were supposed to hit the ball only with the wrist, which explains the protective yoke carved in the sculpture,” he added.
The statue might have been carved by the indigenous Mixtec people around 600 A.D. It was found in two pieces, the head sliced off at the neck, as if it had been decapitated.
Read more at Discovery News
Earliest Evidence of French Winemaking Discovered
An ancient limestone platform dating back to 425 B.C. is the oldest wine press ever discovered on French soil.
The press is the first evidence of winemaking in what is now modern-day France, according to new research published this week in the journal Proceedings of the National Academy of Sciences. The evidence suggests inhabitants of the region of Etruria got the ancient residents of France hooked. (Etruria covered parts of modern-day Tuscany, Latium and Umbria in Italy.)
"Now we know that the ancient Etruscans lured the Gauls into the Mediterranean wine culture by importing wine into southern France," study researcher Patrick McGovern, who directs the Biomolecular Archaeology Laboratory for Cuisine, Fermented Beverages and Health at the University of Pennsylvania Museum, said in a statement. "This built up a demand that could only be met by establishing a native industry."
The spread of wine
Humans first domesticated the Eurasian grapevine some 9,000 years ago in the Near East, perhaps in what is now Turkey or Iran. Gradually, the intoxicating beverage spread across the Mediterranean Sea, conveyed by Phoenicians and Greeks. By 800 B.C., the Phoenicians were trading wine with the Etruscans, storing it in large jars called amphoras.
Shipwrecks from around 600 B.C. are filled with these Etruscan amphoras, suggesting that residents of the area that is now Italy were by then exporting their own wine. In the coastal town of Lattara, near modern-day Lattes, France, a merchant storage complex full of these amphoras has been found, dating back to the town's heyday of 525 B.C. to 475 B.C.
McGovern and his colleagues analyzed three of these amphoras to find out if they really contained wine. They also analyzed an odd limestone discovery shaped like a rounded platform with a spout, thought to be a press of some sort. Whether the locals used the press to smash olives or grapes was unknown.
Analyzing amphoras
The researchers followed careful standards for the artifacts they analyzed: Amphoras had to be excavated undisturbed and sealed, with their bases intact and available for analysis. They also had to be unwashed and had to contain possible residue.
Only 13 jars met those standards. The researchers chose three representative amphoras for molecular testing, and also tested two later amphoras that almost certainly contained wine for comparison.
The analysis revealed tartaric acid, which is found naturally in grapes and is a major component of wine. Other wine-related acids — including succinic acid, malic acid and citric acid — were all present.
This ancient wine may not have had much in common with what might be found on a tasting trip to Napa or Sonoma, Calif., today. The researchers also found traces of pine resin, likely used for flavor and as a preservative. And the wine contained compounds from herbs, likely rosemary, basil and thyme.
Today, one Greek wine called retsina still uses pine resin for flavor, even though glass bottles have removed the need for it as a preservative.
"It's hard for a palate accustomed to Cabernet and Chardonnay to get accustomed to a wine that tastes like, well, turpentine," according to wineloverspage.com, which also describes retsina wine as "neither subtle nor delicate."
The beginnings of French wine
Of course, ancient wines weren't just for recreational quaffing; they were also used as medicinal mixtures, McGovern said. More importantly, the limestone press contained traces of tartaric acid, revealing that the residents of Lattara not only imported wine, but also made it. The press was in use by about 425 B.C. to 400 B.C., making it the first known evidence of winemaking in what is now France.
The older amphoras, combined with the ancient press, suggest that residents of the area that is now southern France first imported wine and then started cultivation, probably with vines imported from Etruria. Shipwrecks from that region have been found with vine seedlings inside, according to the researchers.
Read more at Discovery News
LA Pollution Is Losing Its Sting
An "eye-stinging" air pollutant in Los Angeles is decreasing due to stricter vehicle emissions standards in Southern California and the United States, a new study that examined emissions of chemicals in the City of Angels found.
The chemical, called peroxyacetyl nitrate (PAN), is associated with eye irritation during smoggy days. And it's not the only thing declining in the city's air: ozone is also on the wane, the study found, confirming ozone measurements done by other researchers.
"To most people the important thing is that air quality has improved, but as scientists we want to understand how it has improved," lead researcher Ilana Pollack told Our Amazing Planet. Pollack works in the chemical sciences division of the National Oceanic and Atmospheric Administration's Earth System Research Laboratory.
"Our work aims to interpret the past and present observations, with the aim of informing future decisions," added Pollack, who is also a research scientist with the Cooperative Institute for Research in Environmental Sciences (CIRES) with the University of Colorado in Boulder.
Trapped in the basin
Ozone is both good and bad for nature. High in the stratosphere, it filters ultraviolet radiation and keeps it from reaching Earth's surface. Closer to the surface, however, it can damage plant life and irritate human lungs.
Both ozone and PAN are major components of smog in Los Angeles, Pollack said. PAN forms in a series of sunlight-driven reactions between compounds found in sources like tailpipe emissions and molecules containing different combinations of nitrogen and oxygen. PAN serves as a reservoir for these nitrogen-oxygen compounds, which can be transported over long distances.
The scientists compiled and examined data from research aircraft (some of the measurements were made by the team in 2010, some by others in previous field studies), and also included archived data from roadside monitors and ground-based instruments.
Los Angeles is particularly vulnerable to the effects of ozone because it lies in a basin, Pollack said.
"Precursor emissions and the secondary pollutants formed from them often get trapped in the 'bowl-like' basin of air that is created by the surrounding mountains," she told LiveScience in an email.
Vehicles still dominant source for emissions
Pollack added that her team has no immediate plans to re-examine Los Angeles for pollutants, but that she hopes to conduct follow-up studies.
"Although emissions of precursors have declined, motor vehicles are still the dominant source of emissions in Los Angeles," she said, but added that the improvement is encouraging.
Read more at Discovery News
Jun 3, 2013
'Tracking in Caves': On the Trail of Pre-Historic Humans
In remote caves of the Pyrenees lie precious, undisturbed remnants of the Ice Age: the foot and hand prints of prehistoric hunters. The tracks have remained untouched for millennia and are in excellent condition. Dr. Tilman Lenssen-Erz of the Forschungsstelle Afrika (Research Centre Africa) at the University of Cologne and Dr. Andreas Pastoors of the Neanderthal Museum in Mettmann are going on an expedition to decode the secrets of the trails. Their idea: to involve the best trackers in the world in the project in order to learn even more about the tracks. San hunters from Namibia, also known as Bushmen, will investigate the tracks. The scientific expedition will span two continents and seven weeks.
From the 9th until the end of June, the expedition will go to Namibia in order to prepare the San for the task at hand. The hunters are excellent trackers who can read details from trails that evade others. “The San are amongst the last known ‘trained’ hunters and gatherers of southern Africa,” explains Tilman Lenssen-Erz. “The tracks in the caves are going to be examined by people who really know something about them.”
The first press conference will be held on July 1 in the Neanderthal Museum in Mettmann before team “Tracking in Caves” sets off for the Pyrenees; it is there that the San hunters will investigate the tracks. Andreas Pastoors wants more information about the number and size of the tracks: “We hope to gain additional information: e.g. whether the person was in a rush, or whether they were maybe ill or carrying something. More information that will give life to the tracks.” The idea behind this is to gain a better understanding of the cultural life of prehistoric man: “Our biggest job is to interpret cave art and to find out what the people did with these cave paintings. We have to gather all information about the context of these images.”
Team “Tracking in Caves”, which consists of the scientists and the experienced trackers Tsamkxao Cigae, C/wi /Kunta and C/wi G/aqo De!u, will then report on their discoveries from the Ice Age caves of Ariège in a press conference at the University of Cologne on July 17. The Khoisan language of Tsamkxao Cigae will be translated into English.
Dr. Tilman Lenssen-Erz of the Forschungsstelle Afrika of the University of Cologne and Dr. Andreas Pastoors of the Neanderthal Museum in Mettmann are in charge of the project. The academics are cave and rock art experts. Tsamkxao Cigae works as a tracker at the Tsumkwe Country Lodge, lives in Tsumkwe, speaks good English and will act as interpreter. C/wi /Kunta works as a tracker for a professional hunter and lives in //xa/oba, a village 20 km north of Tsumkwe, which is also home to a “Living Hunters Museum” where the San’s contemporary and traditional ways of living are exhibited.
C/wi G/aqo De!u works as a tracker for hunting teams and lives in a village about 20 km south-southwest of Tsumkwe.
From Science Daily
First Foodies Expanded Diet 3.5 Million Years Ago
Our ancestors used to dine almost exclusively on leaves and fruits from trees, shrubs and herbs until 3.5 million years ago when a major shift occurred, according to four new simultaneously published studies.
During this shift, early human species like Australopithecus afarensis and Kenyanthropus platyops began to also feast on grasses, sedges and succulent plants — or on animals that ate those plants — the studies, published in the latest Proceedings of the National Academy of Sciences, conclude.
“What we have is chemical information on what our ancestors ate, which in simpler terms is like a piece of food stuck between their teeth and preserved for millions of years,” Zeresenay Alemseged, senior curator and chair of anthropology at the California Academy of Sciences and a co-author on two of the papers, was quoted as saying in a press release.
Alemseged and the other researchers found the “chemical information” in ancient teeth from our early human ancestors.
They explained that teeth contain isotopes that lock in information about what the individual ate. Here’s how that works: Plants can be divided into three categories based on their method of photosynthesis: C3, C4 and CAM. C3 plants (trees, shrubs, and herbs) can be chemically distinguished from C4/CAM plants (grasses, sedges, and succulents) because the latter incorporate higher amounts of the heavier isotope carbon-13 into their tissues. When the plants are eaten, the isotopes become incorporated into the consumer’s tissues. These include the enamel of developing teeth.
Because well-preserved teeth are among the sturdiest remains in the fossil record, scientists can read the relative amounts of carbon-13 in them millions of years after the individual’s demise. Your veggie lifestyle, or not, is therefore locked into your teeth seemingly forever.
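The isotope logic above can be sketched as a tiny classifier. This is an illustrative sketch only: the function name and the delta-13C cutoff values below are assumptions chosen for demonstration, not thresholds from the studies, which calibrate against modern animals of known diet.

```python
# Sketch: coarse diet label from an enamel carbon-13 ratio (delta-13C,
# in per-mil). Higher (less negative) values mean more carbon-13, i.e.
# more C4/CAM plants (grasses, sedges, succulents) in the diet.
# The cutoffs here are hypothetical, for illustration only.

def classify_diet(delta13c_enamel):
    """Return a rough diet category from an enamel delta-13C value (per mil)."""
    if delta13c_enamel < -10.0:
        return "mostly C3 (trees, shrubs, herbs)"
    elif delta13c_enamel > -2.0:
        return "mostly C4/CAM (grasses, sedges, succulents)"
    else:
        return "mixed C3/C4 diet"

print(classify_diet(-13.5))  # a strongly C3-like signal
print(classify_diet(-6.0))   # an intermediate, mixed signal
```

A real analysis would compare many teeth per species and report trends over time rather than labeling single values.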
The four new papers, Alemseged said, “present the most exhaustive isotope-based studies on early human diets to date. Because feeding is the most important factor determining an organism’s physiology, behavior and its interaction with the environment, these finds will give us new insight into the evolutionary mechanisms that shaped our evolution.”
The findings raise some interesting questions:
Were our ancestors broadening their vegetarian diet 3.5 million years ago, or were they becoming carnivorous?
What caused the shift?
An intriguing clue goes back to an earlier paper Alemseged worked on. He and his team found tools for meat consumption dating back to 3.4 million years ago. My guess is that improved technology and perhaps environmental changes led to our becoming more omnivorous then.
Read more at Discovery News
How Can You Tell a Fake Jesus?
A man in Australia claims to be Jesus. A.J. Miller is attracting hundreds of people to his seminars; dozens have moved to his land in Queensland where he calls his movement the Divine Truth. He says he remembered he was Jesus in 2004.
"There were lots of people in the first century who didn't believe I was the Messiah and were offended by what I said -- and in fact I died at the hands of some of them,” he recently told SkyNews. "Unfortunately they didn't learn love either and my suggestion is, even if you don't believe I am Jesus, at least learn how to love."
Other so-called messiahs have come and gone.
"People have done this since Jesus' time; it's not anything new," said Ron Burks, a clinical mental health counselor at Tallahassee Memorial Hospital who co-wrote the book "Damaged Disciples: Casualties of Authoritarian Churches and the Shepherding Movement," after being involved with the Fort Lauderdale/Shepherding movement for 17 years. "The apostle Paul warned of false Christs."
But why are scholars so sure that A.J. Miller isn't Jesus, and that his partner, Australian Mary Luck, is not Mary Magdalene, as she claims?
Although Jesus is one of the most studied figures in history, scholars debate many of the details of his life. Still, many agree on consistencies in his character. For example, the historical Jesus didn’t appear to seek power.
"There's a way of speaking in Greek (which has the same constructs as Aramaic) in the imperative case if you’re giving an order and expect to be obeyed. There are several times (in the Bible) when Jesus said things and he’s not using that case. He never said things in a way where people felt obligated to do what he had said," Burks said.
It's also questionable whether the first Jesus even claimed he was the Messiah.
"We have the historical Jesus vs. the portrayal in the Gospels, and we can reconstruct some reliable things about Jesus," associate professor of religious studies at Grinnell College Henry Rietz said. "We are pretty confident that he proclaimed that the kingdom of God is near. But claiming that he would be the king? Maybe, maybe not. His message was much more about establishing the social order of justice in contrast to the oppressive Roman empire."
Often, Burks says, people who claim to be Jesus simulate his attitude at first, and that makes them attractive for the same reasons people appreciated the historical Jesus.
"But once they get a following and a sense of control over people, power usually corrupts," said Burks. "What happens when groups like this progress is there is almost universally an extreme emphasis on money, sex and power."
In some instances, fake religious leaders have started out with the intention of conning people, but others start out meaning well "and end up deceiving themselves and others," Burks said.
"Once followers latch on and start repeating the leader's teachings, it becomes almost irresistible, and (the leader) start thinking, Gosh, am I really? It can be a combination of self-delusion and deluding a group of people."
Ultimately, things can end tragically, as they did in Waco in 1993 and Jonestown in 1978. To prevent such catastrophes, Rietz suggests that outsiders try to discourage a stark good-versus-evil framing.
"In my opinion, the guy in Australia is not Jesus; he's not the messiah," Rietz said. "We can certainly disagree with him but at the same time, I think we can co-exist; there's a place in this world for all of us. Often people in these movements think of the world in good vs. evil dualistic terms and we, in turn, portray them as evil, and that’s where things often become dangerous."
Instead, he said, we should try to "understand them as human beings and talk about our beliefs."
Read more at Discovery News
There's a Hole in the Sun!
During the latter part of last week, a huge void rotated across the face of the sun. But never fear: it isn’t a sign of the “End Times” or some weird sci-fi stellar malnourishment. This particular hole is a coronal hole. Though it may be a well-known phenomenon, it is noteworthy — it’s the largest coronal hole observed in the sun’s atmosphere in over a year.
Snapped through three of NASA Solar Dynamics Observatory’s (SDO) extreme ultraviolet filters, this coronal hole corresponds to a low-density region of hot plasma.
The sun’s lower corona is threaded with powerful magnetic fields. Some are looped — or “closed” — very low in the corona, creating the beautiful, bright coronal loops that trap superheated gases that generate vast amounts of extreme ultraviolet light, radiation that is produced by multimillion degree plasma (the bright regions in the image, top).
However, there are also “open” field lines, with one end of their magnetic flux anchored in the solar photosphere. These open-field regions, or coronal holes, act like fire hoses, blasting plasma into interplanetary space at an accelerated rate and often intensifying space weather conditions. They are the source of the fast solar wind, which carries solar material toward Earth and often takes only 2-3 days to travel from the sun to Earth.
Through the SDO’s eyes, coronal holes appear dark as there is a very low density of the multimillion degree plasma generating the EUV radiation. And as this dramatic observation demonstrates, to the eyes of the SDO, the sun really does appear to have a hole.
Read more at Discovery News
Jun 2, 2013
A Step Closer to Artificial Livers: Researchers Identify Compounds That Help Liver Cells Grow Outside Body
Prometheus, the mythological figure who stole fire from the gods, was punished for this theft by being bound to a rock. Each day, an eagle swept down and fed on his liver, which then grew back to be eaten again the next day.
Modern scientists know there is a grain of truth to the tale, says MIT engineer Sangeeta Bhatia: The liver can indeed regenerate itself if part of it is removed. However, researchers trying to exploit that ability in hopes of producing artificial liver tissue for transplantation have repeatedly been stymied: Mature liver cells, known as hepatocytes, quickly lose their normal function when removed from the body.
"It's a paradox because we know liver cells are capable of growing, but somehow we can't get them to grow" outside the body, says Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science at MIT, a senior associate member of the Broad Institute and a member of MIT's Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science.
Now, Bhatia and colleagues have taken a step toward that goal. In a paper appearing in the June 2 issue of Nature Chemical Biology, they have identified a dozen chemical compounds that can help liver cells not only maintain their normal function while grown in a lab dish, but also multiply to produce new tissue.
Cells grown this way could help researchers develop engineered tissue to treat many of the 500 million people suffering from chronic liver diseases such as hepatitis C, according to the researchers.
Lead author of the paper is Jing (Meghan) Shan, a graduate student in the Harvard-MIT Division of Health Sciences and Technology. Members of Bhatia's lab collaborated with researchers from the Broad Institute, Harvard Medical School and the University of Wisconsin.
Large-scale screen
Bhatia has previously developed a way to temporarily maintain normal liver-cell function after those cells are removed from the body, by precisely intermingling them with mouse fibroblast cells. For this study, funded by the National Institutes of Health and Howard Hughes Medical Institute, the research team adapted the system so that the liver cells could grow, in layers with the fibroblast cells, in small depressions in a lab dish. This allowed the researchers to perform large-scale, rapid studies of how 12,500 different chemicals affect liver-cell growth and function.
The liver has about 500 functions, divided into four general categories: drug detoxification, energy metabolism, protein synthesis and bile production. David Thomas, an associate researcher working with Todd Golub at the Broad Institute, measured expression levels of 83 liver enzymes representing some of the most finicky functions to maintain.
After screening thousands of liver cells from eight different tissue donors, the researchers identified 12 compounds that helped the cells maintain those functions, promoted liver cell division, or both.
Two of those compounds seemed to work especially well in cells from younger donors, so the researchers -- including Robert Schwartz, an IMES postdoc, and Stephen Duncan, a professor of human and molecular genetics at the University of Wisconsin -- also tested them in liver cells generated from induced pluripotent stem cells (iPSCs). Scientists have tried to create hepatocytes from iPSCs before, but such cells don't usually reach a fully mature state. However, when treated with those two compounds, the cells matured more completely.
Bhatia and her team wonder whether these compounds might launch a universal maturation program that could influence other types of cells as well. Other researchers are now testing them in a variety of cell types generated from iPSCs.
In future studies, the MIT team plans to embed the treated liver cells on polymer tissue scaffolds and implant them in mice, to test whether they could be used as replacement liver tissues. They are also pursuing the possibility of developing the compounds as drugs to help regenerate patients' own liver tissues, working with Trista North and Wolfram Goessling of Harvard Medical School.
Eric Lagasse, an associate professor of pathology at the University of Pittsburgh, says the findings represent a promising approach to overcoming the difficulties scientists have encountered in growing liver cells outside of the body. "Finding a way of growing functional hepatocytes in cell culture would be a major breakthrough," says Lagasse, who was not part of the research team.
Making connections
Bhatia and colleagues have also recently made progress toward solving another challenge of engineering liver tissue, which is getting the recipient's body to grow blood vessels to supply the new tissue with oxygen and nutrients. In a paper published in the Proceedings of the National Academy of Sciences in April, Bhatia and Christopher Chen, a professor at the University of Pennsylvania, showed that if preformed cords of endothelial cells are embedded into the tissue, they will rapidly grow into arrays of blood vessels after the tissue is implanted.
Read more at Science Daily
Despite Mammoth Blood, Cloning Still Unlikely
Despite the recent discovery of a stunningly preserved mammoth, the odds of scientists using it to clone a real-life mammoth anytime soon are still low, experts say.
"To clone a mammoth by finding intact cells -- and, more importantly, an intact genome -- is going to be exceptionally difficult, likely impossible," said Love Dalén, a paleogeneticist at the Swedish Museum of Natural History. "Finding this mammoth makes it slightly less impossible."
Dalén is referring to the amazingly preserved mammoth remains found recently in the icy tundra on an island off the coast of Siberia. Some of the tissue was locked beneath the ice, and when researchers struck it with a pick, blood came flowing out, they said.
Even tissue that looks as juicy and fresh as a steak, however, can be damaged on the cellular level. That would mean very little useful DNA (the molecules that carry the instructions for life) could be extracted, the researchers said.
Stunning Find
Outside researchers haven't examined the tissue and blood that was reportedly preserved in this 10,000-year-old mammoth, but Dalén's conversations with the research team suggest the beast is in incredibly good shape, he said.
"Every mammoth that has ever been found, it's either its bones or dried tissue and skin or pieces of hair, so to find a piece of frozen mammoth that's so well preserved that there's blood inside is pretty amazing," Dalen said. "From a DNA standpoint, it might be the most well-preserved specimen ever found."
Holy Grail
Because flash freezing could preserve the cells, and possibly their genetic information, everyone in the field is searching for the Holy Grail: a mammoth that fell into a lake that froze over immediately and stayed perennially frozen for 10,000 years, said Hendrik Poinar, an evolutionary geneticist at McMaster University in Canada.
The chances of finding such a perfect specimen are incredibly slim, Poinar said.
Even if a mammoth were buried in ice at the time of discovery, it wouldn’t be clear how many times the fossil had thawed and refrozen over the millennia, he added.
Past mammoth discoveries have looked very good on the outside -- even shooting out what initially looked like blood -- only to later be found to have heavily damaged cells that contain unusable DNA.
"When you take back to the lab, you realize that they're pretty heavily degraded," Poinar told LiveScience.
For cloning, intact DNA is needed, as the process requires replacing the DNA in an elephant egg with a mammoth's genome, then gestating the egg inside an elephant.
But time and harsh weather conditions inexorably degrade DNA, splitting it up into millions of tiny snippets that are extremely difficult to piece together. For instance, a 2012 study found that in bone, half of the chemical bonds in DNA break down within 521 years after death, and the genetic material degrades completely by 6.8 million years.
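That 521-year half-life lends itself to a quick back-of-the-envelope check. A minimal Python sketch, assuming simple exponential decay of the bonds (a simplification of the study's model), shows why a 10,000-year-old specimen is such a long shot:

```python
HALF_LIFE_YEARS = 521  # bone-DNA half-life reported by the 2012 study cited above

def intact_fraction(years: float) -> float:
    """Fraction of DNA bonds still intact after `years` of decay."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# The ~10,000 years since this mammoth died span roughly 19 half-lives,
# leaving on the order of two intact bonds per million.
print(f"{intact_fraction(10_000):.2e}")
```

Even under ideal preservation, then, only a minute fraction of the original bonds would survive, which is why long unbroken snippets would be so remarkable.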
Elephant in Disguise?
If the white blood cells -- or more likely, tissues -- from the recently discovered mammoth are in fairly good condition, then there may be longer intact DNA snippets, which should be easier to piece together, Poinar said.
Either way, the mammoth genome would probably be reconstructed by using the elephant genome as the blueprint.
Read more at Discovery News
"To clone a mammoth by finding intact cells -- and, more importantly, an intact genome -- is going to be exceptionally difficult, likely impossible," said Love Dalén, a paleogeneticist at the Swedish Museum of Natural History. "Finding this mammoth makes it slightly less impossible."
Dalen is referring to the amazingly preserved mammoth remains found recently in the icy tundra on an island off the coast of Siberia. Some of the tissue was locked beneath the ice, and when researchers struck it with a pick, blood came flowing out, they said.
ven tissue that looks as juicy and fresh as a steak, however, can be damaged on the cellular level. That would mean very little useful DNA (the molecules that carry the instructions for life), could be extracted, the researchers said.
Stunning Find
Outside researchers haven't examined the tissue and blood that was reportedly preserved in this 10,000-year-old mammoth. but Dalen's conversations with the research team suggest the fossilized beast is in incredibly good shape, he said.
"Every mammoth that has ever been found, it's either its bones or dried tissue and skin or pieces of hair, so to find a piece of frozen mammoth that's so well preserved that there's blood inside is pretty amazing," Dalen said. "From a DNA standpoint, it might be the most well-preserved specimen ever found."
Holy Grail
Because flash freezing could preserve the cells, and possibly their genetic information, everyone in the field is searching for the Holy Grail: a mammoth that fell into a frozen lake, which then immediately froze overnight and stayed perennially frozen for 10,000 years, said Hendrik Poinar, an evolutionary geneticist at McMaster University in Canada.
The chances of finding such a perfect specimen are incredibly slim, Poinar said.
Even if a mammoth were buried in ice at the time of discovery, it wouldn’t be clear how many times the fossil had thawed and refrozen over the millennia, he added.
Past mammoth discoveries have looked very good on the outside -- even shooting out what initially looked like blood -- only to later be found to have heavily damaged cells that contain unusable DNA.
"When you take back to the lab, you realize that they're pretty heavily degraded," Poinar told LiveScience.
For cloning, intact DNA is needed, as the process requires replacing the DNA in an elephant egg with a mammoth's genome, then gestating the egg inside an elephant.
But time and harsh weather conditions inexorably degrade DNA, splitting it up into millions of tiny snippets that are extremely difficult to piece together. For instance, a 2012 study found that in bone, half of the chemical bonds in DNA break down within 521 years after death, and the genetic material degrades completely by 6.8 million years.
Elephant in Disguise?
If the white blood cells -- or more likely, tissues -- from the recently discovered mammoth are in fairly good condition, then there may be longer intact DNA snippets, which should be easier to piece together, Poinar said.
Either way, the mammoth genome would probably be reconstructed by using the elephant genome as the blueprint.
Read more at Discovery News