Archaeologists have discovered two ancient Egyptian skeletons, dating back more than 3,300 years, which were each buried with a toe ring made of copper alloy, the first time such rings have been found in ancient Egypt.
The toe rings were likely worn while the individuals were still alive, and the discovery leaves open the question of whether they were worn for fashion or magical reasons.
Supporting the magical interpretation, one of the rings was found on the right toe of a male, age 35 to 40, whose foot had suffered a fracture along with a broken femur above it.
Unique Rings in a Unique Ancient City
Both skeletons were found in a cemetery just south of the ancient city of Akhetaten, whose name means "Horizon of the Aten." Now called Amarna, the city of Akhetaten was a short-lived Egyptian capital built by Akhenaten, a pharaoh who tried to refocus Egypt's religion on the worship of the sun disc, the "Aten." He was also likely the father of Tutankhamun.
After Akhenaten's death, this attempt to change Egyptian religion unraveled, as his successors denounced him and the city was abandoned. Even so, Anna Stevens, the assistant director of the Amarna Project, said the newly discovered rings are unlikely to be related to the religious changes Akhenaten introduced.
The findings do appear to be the first copper alloy toe rings discovered in ancient Egypt. "I'm not aware of any, but that doesn't mean they don't exist. Bear in mind that if we found something like this in a house, for example, we would have no idea of its purpose," Stevens wrote in an email to LiveScience.
A gold toe ring was previously found on a mummy named Hornedjitef, a priest at Karnak more than 2,200 years ago. The mummy, which resides at the British Museum, has a "thick gold ring on the big toe of his left foot," writes anthropologist Joyce Filer in her book "The Mystery of the Egyptian Mummy" (British Museum Press, 2003).
A Magical Healing Device?
The man whose right foot had been injured was likely in great pain when alive.
He "showed signs of multiple antemortem [before his death] fractures, including of several ribs, the left radius, right ulna, right foot (on which the toe ring was found) and right femur," Stevens wrote. "The fracture of the right femur healed at an angle and must have caused this individual considerable ongoing pain."
The ring was placed on the toe of the injured foot, suggesting perhaps it was intended as a magical healing device of sorts.
"The act of 'binding' or 'encircling' was a powerful magical device in ancient Egypt, and a metal ring, which can be looped around something, lends itself well to this kind of action," Stevens said. "This is a possibility that we will look into further, checking through sources such as the corpus of magico-medical spells that have survived from ancient Egypt, to look for parallels."
However, the skeleton of the second individual with the toe ring, found in 2012, bore no visible signs of a medical condition. Stevens notes that this individual has yet to be studied in depth by bio-archaeologists and its sex is unknown.
Who Were They?
The skeletons were wrapped in textile and plant-stem matting, and both burials had been disturbed by tomb robbers.
Strictly speaking, none of the skeletons in the cemetery were mummified. "There is no evidence from the cemetery as a whole of attempts to mummify the bodies, in terms of the removal of internal organs (we quite often find remains of brain within the skulls) or the introduction of additives to preserve tissue (the bodies survive largely as skeletons)," Stevens wrote. "But in a way the wrapping of the bodies within textile and matting is a step towards preserving the shape of the body, and a form of simple mummification."
Figuring out who these individuals were in life is tricky, Stevens said. This cemetery appears to represent a "wide slice" of the city's society. These people were not wealthy enough to get buried in a rock-cut tomb but could afford, and were allowed, the simple burials seen at this cemetery.
"They [the two individuals] probably lived, like most citizens of Amarna, in a small house adjacent to that of a larger villa belonging to one of the city's officials, for whom they provided services and labor in exchange for basic provisions, especially grain," Stevens said.
Read more at Discovery News
Jul 6, 2013
Decoding the Higgs Boson: Is it the Real Deal?
On July 4, 2012, physicists announced the groundbreaking discovery of a subatomic particle that was “consistent” with the Higgs boson. Using data from two Large Hadron Collider (LHC) experiments — CMS and ATLAS — something with the approximate energy of the theoretical particle had been spotted.
“We have observed a new boson,” announced Joe Incandela, CMS lead physicist, to cheers from the audience at the special meeting in Geneva, Switzerland. Had the final piece of the Standard Model finally been found? Was this the end of physics as we knew it?
A year after that historic day, physicists are still trying to characterize this “new boson,” and although it certainly looks like the much sought-after Higgs boson, can the quantum hunt finally be laid to rest?
Well, in typical particle physics style, scientists are still working on it.
“We have established without a doubt that we have a new particle, and that it is a boson. What remains to be done is confirm that it is a Higgs,” said Pauline Gagnon, a CERN physicist and member of the 2012 discovery team.
Wait, we’re still waiting for “confirmation”?
The Higgs boson is the last piece of the Standard Model — the all-encompassing theory that describes the nature of subatomic particles in our Universe. Theorized in the 1960s by several physicists, including Peter Higgs, the particle has been the focus of increasingly powerful particle accelerators. The 7.5 billion euro ($9.5 billion) LHC on the Franco-Swiss border was built, in part, to find the Higgs boson.
Investing all this time, energy and money into seeking out a subatomic particle has created the most complex experiment in human history, all in an effort to understand one of the most fundamental puzzles in physics. The Higgs boson is an “exchange particle” that endows all matter with mass; without it, the Universe as we know it wouldn’t exist. In short, for the Standard Model to be correct, the Higgs must exist, otherwise quantum physics is wrong and a revolution in physics awaits. (Physicists love physics revolutions, so not every scientist was overjoyed to find a boson exactly where the Higgs boson should be hiding.)
Since last year’s big announcement, the complex problem of characterizing the candidate particle has kept LHC physicists busy. Though a boson certainly exists at the energy level predicted for a Higgs boson, it’s not necessarily the Higgs boson. Some physicists support the idea that there is just one type of Higgs boson, whereas superstring theory proponents reckon there are at least five.
“Have we found the boson, or perhaps one of several predicted by other theories? Until now, everything indicates that this is the Standard Model boson,” Gagnon told the AFP news agency. “It has the allure, the look, the song and the dance of the Higgs boson.”
Indeed, shortly after the detection of the “new boson,” physicists had to work on understanding the other physical characteristics of the particle. To be a Higgs boson, the particle must have zero spin and positive parity. “Spin” is a quantum measurement of angular momentum and “parity” is a measure of how a quantum particle’s mirror image behaves. After analyzing 2.5 times more data than was available last year, physicists in March announced that their Higgs candidate had “no spin and positive parity.” So far, so good.
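In conventional shorthand (a compact summary in standard notation, not a formula from the article), the Standard Model expectation that the March result supports is

$$ J^{P} = 0^{+} $$

that is, spin $J = 0$ and positive parity $P = +1$, the combination favored over alternative hypotheses such as a pseudoscalar $0^{-}$ or a graviton-like $2^{+}$.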
The more analysis that is done, the more the boson looks like a bona fide Higgs. But some physicists are still trying to rule out the possibility that they are being duped by nature, while others aren’t convinced that they’ll ever be able to say that this Higgs is the one and only Higgs.
Read more at Discovery News
Jul 5, 2013
Seeing Sea Stars: The Missing Link in Eye Evolution?
A study has shown for the first time that starfish use primitive eyes at the tip of their arms to visually navigate their environment. Research headed by Dr. Anders Garm at the Marine Biological Section of the University of Copenhagen in Denmark, showed that starfish eyes are image-forming and could be an essential stage in eye evolution.
The researchers removed starfish with and without eyes from their food-rich habitat, the coral reef, and placed them on the sandy bottom one metre away, where they would starve. They monitored the starfishes' behaviour from the surface and found that while starfish with intact eyes headed toward the reef, starfish without eyes wandered randomly.
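As a purely illustrative sketch (not the study's actual analysis), the difference between directed and random movement can be summarized with a simple circular statistic such as the mean resultant length of the observed headings; the heading values below are hypothetical:

```python
import math

def mean_resultant_length(headings_deg):
    """Circular concentration statistic: ~1 means all headings agree, ~0 means they are random."""
    xs = [math.cos(math.radians(h)) for h in headings_deg]
    ys = [math.sin(math.radians(h)) for h in headings_deg]
    return math.hypot(sum(xs) / len(xs), sum(ys) / len(ys))

# Hypothetical headings measured relative to the reef direction (0 degrees), for illustration only.
intact_eyes = [355, 10, 5, 350, 15, 2, 8, 348]        # clustered toward the reef
eyes_removed = [12, 160, 275, 88, 201, 310, 45, 130]  # scattered more or less at random

print("intact:", round(mean_resultant_length(intact_eyes), 2))    # close to 1 -> directed walking
print("removed:", round(mean_resultant_length(eyes_removed), 2))  # close to 0 -> random walking
```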
Dr Garm said: "The results show that the starfish nervous system must be able to process visual information, which points to a clear underestimation of the capacity found in the circular and somewhat dispersed central nervous system of echinoderms."
Analysing the morphology of the photoreceptors in the starfish eyes, the researchers further confirmed that they constitute an intermediate state between the two large known groups of rhabdomeric and ciliary photoreceptors, in that they have both microvilli and a modified cilium.
Dr Garm added: "From an evolutionary point of view it is interesting because the morphology of the starfish eyes along with their optical quality (quality of the image) is close to the theoretical eye early in eye evolution when image formation first appeared. In this way it can help clarify what the first visual tasks were that drove this important step in eye evolution, namely navigation towards the preferred habitat using large stationary objects (here the reef)."
Most known starfish species possess a compound eye at the tip of each arm which, except for the lack of true optics, resembles an arthropod compound eye. Although these eyes have been known for about two centuries, visually guided behaviour in starfish had never been documented before.
From Science Daily
New Insights Into the Early Bombardment History On Mercury
The surface of Mercury is rather different from those of well-known rocky bodies like the Moon and Mars. Early images from the Mariner 10 spacecraft unveiled a planet covered by smooth plains and cratered plains of unclear origin. A team led by Dr. Simone Marchi, a Fellow of the NASA Lunar Science Institute located at the Southwest Research Institute (SwRI) Boulder, Colo., office, collaborating with the MESSENGER team, including Dr. Clark Chapman of the SwRI Planetary Science Directorate, studied the surface to better understand if the plains were formed by volcanic flows or composed of material ejected from the planet's giant impact basins.
Recent images from NASA's MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft provided new insights showing that at least the younger plains resulted from vigorous volcanic activity. Yet scientists were unable to establish limits on how far into the past this volcanic activity may have occurred, or how much of the planet's surface may have been resurfaced by very old volcanic plains.
Now, a team of scientists has concluded that the oldest visible terrains on Mercury have an age of 4 billion to 4.1 billion years, and that the first 400 to 500 million years of the planet's evolution are not recorded on its surface. To reach its conclusion, the team measured the sizes and numbers of craters on the most heavily cratered terrains using images obtained by the MESSENGER spacecraft during its first year in orbit around Mercury. Team members then extrapolated to Mercury a model that was originally developed for comparing the Moon's crater distribution to a chronology based on the ages of rock samples gathered during the Apollo missions.
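To give a feel for the kind of extrapolation described above, the sketch below inverts the Moon-calibrated crater chronology function of Neukum et al. (2001) to turn a crater density into a model age. The MESSENGER team used a version of this approach adapted to Mercury's impact environment; the formula and the example density here are the lunar relation and a made-up value, for illustration only.

```python
import math
from scipy.optimize import brentq

def lunar_crater_density(age_gyr):
    """Neukum et al. (2001) lunar chronology: cumulative number of craters at least
    1 km across, per square km, expected on a surface of the given age (in Gyr)."""
    return 5.44e-14 * (math.exp(6.93 * age_gyr) - 1.0) + 8.38e-4 * age_gyr

def age_from_density(n1_per_km2):
    """Invert the chronology function to get a model age from a measured crater density."""
    return brentq(lambda t: lunar_crater_density(t) - n1_per_km2, 0.0, 4.6)

# Hypothetical measured density for a heavily cratered terrain (craters >= 1 km, per km^2).
n1 = 3.0e-3
print(f"Model age: {age_from_density(n1):.2f} billion years")  # prints roughly 3.2 billion years for this density
```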
"By comparing the measured craters to the number and spatial distribution of large impact basins on Mercury, we found that they started to accumulate at about the same time, suggesting that the resetting of Mercury's surface was global and likely due to volcanism," said lead author Dr. Simone Marchi, who has a joint appointment between two of NASA's Lunar Science Institutes, one at the SwRI in Boulder and another at the Lunar and Planetary Institute in Houston.
Those results set the age boundary for the oldest terrains on Mercury to be contemporary with the so-called Late Heavy Bombardment (LHB), a period of intense asteroid and comet impacts recorded in lunar and asteroidal rocks and by the numerous craters on the Moon, Earth, and Mars, as well as Mercury.
"Meanwhile, the age of the youngest and broadest volcanic provinces visible on Mercury was determined to be about 3.6 billion to 3.8 billion years ago, just after the end of the Late Heavy Bombardment," Marchi said.
Read more at Science Daily
The Great British Alien Hunt Begins?
The Search for Extraterrestrial Intelligence, or SETI, is one of the most profound — yet speculative — scientific pursuits of this generation. There is no evidence that any extraterrestrial life exists in our galactic neighborhood, yet we still try to ‘listen’ out for a sufficiently advanced alien race across the interstellar void. And now the SETI effort won’t be restricted to US-managed radio antennae — the British are joining the hunt.
Currently, SETI efforts are funded by private donations, but the UK SETI Research Network (UKSRN), comprising scientists from 11 institutions, is eyeing government funds to give the search a turbo-boost.
“If we had one part in 200 — half a percent of the money that goes into astronomy at the moment — we could make an amazing difference. We would become comparable with the American effort,” said Alan Penny, UKSRN coordinator and researcher at the University of St Andrews, in an interview with BBC News. The UKSRN held its first meeting at the National Astronomy Meeting (NAM2013) in St Andrews, Scotland, on Friday (July 5).
“I don’t know whether (aliens) are out there, but I’m desperate to find out. It’s quite possible that we’re alone in the Universe. And think about the implications of that: if we’re alone in the Universe then the whole purpose in the Universe is in us. If we’re not alone, that’s interesting in a very different way.”
“There are billions of planets out there. It would be remiss of us not to at least have half an ear open to any signals that might be being sent to us,” added Tim O’Brien of Jodrell Bank, a radio observatory whose telescopes have been used for SETI projects in the past.
The network is applying for one million pounds ($1.5 million) per year for time on radio telescopes and data analysis.
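Those two figures can be cross-checked with a line of arithmetic: if £1 million a year is roughly half a percent of UK astronomy spending, the implied total is around £200 million per year (a back-of-the-envelope check, not a figure reported in the article).

```python
requested_gbp_per_year = 1_000_000   # UKSRN's requested annual budget
fraction_of_astronomy = 1 / 200      # "one part in 200 -- half a percent"

implied_astronomy_spend = requested_gbp_per_year / fraction_of_astronomy
print(f"Implied UK astronomy spend: ~{implied_astronomy_spend / 1e6:.0f} million pounds per year")
# -> ~200 million pounds per year
```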
Sadly, justifying public funds to back a project with no definite final outcome can be a tricky proposition, especially in the existing climate of government science cuts and fiscal woes. It is therefore hard to see why the Science and Technology Facilities Council (STFC), the research council that funds UK astronomy and particle physics, would support such an effort.
“Continued flat-cash science budget awards are constantly eroding STFC’s buying powers, causing the UK to withdraw from existing productive facilities such as the United Kingdom Infrared Telescope and the James Clerk Maxwell Telescope,” said Paul Crowther of Sheffield University. “(British astronomy) faces the prospect of a reduced volume of research grants, and participation in future high-impact facilities is threatened. I would be shocked if STFC’s advisory panels rated the support of UKSRN higher than such scientifically compelling competition.”
But in an ideal universe, where science receives the funding it deserves, justifying money on the hunt for intelligent extraterrestrials isn’t such a hard-sell.
For starters, analyzing data from radio antennas for artificial signals isn’t such a resource-heavy project. Using existing antennas and arrays of antennas (hooked up as interferometers), SETI projects can “piggyback” on surveys being carried out by other research groups, and vice versa. Also, the development of technologies to sift artificial signals out of cosmic noise will have tangible benefits for radio astronomy and communications techniques.
And then there’s the public interest in SETI projects. Undoubtedly there will be those who see any SETI effort as a waste of time, but reaching the level of intelligence and technological know-how needed to seriously entertain the prospect that life on Earth is not the only life in our galaxy marks a profound philosophical milestone in the evolution of our species.
As embodied in the privately-funded Lone Signal project that was launched last month, the public interest in “reaching out” to the stars appears to be unwavering. Lone Signal is a Messaging Extraterrestrial Intelligence (METI) project that aims to be active for many decades, beaming crowd-sourced messages to the stars in the hope that some benevolent ETI is listening and asking the same questions we are.
Of course, as with any METI effort, whether we should be beaming “proof of life” radio waves to nearby stars at all is questionable — who knows if the Milky Way’s inhabitants are friendly? We could be living in an interstellar ecosystem where humanity is a mere ants’ nest. Should we really be broadcasting our presence in spite of the risk of getting trampled?
Alternatively, should we endeavor to be “radio silent,” and risk a lonely existence, never to make contact with that neighboring, yet invisible, hypothetical alien civilization?
Read more at Discovery News
Save Your Pennies: Copper Blocks Fish Sense
When Frank Sinatra threw “three coins in a fountain,” Old Blue Eyes may have endangered fish swimming in those waters. In an experiment, copper-contaminated waters blocked fishes’ ability to smell the odor released by other fish when in danger.
However, people willing to spend a bit more on their wishes don’t stifle fishes’ senses. The metal nickel didn’t seem to block the detection of danger signaling scents released by fish during a predator attack.
“Our research shows that copper affects the function of a specific type of olfactory neurons in fish, preventing them from detecting important olfactory signals used to detect fish injured by predation,” said Bill Dew of the University of Lethbridge in Canada in a press release. “This means that fish in an environment contaminated with copper would not be able to detect compounds released during a predation event and potentially not avoid predators, while fish in a nickel contaminated environment would be able to detect these compounds and undertake predator-avoidance behaviors.”
Dew used fathead minnows in his experiment. Dew’s study will be presented today at a meeting of the Society for Experimental Biology.
Copper contamination can occur when hopeful humans toss pennies into natural wells. For example, in Bermuda, tourists make wishes before throwing coins into deep cave pools filled with salty water, reported the National Oceanic and Atmospheric Administration. The copper in the coins dissolves rapidly in salt water and results in toxic levels of copper in the pools.
Copper mining and its byproducts can contaminate water with copper, as well. Copper mines and waste from the operations can pollute water with so much metal that the water turns turquoise blue. Runoff of copper-based fungicides from farms and vineyards also can pollute waterways.
Read more at Discovery News
Jul 4, 2013
Earliest Evidence of Using Flower Beds for Burial Found in Raqefet Cave in Mt. Carmel
The earliest evidence of using flower beds for burial, dating back 13,700 years, was discovered in Raqefet Cave in Mt. Carmel (northern Israel) during excavations led by the University of Haifa. In four different graves from the Natufian period, dating back 13,700-11,700 years, dozens of impressions of Salvia plants and other species of sedges and mints (the Lamiaceae family) were found under human skeletons.
"This is another evidence that as far back as 13,700 years ago, our ancestors, the Natufians, had burial rituals similar to ours, nowadays," said Prof. Dani Nadel, from the University of Haifa, who led the excavations.
The Natufians, who lived some 15,000-11,500 years ago, were among the first people in the world to abandon nomadic life and settle in permanent settlements, setting up structures with stone foundations. They were also among the first to establish cemeteries -- confined areas in which they buried their community members for generations. The cemeteries were usually located in the first chambers of caves or on terraces located below the caves. In contrast, earlier cultures used to bury their dead (if at all) randomly. Mt. Carmel was one of the most important and densely populated areas in the Natufian settlement system. Its sites have been explored by University of Haifa archeologists for decades.
A Natufian cemetery containing 29 skeletons of babies, children and adults was discovered at Raqefet cave. Most of the burials were single interments, although some were double, in which two bodies were interred together in the same pit. In four graves, researchers found plant impressions on a thin layer of mud veneer, which was presumably spread like plaster inside the grave. Before burying the bodies, the Natufians spread a bed of blooming green plants inside the graves. The impressions are mostly of plants with square stems, common among the mint family. In one instance, flowering stems of Judean sage were found, one of three sage species currently growing in the vicinity of the cave. This led the researchers to suggest that the burials were conducted in springtime, using colorful and aromatic flowers. The Raqefet cave remains are the earliest example found of graves lined with green and flowering plants.
According to the researchers, flower beds were not restricted to adults alone; graves of children and adolescents were also lined with flowers. Since the mud veneer doesn't include impressions of stone objects and bones, despite the presence of thousands of these hard and durable artifacts within the cave and grave fills, the researchers suggest that the green lining was thick and continuous, covering the entire grave floor and sides, preventing objects from leaving impressions on the moist mud veneer.
The researchers even found evidence of Natufian bedrock chiseling in the graveyard, demonstrating grave preparation to fit their needs. The Natufians also chiseled a variety of mortars and cupmarks in close vicinity to the graves and on rock exposures on the terrace below the cave. The graves were directly radiocarbon dated. Samples from three different human skeletons were dated to 13,700-11,700 years ago.
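For context, "directly radiocarbon dated" means measuring how much carbon-14 remains in the bone itself. The sketch below shows the conventional age calculation, using the Libby half-life convention and a made-up measurement; real dates are then calibrated against tree-ring records, which for this period makes the calendar ages older.

```python
import math

LIBBY_MEAN_LIFE_YEARS = 8033  # conventional value, derived from the Libby half-life of 5,568 years

def conventional_radiocarbon_age(fraction_c14_remaining):
    """Convert a measured carbon-14 fraction (relative to the modern standard)
    into an uncalibrated radiocarbon age in years before present."""
    return -LIBBY_MEAN_LIFE_YEARS * math.log(fraction_c14_remaining)

# Hypothetical measurement: about 23 percent of the original carbon-14 remains.
print(f"{conventional_radiocarbon_age(0.23):,.0f} radiocarbon years BP")
# -> roughly 11,800 radiocarbon years BP, before calibration to calendar years
```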
"The Natufians lived at a time of many changes -- a time when population density was rising and the struggle for land, food and resources was increasing. The establishment of grave yards and unique burial rituals reflects the complexity of the Natufian society. Communal burial sites and elaborate rituals such as funeral ceremonies must have strengthened the sense of solidarity among the community members, and their feeling of unity in the face of other groups," concluded Prof. Nadel.
The project was led by researchers from the Zinman Institute of Archaeology at the University of Haifa, with expert partners from the Hebrew University of Jerusalem, the Weizmann Institute, the Max Planck Institute (Germany), the Centre National de la Recherche Scientifique (Paris) and the Anthropology Department at the University of Texas at Austin (USA). The research results were published in the Proceedings of the National Academy of Sciences.
Read more at Science Daily
"This is another evidence that as far back as 13,700 years ago, our ancestors, the Natufians, had burial rituals similar to ours, nowadays," said Prof. Dani Nadel, from the University of Haifa, who led the excavations.
The Natufians, who lived some 15,000-11,500 years ago, were of the first in the world to abandon nomadic life and settle in permanent settlements, setting up structures with stone foundations. They were also among the first to establish cemeteries -- confined areas in which they buried their community members for generations. The cemeteries were usually located at the first chambers of caves or on terraces located below the caves. In contrast, earlier cultures used to bury their dead (if at all) randomly. Mt. Carmel was one of the most important and densely populated areas in the Natufian settlement system. Its sites have been explored by University of Haifa archeologists for dozens of years.
A Natufian cemetery containing 29 skeletons of babies, children and adults was discovered at Raqefet cave. Most of the burials were single interments, although some were double, in which two bodies were interred together in the same pit. In fours graves, researchers found plant impressions on a thin layer of mud veneer which was presumably spread like plaster inside the grave. Before burying the bodies, the Natufians spread a bed of blooming green plants inside the graves. The impressions are mostly of plants with square stems, common among the mint family. In one incident, flowering stems of Judean Sage were found, one of three Sage species currently growing in the vicinity of the cave. This led the researchers to suggest that the burials were conducted in springtime, using colorful and aromatic flowers. The Raqefet cave remains are the earliest example found of graves lined with green and flowering plants.
According to the researchers, apparently flowerbeds were not restricted to adults alone and graves of children and adolescents were also lined with flowers. Since the mud veneer doesn't include impressions of stone objects and bones, despite the presence of thousands of these hard and durable artifacts within the cave and grave fills, the researchers suggest that the green lining was thick and continuous, covering the entire grave floor and sides, preventing objects from leaving impressions on the moist mud veneer.
The researchers even found evidence of Natufian bedrock chiseling in the graveyard, demonstrating grave preparation to fit their needs. The Natufians also chiseled a variety of mortars and cupmarks in close vicinity to the graves and on rock exposures on the terrace below the cave. The graves were directly radiocarbon dated. Samples from three different human skeletons were dated to 13,700-11,700 years ago.
"The Natufians lived at a time of many changes -- a time when population density was rising and the struggle for land, food and resources was increasing. The establishment of grave yards and unique burial rituals reflects the complexity of the Natufian society. Communal burial sites and elaborate rituals such as funeral ceremonies must have strengthened the sense of solidarity among the community members, and their feeling of unity in the face of other groups," concluded Prof. Nadel.
The project was led by researchers from the Zinman Institute of Archaelogy at the University of Haifa, with expert partners from the Hebrew University of Jerusalem, the Weizmann Institute, the Max Planck Institute (Germany), The Centre National de la Recherche Scientifique (Paris) and the Anthropology Department at the University of Texas at Austin (USA). The research results were published in the Proceedings of the National Academy of Sciences.
Read more at Science Daily
Farming Sprang Up In Multiple Places
In the dry foothills of Iran’s Zagros Mountains, a new picture of mankind’s first farmers is emerging from an archaeological dig that has turned up a jackpot of artifacts and plant grains.
People who lived in the region began cultivating cereal grains as early as 11,700 years ago, according to the new analysis, which adds Iran to the list of places in the Near East where the first inklings of farming emerged just after the end of the last Ice Age.
Once people figured out how to cultivate, and then domesticate, plants and animals, they eventually developed settlements and agricultural economies that formed the foundation of modern civilizations, said Nicholas Conard, head of the archaeological team that made the new discoveries.
Along with previous work in nearby regions, the new study suggests that farming began simultaneously over a widespread area, offering an important window into a pivotal time in human history.
“There is not just one village where you can say, ‘This is where domestication occurred,’” said Conard, of the University of Tübingen in Germany. “It wasn’t as if the development of agriculture was like someone flipped a light in one place and from that point of origin, agriculture spread. It’s a process that occurred in a whole range of places.”
The study of farming’s origins has long focused on a region known as the Fertile Crescent, which encompasses the land around modern-day Syria, Israel, Jordan, Turkey and Iraq. When the last Ice Age ended around 12,000 years ago, the Fertile Crescent’s climate and terrain became ripe for crops to grow. Previous digs have turned up evidence of the very beginnings of cultivation in a handful of sites in the western part of that region.
In 2009 and 2010, archaeologists were finally able to excavate a site called Chogha Golan at the base of Iran’s Zagros mountains on the eastern edge of the Fertile Crescent, much further east than previous searches for evidence of early farming.
As they dug through 26 feet of sediment dating back nearly 12,000 years, the researchers unearthed an amazing array of artifacts, including clay figurines, animal bones, ornaments, mortars, grinding tools and signs of burials. The sequence of objects and materials showed that fairly large groups of people lived in the area between 12,000 and 9,800 years ago, the researchers report today in the journal Science.
For the new study, which is likely to be the first of many that will emerge from the site, Conard and colleagues focused on an extraordinarily rich bounty of botanical remains, including grains of barley and wheat. Out of 717 collected samples, the new paper reports on analyses of just 25, Conard said, which turned up 21,000 plant remains.
Over 2,000 years of prehistoric living, changes in the structure of plant remains allowed the team to see progress from crude plant management to true domestication. In the earliest days of occupation at the site, people were planting wild varieties of barley, wheat, lentils, grass peas and other plants. Over time, the part of the plants where the grains attach changed in ways that suggest people began breeding and domesticating the crops to be better for harvesting and processing.
The new discoveries push eastward the boundaries of the region where experts now think agriculture began, said George Willcox, an archaeologist at the French National Center for Scientific Research and the University of Lyon.
Without a written record, no one can say whether ideas spread from population to population at the time or if people migrated, bringing crops and knowledge with them. It’s also possible that different groups of people came up with the same ideas around the same time.
Whatever the details, the new work suggests that there was no single point where farming began.
Read more at Discovery News
Mystery Intergalactic Radio Bursts Detected
Astronomers were on a celestial fishing expedition for pulsing neutron stars and other radio bursts when they found something unexpected in archived sky sweeps conducted by the Parkes radio telescope in New South Wales, Australia.
The powerful signal, which lasted for just milliseconds, could have been a fluke, but then the team found three more equally energetic transient flashes, all far removed from the galactic plane and coming from different points in the sky.
Analysis later indicated that, unlike most cosmic radio signals that originate in the Milky Way or a nearby neighbor galaxy, these four seem to have come from beyond.
Whatever triggered the bursts has come and gone. The signals, detected between February 2011 and January 2012, were one-time events, so little follow-up work can be done.
What is known is that in just a few milliseconds, each of the signals released about as much energy as the sun emits in 300,000 years.
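That comparison is easy to sanity-check: multiplying a standard value for the sun's luminosity by 300,000 years gives the implied energy of each burst (illustrative arithmetic on the article's own figures, not the paper's calculation).

```python
SOLAR_LUMINOSITY_W = 3.828e26   # watts
SECONDS_PER_YEAR = 3.156e7

burst_energy_j = SOLAR_LUMINOSITY_W * 300_000 * SECONDS_PER_YEAR
print(f"Energy per burst: ~{burst_energy_j:.1e} joules")                      # ~3.6e39 J
print(f"Released over ~3 milliseconds: ~{burst_energy_j / 0.003:.1e} watts")  # enormous peak power
```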
“They have come such a long way that by the time they reach the Earth, the Parkes telescope would have to operate for 1 million years to collect enough to have the equivalent energy of a flying mosquito,” astronomer Dan Thornton, with the University of Manchester in the United Kingdom, wrote in an email to Discovery News.
Scientists have all kinds of theories about what exotic phenomena may have triggered the bursts. The contenders include colliding magnetars, which are neutron stars with super-strong magnetic fields; evaporating black holes; and gamma ray bursts that involve a supernova.
Or, as Cornell University astronomer James Cordes points out, the bursts could be from an entirely new type of high-energy astrophysical event.
“It is still early days for identifying the astrophysical origins of such common but (so far) rarely detected events,” Cordes wrote in an article published in this week’s Science.
Whatever is happening is probably a relatively common, though difficult to detect, phenomenon. Extrapolating from the research, astronomers estimate there are as many as 10,000 similar high-energy millisecond radio bursts happening across the sky every day.
“This might seem common, and it is, but you need a big telescope to detect them,” Thornton said.
Typically, telescopes only look at a very small patch of the sky at any one time, he added, “so you have to look for a long time before seeing many. This is why we have only detected a handful so far.”
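A rough version of that sky-coverage argument, with an assumed field of view (the numbers below are illustrative, not the survey's actual parameters): if roughly 10,000 bursts go off over the whole sky each day, a beam covering a fraction of a square degree catches only one every several days of staring.

```python
ALL_SKY_SQ_DEG = 41_253           # total solid angle of the sky, in square degrees
bursts_per_day_all_sky = 10_000   # the extrapolated all-sky rate quoted above

field_of_view_sq_deg = 0.6        # assumed instantaneous field of view, for illustration only
fraction_of_sky = field_of_view_sq_deg / ALL_SKY_SQ_DEG

bursts_in_view_per_day = bursts_per_day_all_sky * fraction_of_sky
print(f"Bursts in view per day: {bursts_in_view_per_day:.2f}")                        # ~0.15
print(f"Average days of observing per detection: {1 / bursts_in_view_per_day:.0f}")   # ~7
```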
Similar radio signals have been found before, but astronomers could never nail down whether they came from inside or beyond the galaxy.
Read more at Discovery News
White Dwarf Morphs into Massive Pulsing Crystal
Astronomy lets us peer into some of the strangest corners of physics in ways that are incredibly hard (or impossible) to reproduce in a laboratory setting. For example, a recent discovery of pulsations from a massive white dwarf star has allowed astronomers to imagine a crystallized, semi-solid ball of oxygen and neon the size of our planet.
To understand where white dwarfs come from, you must first look at the evolution of “normal” or main-sequence stars. These begin their lives by fusing hydrogen into helium in their cores, powering the heat and light of the star as some of the mass of that interaction is turned into energy. For most stars, the core will eventually reach a state when there is not enough hydrogen in the core for this process to continue, and the star evolves and eventually dies.
For a star with the mass of our sun, helium can undergo nuclear fusion for a short time, eventually creating carbon and oxygen. Stars more massive than the sun will do this as well, and those with around seven times the sun’s mass will even achieve stable carbon fusion to produce neon. However, for such stars, that’s the limit, and once that fusion process has run down, the nuclear power plant at the center shuts down and the outer layers of the star are lost to interstellar space. What is left behind is the former core of the star, now called a white dwarf.
A white dwarf is an extremely dense and hot ember of a star. Typical white dwarf masses are a little more than half the mass of our sun. There is great astrophysical interest, however, in the higher-mass white dwarfs, since these are the ones that create novae and even Type Ia supernovae, which have become an important tool for measuring the accelerating expansion of the Universe.
A team using the 2.1-meter telescope at McDonald Observatory in Texas set out to find and characterize these high-mass white dwarf stars. They came across GD 518, which by its spectrum was shown to have a surface temperature of about 12,000 degrees Celsius, twice the temperature of the surface of our sun. The mass was determined by looking at the absorption lines in the spectrum due to hydrogen. These lines were broadened by an extreme surface gravity, a logarithmic surface gravity (log g) of about 9 in cgs units, which is roughly a million times the gravity we feel at Earth’s surface. This indicates that the star has a mass of 1.2 times the mass of the sun and, according to stellar models, should be made of oxygen and neon.
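Those two spectroscopic quantities, mass and surface gravity, fix how compact the star must be through Newton's relation g = GM/R². A minimal sketch, assuming the log g of about 9 mentioned above (values rounded; not a calculation from the article):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN_KG = 1.989e30
R_EARTH_M = 6.371e6

mass_kg = 1.2 * M_SUN_KG        # mass inferred for GD 518
g_surface = 1e9 * 1e-2          # log g ~ 9 in cgs (cm/s^2), converted to m/s^2

radius_m = (G * mass_kg / g_surface) ** 0.5
print(f"Implied radius: ~{radius_m / 1e3:.0f} km (~{radius_m / R_EARTH_M:.1f} Earth radii)")
# -> roughly 4,000 km, comfortably planet-sized, as described in the opening paragraph
```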
Since astronomers can’t go out and sample the interior of a star, they need other methods for understanding what is inside. In addition to the theoretical models, the remnants of bright novae, or partially exploded white dwarf stars, have shown oxygen, carbon, and other such materials left behind. But the group in Texas was looking for pulsations, or variability, in these high-mass white dwarfs.
So, with careful observations, they discovered that GD 518 was indeed varying in brightness on a time scale of six to ten minutes. Variable stars change brightness because they actually “pulse,” expanding and contracting ever so slightly because of some instability inside the star. Predictions of white dwarf pulsations depend also on how much of the interior has crystallized, or solidified. The scale of these pulsations of GD 518 indicates that a significant fraction of its oxygen-neon interior is in this crystal state.
This careful following of clues using just the light of the star, coupled with predictions of the physics of stars, thus leads us to the conclusion that in this faint, hot white dwarf star, we are seeing the crystallized remains of what was once a large, brightly burning star, yet one not quite large enough to blow itself apart in a supernova. The continued study of these rare white dwarfs will provide insight into other types of supernovae, however, and ensure that we’re making the right measurements of the Universe on the largest scales.
Read more at Discovery News
Jul 3, 2013
Great Ape Genetic Diversity Catalog Frames Primate Evolution and Future Conservation
A model of great ape history during the past 15 million years has been fashioned through the study of genetic variation in a large panel of humans, chimpanzees, gorillas and orangutans. The catalog of great ape genetic diversity, the most comprehensive ever, elucidates the evolution and population histories of great apes from Africa and Indonesia. The resource will likely also aid current and future conservation efforts, which strive to preserve natural genetic diversity in populations.
More than 75 scientists and wildlife conservationists from around the world assisted in the genetic analysis of 79 wild and captive-born great apes. The apes represent all six great ape species (chimpanzee, bonobo, Sumatran orangutan, Bornean orangutan, eastern gorilla and western lowland gorilla) and seven subspecies. Nine human genomes were included in the sampling.
Javier Prado-Martinez, working with Tomas Marques-Bonet at the Universitat Pompeu Fabra in Barcelona, Spain, and Peter H. Sudmant, with Evan Eichler at the University of Washington in Seattle, led the project. The report appears today, July 3, in the journal Nature.
"The research provided us the deepest survey to date of great ape genetic diversity with evolutionary insights into the divergence and emergence of great-ape species," noted Eichler, a UW professor of genome sciences and a Howard Hughes Medical Institute Investigator.
Genetic variation among great apes had been largely uncharted, due to the difficulty of obtaining genetic specimens from wild apes. Conservationists in many countries, some of them in dangerous or isolated locations, helped in this recent effort, and the research team credits them for the success of the project.
Sudmant, a UW graduate student in genome sciences, said, "Gathering this data is critical to understanding differences between great ape species, and separating aspects of the genetic code that distinguish humans from other primates." Analysis of great ape genetic diversity is likely to reveal ways that natural selection, population growth and collapse, geographic isolation and migration, climate and geological changes, and other factors shaped primate evolution.
Sudmant added that learning more about great ape genetic diversity also contributes to knowledge about disease susceptibility among various primate species. Such questions are important both to conservation efforts and to human health. The Ebola virus is responsible for thousands of gorilla and chimpanzee deaths in Africa, and HIV, the virus that causes AIDS, originated from SIV, the simian immunodeficiency virus.
Sudmant works in a lab that studies both primate evolutionary biology and neuropsychiatric diseases such as autism, schizophrenia, developmental delay, and cognitive and behavioral disorders.
"Because the way we think, communicate and act is what makes us distinctively human," Sudmant said, "we are specifically looking for the genetic differences between humans and other great apes that might confer these traits." Those species differences may direct researchers to portions of the human genome associated with cognition, speech or behavior, providing clues to which mutations might underlie neurological disease.
In a companion paper published this week in Genome Research, Sudmant and Eichler wrote that they inadvertently found the first genetic evidence in a chimpanzee of a disorder resembling Smith-Magenis syndrome, a disabling physical, mental and behavioral condition in humans. Strikingly, the veterinary records of this chimpanzee, named Suzie-A, matched the symptoms of human Smith-Magenis patients almost exactly: she was overweight, prone to rage, had a curved spine and died of kidney failure.
The discovery came about while researchers were exploring and comparing the accumulation of copy number variants during great ape evolution. Copy number variants are differences between individuals, populations or species in the number of times specific segments of DNA appear. Duplication and deletion of DNA segments have restructured the genomes of humans and great apes, and are behind many genetic diseases.
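As a rough illustration of how copy number can be read off sequencing data, the toy sketch below estimates copies from read depth in fixed genomic windows. The window depths, diploid baseline and labels are invented; real CNV calling involves far more correction and statistics than this.

    # Toy copy-number estimate from read depth in fixed genomic windows.
    # All numbers are invented for illustration; real CNV callers also handle
    # GC correction, mappability, segmentation and statistical testing.

    window_depths = [98, 102, 101, 205, 198, 99, 47, 52, 100]   # mean reads per window
    diploid_baseline = 100.0                                    # expected depth at 2 copies

    for i, depth in enumerate(window_depths):
        copies = round(2 * depth / diploid_baseline)
        state = {0: "homozygous deletion", 1: "deletion", 2: "normal"}.get(copies, "duplication")
        print(f"window {i}: depth={depth:>4}  estimated copies={copies}  ({state})")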
In addition to offering a view of the origins of humans and their disorders, the new resource of ape genetic diversity will help address the challenging plight of great ape species on the brink of extinction. The resource provides an important tool to enable biologists to identify the origin of great apes poached for their body parts or hunted down for bush meat. The research also explains why current zoo breeding programs, which have attempted to increase the genetic diversity of captive great ape populations, have resulted in captive ape populations that are genetically dissimilar to their wild counterparts.
"By avoiding inbreeding to produce a diverse population, zoos and conservation groups may be entirely eroding genetic signals specific to certain populations in specific geographic locations in the wild" Sudmant said. One of the captive-bred apes studied by the researchers, Donald, had the genetic makeup of two distinct chimpanzee subspecies, located >2000km away from each other.
The research also delineates the many changes that occurred along each of the ape lineages as they became separated from each other through migration, geological change and climate events. The formation of rivers, the partition of islands from the mainland, and other natural disturbances have all served to isolate groups of apes. Isolated populations may then be exposed to a unique set of environmental pressures, resulting in population fluctuations and adaptations depending on the circumstances.
Even though early human-like species were present at the same time as the ancestors of some present-day great apes, the researchers found that the evolutionary history of ancestral great ape populations was far more complex than that of humans. Compared to our closest relatives, chimpanzees, human history appears "almost boring," conclude Sudmant and his mentor Evan Eichler. The last few million years of chimpanzee evolutionary history are fraught with population explosions followed by implosions, demonstrating remarkable plasticity. The reasons for these fluctuations in chimpanzee population size, long before our own population explosion, remain unknown.
Read more at Science Daily
Insecticide Alters Honey Bee Genes
Once upon a time all honey bees had to worry about were silly old bears. Now there may be some hard evidence that a new class of insecticides called neonicotinoids could be weakening and killing bees. And since bees are critical to the production of more than a quarter of our food, new evidence of a danger is nothing to sneeze at.
The study, led by Reinhard Stöger of Nottingham University, demonstrated that just 2 parts per billion of the neonicotinoid called imidacloprid had an effect on the workings of some honey bee genes. Genes involved in combating toxins and other functions were affected so that cells basically had to work a lot harder. These kinds of changes are known to shorten the lifespan of fruit flies (the most studied insect in the world) and to reduce the numbers reaching adulthood.
So it's not that the insecticide is outright killing bees (unless they are exposed to a massive dose). It's a lot more subtle. The larvae of the honey bees in the study could still grow and develop in the presence of imidacloprid, the researchers explained, but their development was compromised. This also makes bees more vulnerable to other stresses, like disease or mites or even difficult weather. And since there are always other stresses, the insecticide puts bees at greater risk.
The study was published in the scientific journal PLOS ONE, and appears to support the recent decision by the European Commission to ban three neonicotinoids because they are suspected of killing bees. U.S. researchers are still trying to determine if low doses of neonicotinoids are causing enough effects to threaten bees.
Ironically, this class of insecticide was developed in the mid-1990s partially because they were less toxic to honey bees than the previously used organophosphate and carbamate insecticides, according to the U.S. Department of Agriculture.
Read more at Discovery News
Ancient Anchors from Punic Wars Found Off Sicily
A key episode of the Punic Wars has emerged from the waters near the small Sicilian island of Pantelleria as archaeologists discovered a cluster of more than 30 ancient anchors.
Found at a depth between 160 and 270 feet in Cala Levante, one of the island’s most scenic spots, the anchors date to more than 2,000 years ago.
According to Leonardo Abelli, an archaeologist from the University of Sassari, the anchors are startling evidence of the Romans’ and Carthaginians’ struggle to conquer the Mediterranean during the First Punic War (264 to 241 B.C.).
“They were deliberately abandoned. The Carthaginian ships were hiding from the Romans and could not waste time trying to retrieve heavy anchors at such depths,” Abelli told Discovery News.
Lying strategically between Africa and Sicily, Pantelleria became a bone of contention between the Romans and Carthaginians during the third century B.C.
Rome captured the small Mediterranean island in the First Punic War in 255 B.C., but lost it a year later.
In 217 B.C., in the Second Punic War, Rome finally regained the island, and even celebrated the event with commemorative coins and a holiday.
Following the first conquest in 255 B.C., Rome took control of the island with a fleet of over 300 ships.
“The Carthaginian ships that were stationed near Pantelleria had no other choice than to hide near the northern coast and try to escape. To do so, they cut the anchors free and left them in the sea. They also abandoned part of their cargo to lighten the ships and gain speed,” Abelli said.
Indeed, Abelli’s team found many jars in clusters of 4-10 pieces near the spectacular Punta Tracino, not far from where the anchors were found.
Two years ago, the same team found 3,500 Punic coins about 68 feet down. Dating between 264 and 241 B.C., the bronze coins featured the same iconography, suggesting that the money served for an institutional payment, possibly to sustain anti-Roman troops.
Carried on a Carthaginian ship headed to Sicily, the money was deliberately left on the bottom of the sea, in relatively shallow waters, with the hope of recovering it later.
Read more at Discovery News
Ancient Galaxy Holds Planet Chemistry Surprise
When did planets first form in the Universe? Though we’ve been finding hundreds of exoplanets and thousands of planet candidates in our Milky Way Galaxy, we need to look at processes in much more distant galaxies to find the earliest hints of our chemical ancestry.
Life as we know it evolves on a planet. Planets form from the debris left over when a star is born. Planetary formation requires elements heavier than hydrogen and helium, but the very first stars were made of just those two elements formed in the Big Bang. So, it had to take some time and several cycles of stellar life and death to build up the heavier elements through nuclear fusion and supernovae. But the question remains… how early in the Universe’s history were these elements around to form planets?
A group of astronomers led by Jens-Kristian Krogager, a Ph.D. candidate at the Niels Bohr Institute, took a detailed inventory of a very distant galaxy, seen as it was when the Universe was about 2.8 billion years old, around 11 billion years ago. (For reference, our sun is about 5 billion years old, so this was well before it formed in its own nascent cloud.)
The galaxy blocks some of the light from an even more distant quasar, so its spectrum can be studied for absorption lines. Individual elements in a gas can remove or block certain wavelengths of light from a background source, and for these kinds of systems, that tells you the redshift, giving you the distance to the galaxy. This galaxy also has spectral emission lines from gas that has been excited by radiation given off by its star-forming regions.
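The redshift itself comes from comparing the observed and rest wavelengths of a line, z = (lambda_observed - lambda_rest) / lambda_rest. Below is a minimal sketch of that calculation; the observed wavelength is an invented example value, not a measurement from this galaxy.

    # Redshift from a single absorption line: z = (obs - rest) / rest.
    # The observed wavelength here is an invented example value.

    MG_II_REST = 2796.35     # Mg II rest wavelength, in Angstroms
    observed = 9787.2        # hypothetical observed wavelength, in Angstroms

    z = (observed - MG_II_REST) / MG_II_REST
    print(f"Redshift z = {z:.2f}")   # ~2.5: light emitted when the Universe was young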
Using the Very Large Telescope in Chile and the Hubble Space Telescope, the astronomers looked at various emission and absorption lines of oxygen, nitrogen, zinc, iron, silicon and magnesium to accurately determine how much of these heavier and potentially planet-building elements existed in the gas forming new stars. They determined it to be about one-third of the heavy elements found in the sun. These elements had to have been formed by earlier generations of stars that lived and died, making way now for the potential of planet formation 6 billion years before our sun was even born.
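Astronomers usually express such abundances on a logarithmic scale relative to the sun, [M/H] = log10(Z / Z_sun). The one-liner below shows that roughly one-third of the solar value corresponds to about -0.5 dex; this is a generic conversion, not a figure quoted by the team.

    import math

    fraction_of_solar = 1.0 / 3.0                     # "about one-third of the heavy elements found in the sun"
    metallicity_dex = math.log10(fraction_of_solar)   # [M/H] = log10(Z / Z_sun)
    print(f"[M/H] ~ {metallicity_dex:.2f} dex")       # ~ -0.48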
The imaging results were added to give a more complete picture of the galaxy, which appears to be forming stars at a rate of about 13 solar masses per year (compared to our Galaxy’s paltry one solar mass per year). The galaxy is a small, elongated disk, probably seen nearly edge-on, with a mass of 2 billion solar masses, much smaller than the grand spirals and ellipticals we see in the Universe today.
The gas that was studied in absorption and emission lies well outside the disk, indicating that a “galactic fountain” is at work. This occurs when so much star formation creates a large number of supernovae that expel interstellar gas outside of the galaxy, thus shutting down the star formation. That gas can later “rain” back down on the disk, starting a new wave of formation.
Read more at Discovery News
Jul 2, 2013
Scientists Discover Molecular Communication Network in Human Stem Cells
Scientists at A*STAR's Genome Institute of Singapore (GIS) and the Max Planck Institute for Molecular Genetics (MPIMG) in Berlin (Germany) have discovered a molecular network in human embryonic stem cells (hESCs) that integrates cell communication signals to keep the cell in its stem cell state. These findings were reported in the June 2013 issue of Molecular Cell.
Human embryonic stem cells have the remarkable property that they can form all human cell types. Scientists around the world study these cells to be able to use them for medical applications in the future. Many factors are required for stem cells to keep their special state, among them the use of cell communication pathways.
Cell communication is of key importance in multicellular organisms. For example, the coordinated development of tissues in the embryo to become any specific organ requires that cells receive signals and respond accordingly. If there are errors in the signals, the cell will respond differently, possibly leading to diseases such as cancer. The communication signals used in hESCs activate a chain of reactions, called the extracellular signal-regulated kinase (ERK) pathway, within each cell, causing the cell to respond by activating genetic information.
Scientists at the GIS and MPIMG studied which genetic information is activated in the cell, and thereby discovered a network for molecular communication in hESCs. They mapped the kinase interactions across the entire genome, and discovered that ERK2, a protein that belongs to the ERK signaling family, targets important sites such as non-coding genes and histones, cell cycle, metabolism and also stem cell-specific genes.
The ERK signaling pathway involves an additional protein, ELK1, which interacts with ERK2 to activate the genetic information. Interestingly, the team also discovered that ELK1 has a second, totally opposite function. At genomic sites which are not targeted by ERK signaling, ELK1 silences genetic information, thereby keeping the cell in its undifferentiated state. The authors propose a model that integrates this bi-directional control to keep the cell in the stem cell state.
These findings are particularly relevant for stem cell research, but they might also help research in other related fields.
First author Dr Jonathan Göke from Stem Cell and Developmental Biology at the GIS said, "The ERK signaling pathway has been known for many years, but this is the first time we are able to see the full spectrum of the response in the genome of stem cells. We have found many biological processes that are associated with this signaling pathway, but we also found new and unexpected patterns such as this dual mode of ELK1. It will be interesting to see how this communication network changes in other cells, tissues, or in disease."
"A remarkable feature of this study is, how the information was extracted by computational means from the experimental data," said Prof Martin Vingron from MPIMG and co-author of this study.
Read more at Science Daily
Pluto's New Moons Get Names From Hell
Congratulations Pluto! We Earthlings have named two of your offspring Kerberos and Styx. You may have noticed that, in keeping with your Hellish roots, we’ve named your cute little bundles of rock after deities of the Underworld. What’s that? You’d rather one be named after a science fiction planet?! And Captain Kirk gave you his blessing?! Tough.
Yes, the day has come: Pluto’s two newly-discovered moons (originally designated “P4” and “P5”) have officially been named by the International Astronomical Union (IAU). And they took into consideration the Pluto Rocks! naming poll that was organized by the SETI Institute. But they didn’t exactly agree with the outright winner.
“The IAU is pleased to announce that today it has officially recognized the names Kerberos and Styx for the fourth and fifth moons of Pluto respectively,” the IAU said in a statement Tuesday. “These names were backed by voters in a recently held popular contest, aimed at allowing the public to suggest names for the two recently discovered moons of the most famous dwarf planet in the Solar System.”
Kerberos was discovered in 2011 and Styx in 2012. The pair were uncovered by Hubble Space Telescope surveys of the volume of space surrounding the dwarf planet in support of the 2015 NASA New Horizons flyby. The search was led by SETI Institute astronomers. New Horizons is currently flying through interplanetary space a little under 5 Astronomical Units (AU) from Pluto, but since launch in 2006, astronomers have grown concerned about rocky debris that could surround Pluto. Should the spacecraft slam into a previously unnoticed cloud of debris during the flyby, the mission could be wiped out.
The discovery of two more moons, in addition to Pluto’s original trio (Charon, Nix and Hydra), indicates there could be more rocky satellites out there. Although concern was growing, after extensive observational efforts the New Horizons team will keep the spacecraft on its planned trajectory, which will see a Pluto flyby in a little over two years’ time (although an emergency “bail-out” trajectory can be used if the Pluto neighborhood is deemed too rough).
The SETI Pluto moons naming poll was wildly successful, especially after William Shatner, a.k.a. Captain James T. Kirk, suggested one of the moons should be named “Vulcan” after the homeworld of Spock, his Star Trek second-in-command.
Shatner’s celebrity threw the poll into the limelight, ensuring a win for “Vulcan.” Although arguments were made for the suitability of the name, it didn’t quite fit. The astronomical naming convention has seen all the bodies in the Plutonian system named after mythological Greek and Roman deities of the Underworld.
“I was overwhelmed by the public response to the naming campaign,” said Mark Showalter, Senior Research Scientist at the SETI Institute. Nearly 500,000 votes were cast and 30,000 write-ins for name suggestions were received.
Hades, god of the underworld, who was also known as “Plouton” (meaning “Rich One”), was Latinized by the Romans to, simply, Pluto. In a nice little tidbit of astronomical history, the ninth planetary body from the sun was given that name by 11-year-old schoolgirl Venetia Burney shortly after the small world was discovered by Clyde Tombaugh at Lowell Observatory in 1930. The mythological name for the dark and cold world started a tradition that has seen Pluto’s biggest satellite named after Charon (the ferryman of Hades, who carries souls of the dead across the rivers Styx and Acheron) and two smaller moons named Nix (the Greek goddess of the night) and Hydra (the many-headed serpent).
Read more at Discovery News
200-Year-Old Fish Caught Off Alaska
In 1813, President James Madison occupied the White House, Americans occupied Fort George in Canada (a result of the War of 1812) and a rockfish was born somewhere in the North Pacific.
Two hundred years later, that same rockfish was caught off the coast of Alaska by Seattle resident Henry Liebman — possibly setting a record for the oldest rockfish ever landed.
Troy Tydingco of the Alaska Department of Fish and Game told the Daily Sitka Sentinel that the longevity record for the shortraker rockfish (Sebastes borealis) is 175 years, but that fish "was quite a bit smaller than the one Henry caught."
"That fish was 32-and-a-half inches [83 centimeters] long, where Henry's was almost 41 inches [104 cm] — so his could be substantially older," Tydingco said.
Samples of the rockfish have been sent to a lab in Juneau, where the actual age of Liebman's fish will be determined, according to the Sentinel.
Scientists can estimate the age of a fish by examining an ear bone known as the otolith, which contains growth rings similar to the annual age rings found in a tree trunk.
Animal longevity remains a puzzle to biologists. Some researchers have found that smaller individuals within a species tend to live longer than their bigger brethren. This may be related to the abnormal cell growth, and the accompanying risk of cancer, that comes with larger body size.
The longest-lived animal ever found was a quahog clam scooped from the waters off Iceland. The tiny mollusk was estimated to be 400 years old.
At 39.08 pounds (17.73 kilograms), Liebman's fish may also set a record for the largest rockfish ever caught.
"I knew it was abnormally big, [but I] didn't know it was a record until on the way back — we looked in the Alaska guidebook that was on the boat," Liebman told the Sentinel.
Read more at Discovery News
Real Doomsday: Earth Dead in 2.8 Billion Years
The Rolling Stones’ Mick Jagger crooned “Time Is On My Side” in the 1964 classic rock hit of the same title. Sadly, that’s not the case for habitable planets orbiting sun-like stars, according to a recent computer simulation by astrobiologist Jack O’Malley-James of the University of St Andrews in the United Kingdom.
“A combination of slow and rapid environmental changes will result in the extinction of all species on Earth, with the last inhabitants disappearing within 2.8 billion years from now,” O’Malley-James predicts.
He says that we’ve got about 2 billion years left before the oceans will have evaporated, leaving behind a desiccated sand-dune landscape as alien-looking as that of Mars. The last vestiges of life on Earth will have retreated to the few scattered reservoirs of water left on our planet.
As is well known from stellar evolution theory, the sun will remain stable over the next few billion years but will become steadily brighter as the fusion reactions in its core change.
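A widely used approximation for this slow brightening, from Gough (1981), is L(t) = L_now / (1 + (2/5)(1 - t/t_now)), with t_now of about 4.57 billion years. The short sketch below uses that formula to show the familiar rule of thumb of roughly 10 percent brightening per billion years; it is an illustrative model, not the one used in the new study.

    # Approximate solar brightening over time (Gough 1981):
    #   L(t) = L_now / (1 + (2/5) * (1 - t / t_now)),  t_now ~ 4.57 Gyr (sun's current age)
    T_NOW = 4.57  # Gyr

    def relative_luminosity(t_gyr):
        """Solar luminosity at age t_gyr, relative to today's value."""
        return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / T_NOW))

    for dt in (0, 1, 2, 3):  # billions of years from now
        print(f"+{dt} Gyr: L = {relative_luminosity(T_NOW + dt):.2f} x today")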
His modeling shows that within the next billion years, increased evaporation rates and future chemical reactions with rainwater will draw more and more carbon dioxide from the Earth’s atmosphere. The falling levels of carbon dioxide will lead to the extinction of plants and animals and Earth will become a world of microbes. At the same time, the Earth will be depleted of oxygen and will be drying out as the rising temperatures lead to the evaporation of the oceans.
“The far-future Earth will be very hostile to life by this point,” O’Malley-James says. “All living things require liquid water, so any remaining life will be restricted to pockets of liquid water, perhaps at cooler, higher altitudes (as with the lakes on Titan) or in caves or underground. This life will need to cope with many extremes like high temperatures and intense ultraviolet radiation.”
This gloomy forecast is sobering because there has been a lot of news about finding Earth-sized planets in the habitable zones around other stars. But what’s mostly overlooked is the temporal dimension. How old are the planets? What is their stage of evolution?
Though the sun burns as a main sequence star for 10 billion years, the window of opportunity for advanced life on Earth is about 25 percent of the sun’s lifetime, according to this latest model.
Those exoplanets where conditions have deteriorated to where life has moved underground (such as is likely the case with Mars) have feeble or no chemical biotracers to study from light-years away. “Dying Earths will have a nitrogen and carbon-dioxide atmosphere with methane being the only sign of active life,” O’Malley-James predicts.
Recent estimates for the number of Earth-like planets in the galaxy range from 17 billion to 100 billion. Let’s be especially conservative and say 10 billion are Earth clones. Most of these will orbit red dwarf stars that are far more long-lived than our sun. This leaves us with 1 billion Earths orbiting solar-type stars. But roughly 250 million of these are at a stage right now where they can support complex life, according to O’Malley-James’ model.
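The arithmetic behind that figure is simple enough to spell out. The sketch below just multiplies the article’s own rough assumptions together; none of the fractions are measured quantities.

    # Back-of-the-envelope reproduction of the estimate above. The inputs are
    # the article's rough assumptions, not measured quantities.

    earth_like_planets = 10e9        # conservative count of "Earth clones" in the galaxy
    fraction_sunlike_hosts = 0.10    # leaves ~1 billion around solar-type stars
    habitable_window = 0.25          # ~25% of a sun-like star's lifetime suits complex life

    currently_habitable = earth_like_planets * fraction_sunlike_hosts * habitable_window
    print(f"Roughly {currently_habitable:,.0f} Earth-like planets habitable right now")
    # -> Roughly 250,000,000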
Still, these are not bad odds for finding someone else out there in the galaxy.
I would further argue that alien civilizations orbiting a sun-like star are more likely to pursue interstellar colonization because of the comparatively short lifespan of their home star. And extraterrestrials living in binary systems (like Alpha Centauri) would be further motivated to pursue space-faring because they would want to explore habitable planets orbiting the companion star.
Read more at Discovery News
Jul 1, 2013
Wiggling Worms Make Waves in Gene Pool
The idea that worms can be seen as waveforms allowed scientists at Rice University to find new links in gene networks that control movement.
The work led by Rice biochemist Weiwei Zhong, which will appear online this week in the Proceedings of the National Academy of Sciences Early Edition, involved analyzing video records of the movement of thousands of mutant worms of the species Caenorhabditis elegans to identify the neuronal pathways that drive locomotion.
One result was the discovery of 87 genes that, when inactivated, caused movement defects in worms. Fifty of those genes had never been associated with such defects, and 37 have implications in human diseases, the researchers found.
Another discovery was the existence of several network modules among these genes. One module detects environmental conditions. Another resides in all "excitable cells" -- those types that respond to electrical signals -- in the worm's neurons, muscles and digestive tracts. Another coordinates signals in the motor neurons.
The team also uncovered new details about a protein-signaling pathway found in all animals, Zhong said.
Zhong said the study is the first to provide a system-level understanding of how neuronal signaling genes coordinate movement and shows the value of a quantitative approach to genetic studies. She said the approach could be useful in studies of gene-to-drug or drug-to-drug interactions.
What made the research possible is the fact that cameras and computers are able to see variations in movement that are too small for eyes and minds to notice, Zhong said. "The idea is that if a gene is required for maintaining normal movement and we pick a mutant, the computer should be able to detect the defects," she said.
"I'm very observant," she said, "and I thought I could tell the worms with abnormal behaviors. I was surprised to see there were so many things I missed that the computer picked up."
The Rice researchers, with help from associates at the California Institute of Technology and Howard Hughes Medical Institute (HHMI), analyzed 239 mutant C. elegans, a common worm used in studies since the 1970s. Including a set of "wild-type" C. elegans that was used as a baseline, the Rice lab studied more than 4,400 worms. Each type was ordered from the Caenorhabditis Genetics Center and separated by mutation.
The worms were filmed one at a time. Each was placed in a petri dish (seeded with E. coli bacteria for food) on a motorized platform and filmed by a computer-controlled camera/microscope. The computer re-centered the camera on the worms any time they moved near the edge of the camera's field of view.
Zhong said the computer tracked 13 points along the length of each worm to analyze 10 parameters of its sine wave-like movement: velocity, flex, frequency, amplitude and wavelength, both forward and backward. "Some moved slower; some moved faster; some had exaggerated body bends. But in our database, it all turns into numbers to describe the abnormalities," she said. "It gives us a detailed profile of the worm's movement that's almost like a fingerprint."
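To make the worm-as-waveform idea concrete, here is a small, hypothetical sketch of how amplitude and wavelength might be estimated from 13 tracked body points. The coordinates are synthetic, and this is not the Rice group’s actual analysis code.

    import numpy as np

    # 13 synthetic (x, y) points along a worm's body, posed as one sine-like bend.
    # Real coordinates would come from the video tracking described above.
    x = np.linspace(0.0, 1.0, 13)              # position along the body
    y = 0.12 * np.sin(2 * np.pi * x / 0.8)     # displacement of the body from its midline

    amplitude = (y.max() - y.min()) / 2.0      # half the peak-to-peak bend

    # Wavelength from the spacing of zero crossings (half a wave between crossings).
    signs = np.sign(y)
    crossings = x[:-1][signs[:-1] * signs[1:] < 0]   # x just before each sign change
    wavelength = 2.0 * np.mean(np.diff(crossings)) if len(crossings) > 1 else float("nan")

    print(f"amplitude ~ {amplitude:.2f}, wavelength ~ {wavelength:.2f}")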
As a practical matter, each worm was filmed for four minutes. Even at that, it took nearly a year to capture all 4,400 mutants in motion.
The Rice researchers analyzed at least 10 worms of each mutant type to see if their particular mutations caused the animals to move in similar ways -- which, for the most part, they did. Then they analyzed all mutant data to see whether different mutants move in similar ways. "If they have the same symptoms, then we think these genes are probably involved in the same disorder," Zhong said.
To find how gene networks control particular movements, the team cross-matched metrics that were captured by the computer with data about each gene. "Once we knew how many genes were required for maintaining normal locomotion, we then tried to figure out how these genes interact with each other, how they function together as networks," she said.
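One common way to build such networks, sketched hypothetically below, is to treat each mutant’s set of movement measurements as a profile and connect genes whose profiles are strongly correlated. The gene names, numbers and correlation cutoff here are invented and are not taken from the paper.

    import numpy as np

    # Hypothetical movement profiles: one entry per mutant gene, one value per
    # movement parameter (velocity, flex, frequency, ...). All values invented.
    profiles = {
        "geneA": [0.8, 1.2, 0.5, 0.9],
        "geneB": [0.7, 1.1, 0.6, 1.0],
        "geneC": [-0.9, 0.2, 1.5, -0.4],
    }

    THRESHOLD = 0.9   # arbitrary correlation cutoff for drawing an edge

    genes = list(profiles)
    edges = []
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            r = np.corrcoef(profiles[g1], profiles[g2])[0, 1]
            if r > THRESHOLD:
                edges.append((g1, g2, round(float(r), 2)))

    print("Inferred network edges (gene, gene, correlation):", edges)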
The computed gene networks showed interesting features, she said. "Some genes are closely connected to each other but loosely connected to others. When we grouped them, we found several communities," she said. One appears to sense environment via sensory neurons; a second connects neurons, muscles and the digestive tract, "probably encoding some basic machinery in excitable cells." The third network contains genes in "the motor neurons that we expected," Zhong said.
She was most interested to see that the third network revealed evidence that a protein known as G-alpha-Q that also appears in other species -- including humans -- has a previously unknown target in a signaling pathway that regulates locomotion. The team conducted further experiments to confirm the existence of the new target gene, PLC-gamma. She said previous studies likely missed this target because they were too coarse to detect the subtle movement abnormalities caused by a defect in PLC-gamma.
Read more at Science Daily
Curious Mix of Precision and Brawn in a Pouched Super-Predator
A bizarre, pouched super-predator that terrorised South America millions of years ago had huge sabre-like teeth but its bite was weaker than that of a domestic cat, new research shows.
Australian and American marsupials are among the closest living relatives of the extinct Thylacosmilus atrox, which had tooth roots extending rearwards almost into its small braincase.
"Thylacosmilus looked and behaved like nothing alive today," says UNSW palaeontologist, Dr Stephen Wroe, leader of the research team.
"To achieve a kill the animal must have secured and immobilised large prey using its extremely powerful forearms, before inserting the sabre-teeth into the windpipe or major arteries of the neck -- a mix of brute force and delicate precision."
The iconic North American sabre-toothed 'tiger', Smilodon fatalis, is often regarded as the archetypal mammalian super-predator.
However, Smilodon -- a true cat -- was just the end point in one of at least five independent 'experiments' in sabre-tooth evolution through the Age of Mammals, which spanned some 65 million years.
Thylacosmilus atrox is the best preserved species of one of these evolutionary lines -- pouched sabre-tooths that terrorised South America until around 3.5 million years ago.
For its size, its huge canine teeth were larger than those of any other known sabre-tooth.
Smilodon's killing behaviour has long attracted controversy, but scientists now mostly agree that powerful neck muscles, as well as jaw muscles, played an important role in driving the sabre-teeth into the necks of large prey.
Little was known about the predatory behaviour in the pouched Thylacosmilus.
To shed light on this super-predator mystery, Dr Wroe's team of Australian and US scientists constructed and compared sophisticated computer models of Smilodon and Thylacosmilus, as well as a living conical-toothed cat, the leopard.
These models were digitally 'crash-tested' in simulations of biting and killing behaviour. The results are published in the journal PLoS ONE.
"We found that both sabre-tooth species were similar in possessing weak jaw-muscle-driven bites compared to the leopard, but the mechanical performance of the sabre-tooths skulls showed that they were both well-adapted to resist forces generated by very powerful neck muscles," says Dr Wroe.
"But compared to the placental Smilodon, Thylacosmilus was even more extreme."
"Frankly, the jaw muscles of Thylacosmilus were embarrassing. With its jaws wide open this 80-100 kg 'super-predator' had a bite less powerful than a domestic cat. On the other hand -- its skull easily outperformed that of the placental Smilodon in response to strong forces from hypothetical neck muscles."
"Bottom line is that the huge sabres of Thylacosmilus were driven home by the neck muscles alone and -- because the sabre-teeth were actually quite fragile -- this must have been achieved with surprising precision."
"For Thylacosmilus -- and other sabre-tooths -- it was all about a quick kill."
Read more at Science Daily
Gettysburg: What If the South Had Won?
The rolling hills and forested ridges of Gettysburg, Pa., hold many stories about the clash of armies that occurred 150 years ago this week. But perhaps the most enduring is what would have happened if the South had won.
Would the direction of the war have shifted in favor of the Confederacy, or just prolonged its agony by a few more months? Would President Lincoln have been re-elected the following year, or would he have been turned out by a peace and accommodation movement led by Democrats?
Tens of thousands of visitors will be descending on Gettysburg National Park this week to commemorate the deadliest land battle in U.S. history, and remember the men who died there. At the same time, scholars of the Civil War continue to ponder the importance of this three-day fight that bloodied both sides, but led the Confederacy to retreat back to Virginia.
One historian believes the battle between Confederate General Robert E. Lee and the Union’s Army of the Potomac, led by General George Meade, truly was decisive.
“If Lee had been victorious, the Army of the Potomac would have dissolved,” said Alan Guelzo, history professor at Gettysburg College and author of the new book “Gettysburg: The Last Invasion.” “There were a number of soldiers who wrote before the battle about how the army had reeled from defeat to defeat, and if it happened one more time they would desert.”
Guelzo firmly believes that the battle was decisive from a political standpoint as well. The Union army had lost at Chancellorsville weeks earlier, and Lincoln was facing trouble across the country. Not only did Lincoln have to manage the war, he also had to maintain support for his agenda of abolishing slavery. That wasn’t as popular as we may believe today.
After the Emancipation Proclamation was issued in 1862, the Republicans lost 36 members in the House of Representatives as well as the governorships of New York and New Jersey, Guelzo said. In the fall of 1863, the governorships of several states, including Ohio and Pennsylvania, were in danger of turning from Republican to Democrat. Had that occurred, Guelzo believes, those states would likely have recalled their militias from the Union army, leaving it weaker against the Confederates.
A loss at Gettysburg would have given the pro-peace Democrats the upper hand, he said.
“The Democrats believed that Lincoln and the Republicans were a collection of radicals who were as much to blame as secessionists themselves,” Guelzo said. “They believed the Republicans were radical abolitionists, and what we need to do is get the moderate people together.”
Ten days after the battle of Gettysburg, which lasted from July 1 to July 3, 1863, the North began drafting young men, leading to the “draft riots” in New York, Boston and smaller cities like Toledo. Had Meade lost at Gettysburg and Lee begun a military campaign in Pennsylvania, “you can imagine the political fallout in the coming weeks, and it’s not going to be good for the Union,” Guelzo said.
As with all “what-if” games of history, not all historians agree with Guelzo’s scenarios.
“In the long term, the north had a winning strategy,” said Elizabeth Varon, professor of history at the University of Virginia. “They had the numbers and the resources.”
Varon notes that while Meade was ousted after Gettysburg, Gen. Ulysses S. Grant was winning victories in the western part of the country, such as at Vicksburg.
“The superior leadership of Lincoln as president, and the superior generalship and command harmony of Lincoln and his team in the spring of 1864, was decisive,” Varon said. “People raise the issue of Southern morale, and it would have been good (with a victory), but once the Union had a team of generals who worked well with Lincoln and brought its advantages to bear, you were going to have a Union victory.”
Meade’s victory -- at a cost of 45,000 to 50,000 killed, wounded or missing on both sides -- sent Lee back home.
“The war in 1864 is being fought in Virginia,” said Ian Isherwood, assistant director of the Civil War Institute in Gettysburg, Pa. “That means the south is defending its soil. That’s a major difference.”
As for the battle itself, many historians look back at the famous “Pickett’s Charge,” in which 12,500 men under Confederate General George Pickett attacked 3,500 Union troops on Cemetery Ridge. The Union line held, and the disastrous engagement on July 3 has since been referred to as the high-water mark of the Confederacy. But Guelzo said the charge wasn’t the only moment that could have tipped the battle, the war, and possibly American history, the other way.
Read more at Discovery News
Diving Into Saturn's Cataclysmic Storms
If you ask people to name a planet with massive storms, the first that pops into most people’s minds is Jupiter, with its great red spot. But Jupiter is certainly not the only gas giant with inclement weather. Our solar system’s second largest planet, Saturn, also shows some vast storms amidst its clouds, and they’re no less dramatic than Jupiter’s.
Once every Saturnian year (roughly 29 Earth years), Saturn is ravaged by storms so huge that they can be easily seen from Earth, over 1.2 billion km (745 million miles) away. Known as great white spots, these storms seem to coincide with Saturn’s summer solstice, but being so infrequent, only a handful have ever been observed.
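Some quick arithmetic shows why such a storm is visible across that distance. Assuming the storm’s head is on the order of 10,000 km across (an illustrative figure, not one given here), its angular size from Earth works out to roughly an arcsecond or two, within reach of decent telescopes; the planet-circling tail spans a far larger angle still.

```python
import math

DISTANCE_KM = 1.2e9          # Earth-Saturn distance quoted in the article
STORM_SIZE_KM = 10_000       # assumed width of a great white spot's head (illustrative)

ARCSEC_PER_RAD = 180 / math.pi * 3600
angular_size = STORM_SIZE_KM / DISTANCE_KM * ARCSEC_PER_RAD
print(f"{angular_size:.1f} arcseconds")   # roughly 1.7 arcseconds
```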
As a great white spot ravages its way through Saturn’s upper cloud decks, it leaves a trail of carnage in its wake — encircling the entire planet, and causing immense lightning strikes.
Even though the Cassini probe has been in orbit around Saturn for years now, sending us both fascinating data and beautiful images, Saturn’s periodic storms are still quite poorly understood. And when things are poorly understood, that’s when the theoreticians can get to work.
A group of researchers from the Planetary Sciences Group, at the University of the Basque Country, took a closer look at this puzzle. Led by Enrique García Melendo from the Fundació Observatori Esteve Duran in Catalonia, they used Cassini data to construct a mathematical model of Saturn’s storms.
Saturn’s white spots disrupt the planet’s atmosphere globally, and typically consist of a “head” that leaves a trail behind it in the planet’s atmosphere. To get a better idea of what processes may be happening in Saturn’s atmosphere, researchers took a closer look at the head of a great white spot — and at the focus of the storm.
The focus is the storm’s source; the point where it all started, buried deep within the planet’s atmosphere. In fact, the focus of a great white spot originates around 300 km (186 miles) below Saturn’s familiar yellow cloud decks. This causes a huge upwelling of material from much deeper inside Saturn’s atmosphere.
Cassini imaging used in the study also shows that winds in the storm’s head reach speeds of around 500 km/h (310 mph), and that the highest of those winds are found around 40 km (25 miles) above the regular cloud deck. In the head region of these storms, the raging storm interacts with the rest of the planet’s atmosphere, creating intense sustained winds.
Saturn’s white spots have been observed before to show an increase in a gas called phosphine, and a decrease in acetylene, as compared with the natural state of Saturn’s clouds. The latest study seems to confirm that one chemical which is churned to the surface in a great white spot is water — meaning that these white clouds on Saturn are made of similar stuff to the white clouds here on Earth.
It’s this upwelling of water vapor, transported up to the highest levels of the planet’s atmosphere, that releases such huge amounts of energy. The upwelling interacts with Saturn’s ferocious prevailing winds, which can reach peak speeds of up to 1800 km/h (1118 mph) — the second strongest in the solar system. The interaction between the storm and these savage winds serves to power Saturn’s stormy summer season.
“We did not expect to find such violent circulation in the region of the development of the storm, which is a symptom of the particularly violent interaction between the storm and the planet’s atmosphere,” explained García, whose model managed to accurately recreate the storm in a computer simulation.
Read more at Discovery News
Jun 30, 2013
The Quantum Secret to Alcohol Reactions in Space
Chemists have discovered that an 'impossible' reaction at cold temperatures actually occurs with vigour, which could change our understanding of how alcohols are formed and destroyed in space.
To explain the impossible, the researchers propose that a quantum mechanical phenomenon, known as 'quantum tunnelling', is revving up the chemical reaction. They found that the rate at which the reaction occurs is 50 times greater at minus 210 degrees Celsius than at room temperature.
It's the harsh environment that makes space-based chemistry so difficult to understand; the extremely cold conditions should put a stop to chemical reactions, as there isn't sufficient energy to rearrange chemical bonds. It has previously been suggested that dust grains -- found in interstellar clouds, for example -- could lend a hand in bringing chemical reactions about.
The idea is that the dust grains act as a staging post for the reactions to occur, with the ingredients of complex molecules clinging to the solid surface. However, last year, a highly reactive molecule called the 'methoxy radical' was detected in space and its formation couldn't be explained in this way.
Laboratory experiments showed that when an icy mixture containing methanol was blasted with radiation -- as would occur in space with intense radiation from nearby stars, for example -- methoxy radicals weren't released in the emitted gases. The findings suggested that methanol gas was involved in the production of the methoxy radicals found in space, rather than any process on the surface of dust grains. But this brings us back to the problem of how the gases can react under extremely cold conditions.
"The answer lies in quantum mechanics," says Professor Dwayne Heard, Head of the School of Chemistry at the University of Leeds, who led the research.
"Chemical reactions get slower as temperatures decrease, as there is less energy to get over the 'reaction barrier'. But quantum mechanics tells us that it is possible to cheat and dig through this barrier instead of going over it. This is called 'quantum tunnelling'."
To succeed in digging through the reaction barrier, incredibly cold temperatures -- like those that exist in interstellar space and in the atmosphere of some planetary bodies, such as Titan -- are needed. "We suggest that an 'intermediary product' forms in the first stage of the reaction, which can only survive long enough for quantum tunnelling to occur at extremely cold temperatures," says Heard.
The researchers were able to recreate the cold environment of space in the laboratory and observe a reaction of the alcohol methanol and an oxidising chemical called the 'hydroxyl radical' at minus 210 degrees Celsius. They found that not only do these gases react to create methoxy radicals at this incredibly cold temperature, but that the rate of reaction is 50 times faster than at room temperature.
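To see how strange that is in classical terms, compare it with the Arrhenius expectation, k = A exp(-Ea/RT): for any appreciable barrier, cooling from room temperature to minus 210 degrees Celsius (about 63 K) should slow the reaction by many orders of magnitude, not speed it up 50-fold. The barrier height in the sketch below is an assumed, illustrative value.

```python
import math

R = 8.314      # gas constant, J / (mol K)
Ea = 30_000    # assumed, illustrative reaction barrier (~30 kJ/mol)

def arrhenius_ratio(T_cold, T_warm, Ea=Ea):
    """k(T_cold) / k(T_warm) for the same pre-exponential factor."""
    return math.exp(-Ea / (R * T_cold)) / math.exp(-Ea / (R * T_warm))

# Classical expectation: the reaction essentially freezes at 63 K...
print(arrhenius_ratio(T_cold=63, T_warm=298))    # ~2e-20
# ...whereas the Leeds team measured it running about 50x FASTER,
# pointing to tunnelling through the barrier rather than climbing over it.
```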
To achieve this, the researchers had to create a new experimental setup. "The problem is that the gases condense as soon as they hit a cold surface," says Robin Shannon from the University of Leeds, who performed the experiments. "So we took inspiration from the boosters used for the Apollo Saturn V rockets to create collimated jets of gas that could react without ever touching a surface."
Read more at Science Daily
Surprise! Megaquakes Caught Sinking Volcanoes
We already know that megaquakes can level cities and launch tsunamis, but they've now been implicated in the sinking of volcanoes in Chile and Japan as well.
Two teams of scientists working independently on volcanoes in Japan and Chile discovered that after mega earthquakes in 2011 and 2010, some nearby volcanoes dropped as much as 15 centimeters (6 inches). The two teams have published their findings in a pair of papers in the June 30 issue of the journal Nature Geoscience.
"The observations are so similar in both places," commented Matthew Pritchard of Cornell University, the lead author of one of the papers. "It's just a spectacular observation."
In both locations the scientists used satellite data to look for deforming ground around the volcanoes before and after the massive 2011 magnitude 9.0 Tohoku earthquake in Japan and the 2010 magnitude 8.8 Maule earthquake in Chile. In the Maule case, Pritchard's team wasn't even looking for subsidence. They were on an entirely different search -- for any signs of increased volcanic activity -- when they stumbled onto the changes in the sinking volcanoes.
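In essence, the deformation search amounts to differencing ground-displacement maps from before and after the quake and flagging coherent patches that dropped by more than some threshold. The toy sketch below does this on synthetic data; it is only an illustration of the idea, not the radar-interferometry workflow used in the published studies.

```python
import numpy as np

def subsidence_patches(before, after, threshold_m=0.10):
    """Toy deformation check (illustrative): difference two ground-displacement
    grids and return a mask of pixels that dropped by more than `threshold_m`."""
    drop = before - after            # positive where the ground went down
    return drop > threshold_m

# Synthetic example: a 100 x 100 grid with a bowl of up to 15 cm of subsidence.
rng = np.random.default_rng(0)
before = rng.normal(0, 0.01, size=(100, 100))            # 1 cm of noise
after = before.copy()
yy, xx = np.mgrid[:100, :100]
bowl = 0.15 * np.exp(-((xx - 50)**2 + (yy - 50)**2) / (2 * 10**2))
after -= bowl                                             # the volcano sinks
mask = subsidence_patches(before, after)
print(mask.sum(), "pixels show more than 10 cm of subsidence")
```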
"There's probably nothing special about it," Pritchard told DNews. Similar subsidence is probably happening after the biggest quakes in Alaska, Indonesia and other major subduction zones in which megaquakes are possible. These two events are just the first to be detected because they happened when the right instruments were in orbit to get the data, he explained.
Pritchard also pointed out that the sinking ground is very local, and has nothing to do with the larger, overall mountain building going on in these places, caused by the colliding tectonic plates which are the cause of the megaquakes and volcanoes in the first place.
As for why the volcanoes sank at all, nobody is sure, but they have some ideas. The Japanese researchers Youichiro Takada and Yo Fukushima of Kyoto University suspect that the violent quaking caused subsidence of magma and heat-weakened rocks inside the five volcanoes found to have subsided, which then caused the ground above to fall as well.
For their part, Pritchard's team wonders if the megaquake rattled loose mineral deposits in the hydrothermal system of five Chilean volcanoes – essentially clearing the pipes – so that trapped fluids could escape and deflate the volcanoes somewhat. It's even possible that there were different mechanisms for the sinking in the different locations, all of which could have implications for how the volcanoes behave in the future.
Read more at Discovery News