An international team of researchers from England and the Charité -- Universitätsmedizin Berlin has presented new findings on the function of muscle stem cells, published in the current issue of the journal Nature Genetics. The researchers investigated several families with children suffering from a progressive muscle disease. Using a genetic analysis technique known as "next-generation sequencing," the scientists identified a defective gene, MEGF10, as responsible for the muscle weakness.
The children suffer from severe weakness of the body musculature and of internal organs such as the diaphragm, the main breathing muscle. As a consequence, the young patients can move only in a wheelchair and need continuous artificial respiration. They often have to be tube-fed as well, since the musculature of the esophagus does not work properly.
But what role does the newly discovered gene play in muscle growth? In healthy humans, muscle stem cells, so-called "satellite cells," attach to muscle fibers and normally remain inactive. If a muscle fiber is damaged or muscle growth is stimulated, as during muscle training, the satellite cells start to divide, fuse with the muscle fiber and thereby drive muscle growth.
This process is disrupted in the affected children. In them, the mutated MEGF10 gene can no longer produce the protein responsible for attaching the satellite cells. These cells therefore cannot adhere to the muscle fiber, and the muscle can no longer be repaired.
Prof. Markus Schuelke from the NeuroCure Clinical Research Center of the Cluster of Excellence NeuroCure and the Department of Neuropediatrics of the Charité and Prof. Colin A. Johnson from the Institute of Molecular Medicine of the University of Leeds, who jointly directed this research project, have emphasized the importance of these new methods for genome analysis and give a positive outlook for the future. "This is good news for families with unexplained rare genetic disorders. These methods enable us to sequence hundreds or even thousands of genes at the same time and to discover novel genetic defects even in single patients quickly and cost-effectively," explains Markus Schuelke.
Read more at Science Daily
Nov 26, 2011
Smart Phone Power Consumption Cut by More Than 70 Percent
Researchers at Aalto University in Finland have designed a network proxy that can cut the power consumption of 3G smart phones by up to 74 percent. The device enhances performance and significantly reduces power usage by serving as a middleman between mobile devices and the Internet, handling the majority of the data transfer for the smart phone. Historically, the high energy requirements of mobile phones have slowed the adoption of mobile Internet services in developing countries.
This new solution is particularly valuable in developing countries because it provides significantly more effective Internet access to a much larger number of people. "At the moment, only a small percentage can access the Internet from a wired connection, but 90 percent of the African population lives in areas with mobile phone network coverage. Mobile phone usage is increasing rapidly; however, the use of mobile Internet services is hindered by users not having access to the power grid to recharge their phones," says Professor Jukka Manner from Aalto University.
The case study conducted at Aalto University examined Internet usage in three East African countries: Tanzania, Uganda and Kenya. Researchers developed energy-saving solutions for smart phones that could be easily deployed across a mobile network, particularly in areas without reliable sources of electricity. In addition to the new, optimized proxy solution, the researchers found that the power consumption of smart phones could also be significantly reduced by mobile-optimized websites, HTTP compression and more efficient use of data caching.
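The software-side savings mentioned above are easy to see in miniature. The sketch below is illustrative only and assumes a generic web server (example.com is a stand-in, and network access is required; the Aalto proxy itself is not a public service): it asks for a gzip-compressed response, shrinking the number of bytes the phone's radio has to carry.

```python
import gzip
import urllib.request

# Illustrative only: example.com stands in for any web server, and this
# requires network access. The Aalto proxy itself is not a public service.
URL = "http://example.com/"

# Ask the server for a gzip-compressed response.
req = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    raw = resp.read()
    if resp.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(raw)
    else:
        body = raw

print(f"bytes over the radio: {len(raw)}; bytes after decompression: {len(body)}")
```

Fewer bytes over the air generally means less time with the 3G radio in its high-power state, which is where most of a phone's networking energy goes.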
Read more at Science Daily
Nov 25, 2011
Scientists Turn On Fountain of Youth in Yeast
Collaborating researchers at Johns Hopkins and National Taiwan University have successfully manipulated the life span of common, single-celled yeast organisms by figuring out how to remove and restore protein functions related to yeast aging.
A chemical variation of a "fuel-gauge" enzyme that senses energy in yeast acts like a life span clock: It is present in young organisms and progressively diminished as yeast cells age.
In a report in the September 16 edition of Cell, the scientists describe their identification of a new level of regulation of this age-related protein variant, showing that when they remove it, the organism's life span is cut short and when they restore it, life span is dramatically extended.
In the case of yeast, the discovery reveals molecular components of an aging pathway that appears related to one that regulates longevity and lifespan in humans, according to Jef Boeke, Ph.D., professor of molecular biology, genetics and oncology, and director of the HiT Center and Technology Center for Networks and Pathways, Johns Hopkins University School of Medicine.
"This control of longevity is independent of the type described previously in yeast which had to do with calorie restriction," Boeke says. "We believe that for the first time, we have a biochemical route to youth and aging that has nothing to do with diet." The chemical variation, known as acetylation because it adds an acetyl group to an existing molecule, is a kind of "decoration" that goes on and off a protein -- in this case, the protein Sip2 -- much like an ornament can be put on and taken off a Christmas tree, Boeke says. Acetylation can profoundly change protein function in order to help an organism or system adapt quickly to its environment. Until now, acetylation had not been directly implicated in the aging pathway, so this is an all-new role and potential target for prevention or treatment strategies, the researchers say.
The team showed that acetylation of the protein Sip2 affected longevity defined in terms of how many times a yeast cell can divide, or "replicative life span." The normal replicative life span in natural yeast is 25 divisions. In yeast genetically modified by the researchers to restore the chemical modification, life span extended to 38, an increase of about 50 percent.
The researchers were able to manipulate the yeast life span by mutating certain chemical residues to mimic the acetylated and deacetylated forms of the protein Sip2. They worked with live yeast in a dish, measuring and comparing the life spans of natural and genetically altered types by removing buds from the yeast every 90 minutes. The average life span in normal yeast is about 25 generations, which meant the researchers removed 25 newly budded cells from the mother yeast cell. As yeast cells age, each new generation takes longer to develop, so each round of the experiment lasted two to four weeks.
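As a quick illustration of the arithmetic behind the reported numbers, here is a minimal Python sketch; the bud counts are hypothetical stand-ins, not the study's data, chosen only so the means land at roughly 25 and 38 generations.

```python
# Hypothetical bud counts per mother cell (stand-ins for the real data):
# each number is how many daughter buds a mother cell produced before
# dying -- its "replicative life span."
wild_type = [24, 26, 25, 23, 27]      # natural yeast, mean 25
acetyl_mimic = [37, 39, 38, 36, 40]   # acetylation-restored mutant, mean 38

def mean(xs):
    return sum(xs) / len(xs)

normal, extended = mean(wild_type), mean(acetyl_mimic)
increase = 100 * (extended - normal) / normal
print(f"mean life span: {normal:.0f} vs {extended:.0f} generations "
      f"({increase:.0f}% increase)")  # ~52%, the article's "about 50 percent"
```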
"We performed anti-aging therapy on yeast," says the study's first author, Jin-Ying Lu, M.D., Ph.D., of National Taiwan University. "When we give back this protein acetylation, we rescued the life span shortening in old cells. Our next task is to prove that this phenomenon also happens in mammalian cells."
The research was supported by the National Science Council, National Taiwan University Hospital, National Taiwan University, Liver Disease Prevention & Treatment Research Foundation of Taiwan, and the NIH Common Fund.
Read more at Science Daily
Nov 24, 2011
Astronomers Take a Photograph of the Youngest Supernova Right After Its Explosion
Astronomers have obtained a never-before-achieved radio-astronomical photograph of the youngest supernova. Fourteen days after the explosion of a star in the Whirlpool Galaxy (M51) last June, coordinated telescopes across Europe captured the cosmic explosion in great detail -- equivalent to seeing a golf ball on the surface of the Moon.
The University of Valencia and the Institute of Astrophysics of Andalusia took part in this research. The results will be published this week in the journal Astronomy & Astrophysics. The telescopes participating in the research were NASA's telescopes at Robledo de Chavela (Madrid) and those of the National Geographic Institute in Yebes (Guadalajara).
At barely 23 million light years from Earth, in the constellation of Canes Venatici, the Whirlpool Galaxy became the scene of one of the most violent phenomena in the universe, despite its beautiful appearance: the death of a star in a supernova explosion. Combining several telescopes spread over Spain, Sweden, Germany and Finland, with the data processed by a supercomputer in the Netherlands, gave the observation the capacity of a telescope measuring thousands of kilometres across. The result is a remarkably sharp image, with a hundred times more detail than the Hubble Space Telescope can achieve. This technique, known as radio interferometry, allowed Iván Martí and his team to photograph the supernova SN2011dh just days after its explosion.
This experiment set a record: 'This is the earliest high-resolution image of a supernova explosion. From this image, we can determine the expansion velocity of the shock wave created in the explosion,' states Iván Martí from the Max Planck Institute for Radio Astronomy in Bonn (Germany). Jon Marcaid, full professor of Astronomy and Astrophysics at the University of Valencia, adds that 'with this precision, we can look for the progenitor star in earlier photographs of the galaxy, and better plan our future observations.'
Supernovae are among the most spectacular phenomena in the universe. Antxon Alberdi, from the Institute of Astrophysics of Andalusia, states that 'if we are lucky, as we were this time, we can obtain very clear, high-resolution images of supernovae thanks to the VLBI (Very Long Baseline Interferometry) technique.'
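The "telescope measuring thousands of kilometres" works because an interferometer's angular resolution scales roughly as the observing wavelength divided by its longest baseline. The back-of-the-envelope Python sketch below uses assumed round numbers (a 22 GHz observation and a 10,000 km baseline, not the paper's actual setup) to show the sub-milliarcsecond detail such baselines reach, alongside the angular size of the article's golf ball on the Moon.

```python
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds

def diffraction_limit_mas(wavelength_m, baseline_m):
    """Rough angular resolution of an interferometer: theta ~ lambda / D."""
    return (wavelength_m / baseline_m) * RAD_TO_MAS

# Assumed values for illustration (not taken from the paper):
wavelength = 0.014  # 14 mm, i.e. a 22 GHz radio observation
baseline = 1.0e7    # 10,000 km between the most distant telescopes

print(f"VLBI resolution: {diffraction_limit_mas(wavelength, baseline):.2f} mas")

# For comparison, the angular size of a golf ball on the Moon:
golf_ball = 0.043        # metres
moon_distance = 3.844e8  # metres
print(f"golf ball on the Moon: {golf_ball / moon_distance * RAD_TO_MAS:.4f} mas")
```

The golf-ball comparison in press releases is loose, but the formula makes the key point: the longer the baseline, the finer the detail, which is why spreading telescopes across a continent beats any single dish.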
Read more at Science Daily
Ancient Environment Found to Drive Marine Biodiversity
Much of our knowledge about past life has come from the fossil record -- but how accurately does that reflect the true history and drivers of biodiversity on Earth?
"It's a question that goes back a long way to the time of Darwin, who looked at the fossil record and tried to understand what it tells us about the history of life," says Shanan Peters, an assistant professor of geoscience at the University of Wisconsin-Madison.
In fact, the fossil record can tell us a great deal, he says in a new study. In a report published on Nov. 25 in Science magazine, he and colleague Bjarte Hannisdal, of the University of Bergen in Norway, show that the evolution of marine life over the past 500 million years has been robustly and independently driven by both ocean chemistry and sea level changes.
The time period studied covered most of the Phanerozoic eon, which extends to the present and includes the evolution of most plant and animal life.
Hannisdal and Peters analyzed fossil data from the Paleobiology Database along with paleoenvironmental proxy records and data on the rock record that link to ancient global climates, tectonic movement, continental flooding, and changes in biogeochemistry, particularly with respect to oxygen, carbon, and sulfur cycles. They used a method called information transfer that allowed them to identify causal relationships -- not just general associations -- between diversity and environmental proxy records.
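The "information transfer" approach is, at heart, a directional measure of predictability between time series. Below is a minimal Python sketch of one standard such measure, transfer entropy, on discretized series; the function name and toy data are illustrative only, and the paper's actual analysis is considerably more sophisticated (it must cope with irregular, noisy geological records).

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """TE(X -> Y) in bits: how much the past of X improves prediction of Y
    beyond Y's own past, after equal-frequency discretization."""
    def discretize(s):
        ranks = np.argsort(np.argsort(s))  # rank-based binning
        return (ranks * bins) // len(s)    # values 0 .. bins-1

    xd, yd = discretize(np.asarray(x)), discretize(np.asarray(y))
    triples = list(zip(yd[1:], yd[:-1], xd[:-1]))  # (y_{t+1}, y_t, x_t)
    n = len(triples)
    c_full = Counter(triples)
    c_yx = Counter((yt, xt) for _, yt, xt in triples)
    c_yy = Counter((y1, yt) for y1, yt, _ in triples)
    c_y = Counter(yt for _, yt, _ in triples)

    te = 0.0
    for (y1, yt, xt), k in c_full.items():
        # p(y1 | yt, xt) / p(y1 | yt), expressed with raw counts
        te += (k / n) * np.log2(k * c_y[yt] / (c_yx[(yt, xt)] * c_yy[(y1, yt)]))
    return te

# Toy check with a known driver: y copies x with a one-step lag.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)
print(f"TE(x -> y) = {transfer_entropy(x, y):.3f} bits")  # substantially > 0
print(f"TE(y -> x) = {transfer_entropy(y, x):.3f} bits")  # near zero
```

The asymmetry is the point: a genuine driver transfers information into the driven series but not the other way around, which is what lets the method claim causal direction rather than mere correlation.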
"We find an interesting web of connections between these different systems that combine to drive what we see in the fossil record," Peters says. "Genus diversity carries a very direct and strong signal of the sulfur isotopic signal. Similarly, the signal from sea level, how much the continents are covered by shallow seas, independently propagates into the history of marine animal diversity."
The dramatic changes in biodiversity seen in the fossil record at many different timescales -- including both proliferations and mass extinctions as marine animals diversified, evolved, and moved onto land -- likely arose through biological responses to changes in the global carbon and sulfur cycles and sea level through geologic time.
The strength of the interactions also shows that the fossil record, despite its incompleteness and the influence of sampling, is a good representation of marine biodiversity over the past half-billion years.
"These results show that the number of species in the oceans through time has been influenced by the amount and availability of carbon, oxygen and sulfur, and by sea level," says Lisa Boush, program director in the National Science Foundation's Division of Earth Sciences, which funded the research. "The study allows us to better understand how modern changes in the environment might affect biodiversity today and in the future."
Peters says the findings also emphasize the interconnectedness of physical, chemical, and biological processes on Earth.
Read more at Science Daily
"It's a question that goes back a long way to the time of Darwin, who looked at the fossil record and tried to understand what it tells us about the history of life," says Shanan Peters, an assistant professor of geoscience at the University of Wisconsin-Madison.
In fact, the fossil record can tell us a great deal, he says in a new study. In a report published on Nov. 25 in Science magazine, he and colleague Bjarte Hannisdal, of the University of Bergen in Norway, show that the evolution of marine life over the past 500 million years has been robustly and independently driven by both ocean chemistry and sea level changes.
The time period studied covered most of the Phanerozoic eon, which extends to the present and includes the evolution of most plant and animal life.
Hannisdal and Peters analyzed fossil data from the Paleobiology Database along with paleoenvironmental proxy records and data on the rock record that link to ancient global climates, tectonic movement, continental flooding, and changes in biogeochemistry, particularly with respect to oxygen, carbon, and sulfur cycles. They used a method called information transfer that allowed them to identify causal relationships -- not just general associations -- between diversity and environmental proxy records.
"We find an interesting web of connections between these different systems that combine to drive what we see in the fossil record," Peters says. "Genus diversity carries a very direct and strong signal of the sulfur isotopic signal. Similarly, the signal from sea level, how much the continents are covered by shallow seas, independently propagates into the history of marine animal diversity."
The dramatic changes in biodiversity seen in the fossil record at many different timescales -- including both proliferations and mass extinctions as marine animals diversified, evolved, and moved onto land -- likely arose through biological responses to changes in the global carbon and sulfur cycles and sea level through geologic time.
The strength of the interactions also shows that the fossil record, despite its incompleteness and the influence of sampling, is a good representation of marine biodiversity over the past half-billion years.
"These results show that the number of species in the oceans through time has been influenced by the amount and availability of carbon, oxygen and sulfur, and by sea level," says Lisa Boush, program director in the National Science Foundation's Division of Earth Sciences, which funded the research. "The study allows us to better understand how modern changes in the environment might affect biodiversity today and in the future."
Peters says the findings also emphasize the interconnectedness of physical, chemical, and biological processes on Earth.
Read more at Science Daily
Spiders, Webs and Insects: A New Perspective On Evolutionary History
The orb web, typical of a large number of spider species, has a single evolutionary origin, according to molecular phylogenetic research reported in the Proceedings of the Royal Society. The study, co-authored by lecturer Miquel A. Arnedo of the Department of Animal Biology, who conducts research at the Institute for Research on Biodiversity (IRBio) of the University of Barcelona, also presents the hypothesis that the diversification of spider webs was driven by the need to occupy new natural habitats (trunks, stems, etc.) and to make more efficient use of natural resources.
Spiders are one of the oldest and most diverse groups of species on Earth, with a fossil record that dates back to the Devonian Period (some 380 million years ago). With almost 40,000 identified species, spiders are the predominant arthropod predators of microfauna in the natural environment. The study, which applied molecular biology and bioinformatics techniques to examine evolutionary patterns, focused on the phylogenetic analysis of DNA sequences. Specifically, the team of experts studied the molecular differences in six genetic markers taken from a taxonomic sample of 291 spider species, representing 21 of the 22 families of Orbiculariae (used in the study to refer to Deinopoidea, Araneoidea and Nicodamidae).
As Miquel A. Arnedo explains, "This scientific study looks at the most complete taxonomic sample examined to date, in terms of the number of species and families represented, to understand the phylogeny of spiders that weave orb webs, analysing the DNA sequences of all available genetic markers."
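To give a flavor of what "phylogenetic analysis of DNA sequences" involves, here is a deliberately tiny Python sketch: it computes pairwise p-distances between a few invented marker sequences and clusters them with UPGMA. The taxon names and sequences are stand-ins, and the study itself used far more sophisticated model-based methods on real markers.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Invented stand-in alignments for four taxa; the real study compared six
# markers across 291 species.
seqs = {
    "Deinopoidea_sp": "ACGTACGTACGTACGA",
    "Araneoidea_sp":  "ACGTACGTACGTTCGA",
    "Nicodamidae_sp": "ACGAACGTACCTTCGA",
    "outgroup_sp":    "TCGAACTTACCTTCAA",
}
names = list(seqs)

def p_distance(a, b):
    """Fraction of aligned sites that differ between two sequences."""
    return sum(u != v for u, v in zip(a, b)) / len(a)

n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = p_distance(seqs[names[i]], seqs[names[j]])

# UPGMA (average-linkage) clustering of the distance matrix; each row of
# the output records which two clusters merged and at what distance.
print(linkage(squareform(dist), method="average"))
```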
Why did spider webs diversify?
Orb weavers appeared approximately 200 million years ago, in the Middle Triassic, and underwent rapid diversification during the latter stages of the Triassic and the Early Jurassic. What are the causes of this evolutionary process?
Traditionally, the diversification of spider webs has been linked to the spread of insects, which are the spider's main prey, and flowering plants (angiosperms). The authors of the new study formulate a new hypothesis to describe this evolutionary phenomenon. "In the article, we suggest that the changes in spider webs are intended to facilitate the move to new habitats and to make better use of the trophic resources in different ecosystems. In other words, the abundance of prey and the structural complexity of the habitat are more influential factors than the actual diversification of the prey. Moreover, according to our study, the biological explosion of orb webs would not have coincided with the rapid diversification of insects," says Miquel A. Arnedo.
According to Arnedo, "Spiders are generalist predators, and few cases have been found in which they specialize in a particular type of prey. In addition, most spiders do not weave orb webs or produce more irregular forms. It is not the capacity to weave webs that has enabled spiders to diversify but rather their ability to produce silk, and this is not the same thing. Silk threads, which are produced by many arachnid species, can be used for anchorage, movement, nest-building, protecting eggs, and so on."
Traps that also evolved
Over their evolutionary history, spiders have come up with different strategies for catching prey. Orb webs, which are difficult to construct but extremely flexible and resistant, are the result of a complex stereotyped pattern of behaviour in spiders. In the article, the experts also consider a reconstruction of the evolution of webs, referring to examples such as the families Linyphiidae and Theridiidae, which weave simple webs that are easier to build, in which the spider is protected from predators, or the Mimetidae family, which prey on other spiders on their own webs by simulating the vibrations caused by trapped prey.
"The great diversification began with sticky silk, which is a more efficient material and easier for spiders to produce. In our study, we found that all of the evolutionary innovations that have occurred since the first orb webs emerge independently, require less energy to be expended by the spider, and demand fewer behavioural patterns," explains Arnedo. This suggests that spiders, rather than following the evolution of insects, design new strategies that allow them to occupy the largest possible ecological space.
Miquel A. Arnedo, winner of the 2009 ICREA Academia award, directs the UB's Arthropod Systematics and Evolution Laboratory, which focuses on the study of factors that determine the diversification of living species. The group works on various research areas based on the use of molecular markers to study the function and evolution of arachnids, a field in which next-generation sequencing technologies will pave the way for a more complete understanding of evolutionary processes.
Read more at Science Daily
Evolutionary Biology Pioneer Lynn Margulis Dies
Death has not always been a part of life; for perhaps the first billion years in which life existed on Earth, death was an aberration, a function of adverse environmental conditions -- a pool of water being too hot, or too cold -- rather than an inevitability. Death as a way of life is, it appears, a consequence of the development of sex.
But how did sex begin? And why? In the words of evolutionary biologist Lynn Margulis, the answer was a by-product of cannibalism. Writing in Scientific American in 1994, she argued that:
Sex began when unfavorable seasonal changes in the environment caused our protoctist predecessors to engage in attempts at cannibalism that were only partially successful. The result was a monster bearing the cells and genes of at least two individuals (as does the fertilized egg today) ... Those microbial ancestors that fused survived, whereas those that evaded sexual liaisons died.
The following year, in a chapter for a book entitled The Third Culture: Beyond the Scientific Revolution, by John Brockman, she wrote, by way of expansion and elucidation:
It may have started when one sort of squirming bacterium invaded another — seeking food, of course. But certain invasions evolved into truces; associations once ferocious became benign. When swimming bacterial would-be invaders took up residence inside their sluggish hosts, this joining of forces created a new whole that was, in effect, far greater than the sum of its parts: faster swimmers capable of moving large numbers of genes evolved. Some of these newcomers were uniquely competent in the evolutionary struggle. Further bacterial associations were added on, as the modern cell evolved.
In other words, Margulis proposed, so-called eukaryotes -- cells with nuclei, of which all multicellular organisms, including humans, are composed -- are the result of the symbiotic fusion, eons ago, of different prokaryotic lifeforms.
The result of this joining of forces is inside each and every one of us, in the form of the cells that combine to create our skin, our hair, our eyes, our blood. Perhaps the strongest evidence that this is so, she wrote in the above-excerpted chapter, is the existence within most eukaryote cells of mitochondria, which have their own DNA:
In addition to the nuclear DNA, which is the human genome, each of us also has mitochondrial DNA. Our mitochondria, a completely different lineage, are inherited only from our mothers. None of our mitochondrial DNA comes from our fathers. Thus, in every fungus, animal, or plant (and in most protoctists), at least two distinct genealogies exist side by side. That, in itself, is a clue that at some point these organelles were distinct microorganisms that joined forces.
By the time she wrote the above, her theory had become accepted as orthodoxy. When first she proposed it, it was anything but. Her initial 1966 paper was rejected by about 15 different publications, as she recalled, before being picked up by the Journal of Theoretical Biology; despite having a publishing contract, when she expanded that paper into a book, her initial publisher rejected the manuscript.
It was eventually published in 1970 as The Origin of Eukaryotic Cells, by Yale University Press. In a response to her chapter in Brockman's book, Richard Dawkins wrote of, not only her theory, but her tenacity in its advocation: "This is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it."
Perhaps because she saw evolution as a symbiotic, rather than purely 'selfish', process, Margulis was also drawn to the Gaia hypothesis that, around the time she was pushing her own maverick theory, was being developed by James Lovelock.
Gaia, explains John Horgan, essentially postulates that Earth's biota chemically regulates its environment in such a way as to promote its own survival; but although Margulis came to be associated with the hypothesis almost as closely as Lovelock, she resisted some of the more 'spiritual' vocabulary he used to describe it. She rejected, for example, his portrayal of Earth as a living organism, preferring instead the notion that it is one big ecosystem composed of many smaller ones.
Born in Chicago in 1938, Margulis enrolled at the University of Chicago when she was just 14. At 19, she married the astronomer Carl Sagan, and one's brain bows down on metaphorical bended knee at the thought of some of the intellectual conversations that must have taken place at the breakfast table during that marriage.
Read more at Discovery News
Nov 23, 2011
Studying Bat Skulls, Evolutionary Biologists Discover How Species Evolve
A new study involving bat skulls, bite force measurements and scat samples collected by an international team of evolutionary biologists is helping to solve a nagging question of evolution: why some groups of animals develop scores of different species over time while others evolve only a few. Their findings appear in the current issue of Proceedings of the Royal Society B: Biological Sciences.
To answer this question, Elizabeth Dumont at the University of Massachusetts Amherst and Liliana Dávalos of Stony Brook University together with colleagues at UCLA and the Leibniz Institute for Zoo and Wildlife Research, Berlin, compiled large amounts of data on the diet, bite force and skull shape in a family of New World bats, and took advantage of new statistical techniques to date and document changes in the rate of evolution of these traits and the number of species over time.
They investigated why there are so many more species of New World Leaf-Nosed bats, nearly 200, while their closest relatives produced only 10 species over the same period of time. Most bats are insect feeders, while the New World Leaf-Nosed bats eat nectar, fruit, frogs, lizards and even blood.
One hypothesis is that the evolution of a trait, such as head shape, that gives access to new resources can lead to the rapid evolution of many new species. As Dumont and Dávalos explain, connecting changes in body structure to an ecological opportunity requires showing that a significant increase in the number of species occurred in tandem with the appearance of new anatomical traits, and that those traits are associated with enhanced resource use.
"If the availability of fruit provided the ecological opportunity that, in the presence of anatomical innovations that allowed eating the fruit, led to a significant increase in the birth of new species, then skull morphology should predict both diet and bite force" they said. They found support for these predictions by analyzing thousands of evolutionary trees of more than 150 species, measuring over 600 individual bat skulls of 85 species, testing bite force in over 500 individual bats from 39 species in the field and examining thousands of scat samples to identify the bats' diets.
They found that the emergence of a new skull shape in New World Leaf-Nosed bats about 15 million years ago led to an explosion of many new bat species. The new shape was a low, broad skull that allowed even small bats to produce the strong bite needed to eat hard fruits. The rate of birth of new species jumped as this new shape evolved, and this group of bats quickly increased the proportion of fruit in their diet. Change in shape slowed once this new skull had evolved.
It can be difficult for evolutionary biologists to demonstrate that traits related to anatomical changes, also called "morphological innovations" such as a new skull shape, give certain groups a survival advantage when new food sources, such as hard fruits, become available.
Read more at Science Daily
Spiders Coat Webs With Toxic Chemicals for Self-Defense
A tasty spider and its bundle of prey might make an excellent treat for a bunch of marauding ants. So to protect their homes, and themselves, golden orb weavers (Nephila antipodiana) use chemical warfare.
Researchers had long wondered how this species of orb spider — typically found in the ant-rich tropical forests of Singapore, Indonesia, Thailand and the Philippines — manages to avoid becoming dinner.
They found that the spiders coat their webs in a chemical called 2-pyrrolidinone, which acts as a deterrent to many insect species, including ants, moths and caterpillars. The findings appear Nov. 23 in the Proceedings of the Royal Society B.
In an experiment, ants refused to cross lengths of spider silk covered in 2-pyrrolidinone. The researchers suggest this may be because similar chemicals are found in the poison glands of several ant species. The substance could also be triggering a panic response in the ants, which use the same chemical as an alarm pheromone.
Read more at Wired Science
Dinosaur Found With Bird In Its Gut
At least one non-avian dinosaur, Microraptor gui, feasted on birds, according to a new paper published in the journal Proceedings of the National Academy of Sciences.
The evidence is strong. Paleontologists found a fossil of the small, bird-like dinosaur with remains of a bird in its gut. It appears that the dinosaur grabbed the bird and swallowed it whole. The bird must have been the dinosaur's last meal, given how fossils for the two animals were preserved together over millions of years.
(Photograph (A) and camera lucida drawing (B) of the new Microraptor gui specimen; Image: Zhou Zhonghe)
(Close up of the abdomen of the new Microraptor; the remains of the enantiornithine bird are indicated by blue; Image: Zhou Zhonghe)
Before I continue here, for the sake of brevity, the word "dinosaur" by itself in this text will refer to a non-avian dinosaur. Birds are living dinosaurs, confusing the whole matter. Paleontologists continue to grapple over the differences and similarities between the two animal groups. For example, there's ongoing debate on whether or not the world's supposed oldest known bird was actually a dinosaur.
This latest discovery sheds important light on how some dinosaurs and birds interacted.
The project leader was Jingmai O'Connor of the Institute of Vertebrate Paleontology and Paleoanthropology in China. O'Connor and colleagues Zhonghe Zhou and Xing Xu write in the paper, "Preserved indicators of diet are extremely rare in the fossil record; even more so is unequivocal direct evidence for predator–prey relationships. Here, we report on a unique specimen of the small nonavian theropod Microraptor gui from the Early Cretaceous Jehol biota, China, which has the remains of an adult enantiornithine bird preserved in its abdomen, most likely not scavenged, but captured and consumed by the dinosaur."
I love that enantiornithines have their own Facebook page. As you can see there, they were extinct birds that still retained teeth and had clawed fingers on their wings. These birds evolved for tree living, given how their legs and feet were shaped. The Microraptor therefore likely lived in trees too.
Read more at Discovery News
Water's Ultimate Freezing Point
How low can you go?
For water, the answer is -55 degrees Fahrenheit (-48 degrees C; 225 K). University of Utah researchers found that is the lowest temperature liquid water can reach before it becomes ice.
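As a quick consistency check, the three quoted values are the same temperature expressed in different scales; a few lines of Python confirm the rounding.

```python
def c_to_f(c):
    # Fahrenheit from Celsius: F = C * 9/5 + 32
    return c * 9 / 5 + 32

def c_to_k(c):
    # Kelvin from Celsius: K = C + 273.15
    return c + 273.15

limit_c = -48.0  # the reported supercooling limit of liquid water
print(f"{limit_c} C = {c_to_f(limit_c):.1f} F = {c_to_k(limit_c):.2f} K")
# prints: -48.0 C = -54.4 F = 225.15 K, matching the article's rounded values
```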
Back in grade school, we all learned that water under normal atmospheric pressure freezes at 32 degrees Fahrenheit (0 C), but that rule only holds for water with tiny impurities.
"If you have liquid water and you want to form ice, then you have to first form a small nucleus or seed of ice from the liquid. The liquid has to give birth to ice," said chemist and co-author of the study Valeria Molinero in a press release.
Impurities in water serve as those seeds.
But in very pure water, "the only way you can form a nucleus is by spontaneously changing the structure of the liquid," Molinero explains. She and co-author Emily Moore published their study today in the journal Nature.
Under the right conditions, pure water can get super cold.
When it gets that cold, the liquid passes into another intermediate form of liquid with the properties of both regular liquid water and ice. But this intermediate phase exists for only a short time. Its very existence has been difficult to prove. Discovery News reported on research that used computer models to observe the properties of this elusive liquid.
Read more at Discovery News
Nov 22, 2011
Tiny Flame Shines Light On Supernovae Explosions
Starting from the behavior of small flames in the laboratory, a team of researchers has gained new insights into the titanic forces that drive Type Ia supernova explosions. These stellar explosions are important tools for studying the evolution of the universe, so a better understanding of how they behave would help answer some of the fundamental questions in astronomy.
Type Ia supernovae form when a white dwarf star -- the left-over cinder of a star like our Sun -- accumulates so much mass from a companion star that it reignites its collapsed stellar furnace and detonates, briefly outshining all other stars in its host galaxy. Because these stellar explosions have a characteristic brightness, astronomers use them to calculate cosmic distances. (It was by studying Type Ia supernovae that two independent research teams determined that the expansion of the Universe was accelerating, earning them the 2011 Nobel Prize in Physics).
To better understand the complex conditions driving this type of supernova, the researchers performed new 3-D calculations of the turbulence that is thought to push a slow-burning flame past its limits, causing a rapid detonation -- the so-called deflagration-to-detonation transition (DDT). How this transition might occur is hotly debated, and these calculations provide insights into what is happening at the moment when the white dwarf star makes this spectacular transition to supernova. "Turbulence properties inferred from these simulations provides insight into the DDT process, if it occurs," said Aaron Jackson, currently an NRC Research Associate working in the Laboratory for Computational Physics and Fluid Dynamics at the Naval Research Laboratory in Washington, D.C. At the time of this research, Jackson was a graduate student at Stony Brook University on Long Island, New York.
Jackson and his colleagues Dean Townsley from the University of Alabama at Tuscaloosa, and Alan Calder also of Stony Brook, presented their data at the American Physical Society's (APS) Division of Fluid Dynamics (DFD) meeting in Baltimore, Nov. 20-22, 2011.
While the deflagration-to-detonation transition mechanism is still not well understood, a prevailing hypothesis in the astrophysics community is that if turbulence is intense enough, DDT will occur. Extreme turbulent intensities inferred in the white dwarf from the researchers' simulations suggest DDT is likely, but the lack of knowledge about the process allows a large range of outcomes from the explosion. Matching simulations to observed supernovae can identify likely conditions for DDT.
"There are a few options for how to simulate how they [supernovae] might work, each of which has different advantages and disadvantages," said Townsley. "Our goal is to provide a more realistic simulation of how a given supernova scenario will perform, but that is a long-term goal and involves many different improvements that are still in progress."
Read more at Science Daily
Predators Drive the Evolution of Poison Dart Frogs' Skin Patterns
Natural selection has played a role in the development of the many skin patterns of the tiny Ranitomeya imitator poison dart frog, according to a study that will be published in an upcoming edition of American Naturalist by University of Montreal biologist Mathieu Chouteau.
The researcher's methodology was rather unusual: on three occasions over three days, at two different sites, Chouteau measured the number of attacks made on fake frogs by counting how many times they had been pecked. Those that were attacked the least looked like local frogs, while those that came from another area had obviously been targeted.
The brightly coloured frogs that we find in tropical forests are in fact sending a clear message to predators: "don't come near me, I'm poisonous!" But why would a single species need multiple patterns when one would do? It appears that when predators do not recognize a poisonous frog as a member of the local group, they attack in the hope that they have chanced upon edible prey. "When predators see that their targets are of a different species, they attack. Over the long term, that explains how patterns and colours become uniform in an area," said Bernard Angers, who directed Chouteau's doctoral research.
A total of 3,600 life-size plasticine models, each less than one centimetre long, were used in the study. The menagerie was divided between two carefully identified sites in the Amazon forest. "The trickiest part was transporting my models without arousing suspicion at the airport and customs controls," Chouteau said. He chose plasticine following a review of scientific literature. "Many scientists have successfully used plasticine to create models of snakes, salamanders and poison dart frogs." The Peruvian part of the forest proved to be ideal for this study, as two radically different-looking groups of frogs are found there: one, living on a plain, has yellow stripes, and the other, living on a mountain, has green patches. The two colonies are ten kilometers apart. Nine hundred fake frogs were placed in each area in carefully targeted positions. Various combinations of colours and patterns were used.
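Whether local-pattern models really were attacked less often is, at bottom, a comparison of counts. Below is a hedged sketch of the kind of test that could be applied, with entirely hypothetical attack tallies; the study's actual numbers are not reported here.

```python
from scipy.stats import chi2_contingency

# Hypothetical tallies: of the 900 models per type, how many were pecked.
# These numbers are invented for illustration; the paper's counts differ.
table = [
    [40, 860],    # local colour pattern:   attacked, untouched
    [110, 790],   # foreign colour pattern: attacked, untouched
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # small p: attack rates differ
```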
Chouteau was particularly surprised by the "very small spatial scale at which the evolutionary process has taken place." Ten kilometers of separation sufficed for a clearly different adaptation to take place. "A second surprise was the learning abilities of the predator community, especially the speed at which the learning process takes place when a new and exotic defensive signal is introduced on a massive scale," Chouteau said.
Read more at Science Daily
Ancient Cave Lion Bones Reveal Big Cats’ Diet
A quarter larger than today’s lions, the European cave lion was one of the biggest cats around 12,000 years ago. Now, an unusually sophisticated analysis of its bones is revealing what these creatures ate—and why they may have disappeared.
Although they were certainly massive cats, the term “cave lion” is a bit of a misnomer. Unlike today’s lions, males probably didn’t have manes, and they appear to have been solitary hunters. What’s more, though their bones are best preserved in caves, they probably lived in the open. But they did have one thing in common with their modern relatives: they appear to have worried humans. The big cats show up in ice age cave paintings and in ivory figurines, suggesting that they were a major concern for our ancestors.
To figure out what these lions hunted, biogeologist Hervé Bocherens and colleagues at the University of Tübingen in Germany analyzed bone samples from 14 cave lions—found in four caves in France and central Europe—that lived between 12,000 and 40,000 years ago. The team focused on the chemical content of the bone collagen, which is often well-preserved, even in bones tens of thousands of years old. By incinerating a tiny fragment of preserved bone—usually less than a milligram—researchers can identify the molecules inside it and determine an animal's diet.
Scientists have perfected the technique over the years. It was used recently to look at the diet of Neandertals, but this is one of the first studies to use it to look at a nonhuman predator—and the analysis is now sensitive enough to look several steps down the food chain. This enabled Bocherens to determine not only what cave lions ate but also what their prey ate. And that made it possible to tell, for example, whether lions were targeting full-size cave bears or their more vulnerable cubs, because adults and babies eat different diets themselves. “There’s a difference between the [chemical] signal of adults and babies,” Bocherens says. “Babies drink the milk of the mother.”
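The food-chain inference rests on the fact that nitrogen isotope ratios (δ15N) in collagen rise by a roughly fixed step with each trophic level, commonly cited as 3-4 per mil, and that nursing young register about one step above their mothers. Here is a toy sketch of that arithmetic; the enrichment value and the collagen numbers are illustrative assumptions, not the study's measurements.

```python
def trophic_level(d15N_consumer, d15N_base, enrichment=3.4):
    """Trophic level relative to a baseline producer (level 1).

    enrichment is the assumed per-step rise in delta-15N; 3-4 per mil
    is the commonly cited range, not a value from this study.
    """
    return 1.0 + (d15N_consumer - d15N_base) / enrichment

# Illustrative collagen values in per mil (invented for this sketch):
print(f"{trophic_level(5.0, 1.0):.1f}")  # reindeer eating lichen -> ~2.2
print(f"{trophic_level(9.0, 1.0):.1f}")  # lion eating reindeer   -> ~3.4
# A nursing cub would register roughly one level above its mother.
```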
As it turned out, this distinction was important. Bocherens's analysis, reported in the 6 December issue of Quaternary International, revealed that the cave lions occasionally ate bear cubs but not adults. Their favorite food, however, was reindeer, which, Bocherens and his team determined, consumed massive quantities of lichen, much as their modern descendants do. The cave lion diet, Bocherens says, appears to have been much more finicky than that of today's lions, which eat just about anything they can catch.
The results may provide new insights into why cave lions died out. When Europe’s climate began to warm about 19,000 years ago, the landscape gradually changed from chilly, open steppes to denser forests. That would have made an inhospitable habitat for reindeer and for the cave lions that depended on them for food. (Cave bears were also dying out at the same time.)
Experts say the ability to dissect ancient diets so thoroughly is a tantalizing tool but that this particular study is too geographically limited to be conclusive about cave lions. “It’s quite astonishing that you can quite convincingly demonstrate what predators were eating tens of thousands of years ago,” says Anthony Stuart, a biologist at Durham University in the United Kingdom. “One obvious thing to do is extend the study to a wider area” to see how diets might have varied geographically. Cave lions, he notes, “ranged from Spain across Europe and Siberia all the way to the northwestern part of North America.”
Read more at Wired Science
Blame Your Crooked Teeth on Early Farmers
When humans turned from hunting and gathering to farming some 10,000 years ago, they set our species on the road to civilization. Agricultural surpluses led to division of labor, the rise of cities, and technological innovation. But civilization has had both its blessings and its curses. One downside of farming, a new study demonstrates, was a shortening of the human jaw that has left precious little room for our teeth and sends many of us to an orthodontist’s chair.
Although all living humans belong to one species, Homo sapiens, there are recognizable differences in the shapes of our skulls and faces across the world. In recent years, anthropologists have concluded that most of this geographic variation in skull shape is due to chance, so-called genetic drift, rather than natural selection. But some features of our faces, including the shape of our lower jaws, don’t seem to follow this random pattern.
A number of researchers have hypothesized that the advent of agriculture, which led to diets consisting of softer foods that required less chewing, led to modifications in the lower jaw, either through natural selection or from developmental changes caused by the way we use our jaws beginning in infancy. But evidence from ancient skeletons has been limited. To test the hypothesis, Noreen von Cramon-Taubadel, an anthropologist at the University of Kent in the United Kingdom, looked at skull and jaw shape in 11 populations, six of which live by farming and five of which are hunter-gatherers. The populations included people from Africa, Asia, Australia, Europe, and the Americas.
In the first part of her study, von Cramon-Taubadel measured the shapes of 322 crania and 295 jaws from museums, representing the 11 populations. She found a significant correlation between jaw shape and how each population made its living. Thus hunter-gatherers tended to have longer (more jutting) and narrower lower jaws, whereas those of farmers were relatively shorter and wider. But the form of the crania did not show this correlation, with one exception: The shape of the palate of the upper jaw, which is closely associated with the lower jaw and involved in chewing, also varied to some degree between farmers and hunter-gatherers.
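A correlation between a continuous shape measure and a two-category way of life can be quantified with a point-biserial correlation. The sketch below uses invented jaw measurements for eleven populations purely to illustrate the calculation; it is not von Cramon-Taubadel's data or her actual morphometric method.

```python
import numpy as np
from scipy.stats import pointbiserialr

# Invented jaw-shape values for 11 populations (0 = farmer, 1 = hunter-gatherer);
# purely illustrative, not the study's measurements.
subsistence = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
jaw_ratio = np.array([1.10, 1.12, 1.08, 1.11, 1.09, 1.13,
                      1.22, 1.25, 1.19, 1.24, 1.21])  # length/width

r, p = pointbiserialr(subsistence, jaw_ratio)
print(f"r = {r:.2f}, p = {p:.4f}")  # positive r: longer jaws in hunter-gatherers
```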
To see whether this dichotomy in jaw shape between farmers and hunter-gatherers could be due to other factors, von Cramon-Taubadel searched for possible correlations with geographic location, genetic history, and climate variation but found little or none. In her report published online this week in the Proceedings of the National Academy of Sciences, she concludes that the transition to farming — which involved the domestication of plants and animals, a major increase in food processing, and thus consumption of easier-to-chew food — altered the shape of the human jaw, making it shorter and less robust. And this shortening of the jaw, she suggests, led to greater crowding of the teeth and the orthodontist bills that plague many modern families.
As for whether these changes in jaw shape are due to natural selection over many generations or simply changes that arise anew in each growing infant, von Cramon-Taubadel cites experimental studies showing that animals raised on softer, more processed foods grow smaller jaws than those fed fresh, unprocessed food. But even if the jaw alterations were due to natural selection, she concludes, they would have taken place over a relatively short period of evolutionary time.
Read more at Wired Science
Nov 21, 2011
Hula Painted Frog Bounces Back From Extinction
A species of frog that was thought to have been made extinct during the notorious drainage of the Hula marshlands in Israel, has appeared again after more than 50 years of hiding.
The Palestinian or Hula painted frog (Discoglossus nigriventer) originally went missing when the Jewish National Fund drained the marshlands around the Hula Valley in the 1950s. The swamp was a breeding ground for malaria-carrying mosquitoes, and the disease was killing off the local population.
The JNF removed the water from the swamp and redirected the flow of water to the river Jordan with artificial estuaries. But the operation led to numerous knock-on effects — the reclaimed land was useless for agriculture, toxins invaded the river and dumped peat routinely caught fire.
The disastrous operation also led to huge destruction of ecosystems, wiping out water plants, tropical aquatic ferns, the ray-finned fish Acanthobrama hulensis and the cichlid fish Tristramella intermedia. Until this week, it was thought that the Hula painted frog was among the lost species.
But a routine patrol at the Ha’Hula lake by Israel’s Nature and Parks Authority turned up a mysterious, unknown female frog and took it back to the lab for testing. It was soon confirmed that it was a Hula painted frog, and the rare species had hung on amongst the devastation of its habitat.
Read more at Wired Science
New York City Buzzing With New Bee Species
The American Museum of Natural History has announced the discovery of eleven new species of bees, including four from New York City and its suburbs.
The bees, described in the journal Zootaxa, include small-to-medium-sized sweat bees, so named because of their attraction to the salt in human sweat. A team of scientists identified the bees with the help of the vast digital and physical bee collections at the AMNH.
A standout bee among the 11 is Lasioglossum gotham, aka the Gotham Bee. It was spotted in the New York Botanical Garden, in the Bronx, and in the Brooklyn Botanic Garden.
“Declines in honey bees and other bees have received a lot of attention in recent years, but it is not generally appreciated that bee species entirely new to science are still being discovered even within our largest cities," co-author John Ascher, a research scientist in the museum’s Division of Invertebrate Zoology, was quoted as saying in a press release. "New York City has a surprising diversity of bees, with more than 250 described species recorded."
Ascher helped to collect and curate specimens of some of the new species. He leads the Digital Bee Collections Network, a collaborative project that serves as the online clearinghouse for information about the world’s bee species.
The 11 new bees also include Lasioglossum ascheri, which was described from just two specimens found in Westchester and Suffolk counties; L. katherinae from Brooklyn and Nassau County; Lasioglossum rozeni from Suffolk County; and L. georgeickworti from Queens and Nassau and Suffolk counties.
“It's remarkable that so many bees are able to live in such a major urban area,” co-author Jason Gibbs, a Cornell University researcher, was quoted as saying. “Natural areas like urban parks and rooftop and botanical gardens provide the nesting sites and floral diversity that bees need. This little bee (Gotham Bee) has been quietly living in the city, pollinating flowers in people’s gardens for years. It’s a pleasure to help give it some well-deserved recognition.”
Over the past decade there's been renewed interest in bees, partly because of a complex problem called Colony Collapse Disorder, which has killed countless bees in recent years. These buzzing insects are the most important pollinators in the Northeastern United States, fertilizing plants as they fly from flower to flower on pollen-collecting missions.
The discovery of new bee species in New York City and the vicinity highlights the need for additional study of native bee diversity across the country, Gibbs believes.
Read more at Discovery News
Ice Mummy May Have Smashed Eye in Fall
A sharp incision in his right eye may have contributed to the rapid demise of Ötzi the Iceman, the famous mummy who died in the Italian Alps more than 5,000 years ago.
Twenty years after two hikers stumbled upon the Iceman in a melting glacier, new analyses have revealed that a deep cut likely led to heavy bleeding in the man's eye. In the cold, high-altitude conditions where he was found, that kind of injury would have been tough to recover from.
The official opinion remains that an arrow in his left shoulder was the cause of death for Ötzi. But the new study raises the possibility -- for some, at least -- that he fell over after being shot by an arrow. And, at higher than 10,000 feet in elevation, his alpine fall may have made the situation much worse.
"Maybe he fell down or maybe he had a fight up there, nobody knows," said Wolfgang Recheis, a physicist in the radiology department at the University of Innsbruck in Austria. "With this cut alone, at 3,250 meters, it would have been a deadly wound up there. Bleeding to death in the late afternoon when it was getting cold up there, this could be really dangerous."
Ever since his discovery in 1991, Ötzi has been measured, photographed, X-rayed, CT-scanned and endlessly speculated about. The Iceman Photoscan website allows anyone to scrutinize every inch of the body, which belonged to a 5'3", 110-pound, 45-year-old man.
Ten years ago, researchers found a flint arrowhead buried in Ötzi's left shoulder blade inside a two-centimeter (0.8-inch) wide hole. They concluded that the arrow pierced a major artery and killed him within minutes. At a conference in September, experts reaffirmed that assessment.
But in one of the latest studies, Recheis used the most advanced CT-scanning technology available to take a closer look at Ötzi's right eye. Earlier examinations had shown a crack in the skull in that spot. The new work revealed a deep incision in the same place.
Scans also revealed iron crystals around the right eye and forehead, which produce a bluish hue. And since the region's rocks are naturally low in iron, Recheis and colleagues suspect the iron is a sign of a hematoma, or massive bleeding outside of the blood vessels. A biopsy is needed for confirmation.
Despite the officially stated opinion on Ötzi's cause of death, Recheis is not convinced that the arrow wound was deadly on its own.
"My South Tyrolean colleagues say the arrow most probably hit the sub-clavicular artery or other vital vessel and thus the Iceman died," Recheis said. "But there are doubts. It's justified that the arrow did not hit any vital vessels or nerves as far as we can say from the data we have."
"This could be the first thing," he added. "He was up there and shot by an arrow. And then he fell down, cut his eye and bled to death."
Albert Zink, head of the EURAC Institute for Mummies and the Iceman in Bolzano, Italy, was surprised and perplexed to hear of these new claims. At a conference this fall, he said, a whole table-full of experts discussed the evidence and unanimously agreed that the arrow killed the Iceman.
The shoulder wound, he said, was clearly fresh and bleeding heavily when Ötzi died.
Read more at Discovery News
Who REALLY Discovered the Expanding Universe?
Astronomer Edwin Hubble's landmark paper on the rate of expansion of the universe was published in 1929, overturning the long-held belief among scientists that the universe was static and unchanging.
That's why the Hubble Constant (the number that describes the rate of expansion) is named after him, not to mention the Hubble Space Telescope.
Less well known is that Hubble might not have been the first person to make this momentous discovery.
A Belgian priest and cosmologist named Georges Lemaitre published a paper reaching very similar conclusions two years earlier. It's a contentious issue among cosmologists, needless to say.
The problem was, Lemaitre's paper was in French, and appeared in a rather obscure journal: Annals of the Brussels Scientific Society. This limited its distribution throughout the scientific community (at least initially).
Yet even when his paper was finally translated and broadly disseminated, certain key elements went missing, sparking rumors that prominent scientists -- Sir Arthur Eddington, perhaps, or even Hubble himself -- had deliberately "censored" Lemaitre's paper to ensure Hubble's scientific legacy.
What happened? The answer might lie in a new article in Nature by cosmologist and author Mario Livio.
It's a long, complicated story, but here's the CliffsNotes version...
In the late 1920s, astronomer Edwin Hubble was studying distant galaxies at the Carnegie Observatories in Pasadena, home of the spanking new 100-inch Hooker telescope on Mount Wilson.
He measured the brightness of so-called Cepheid variable stars -- a type of periodically pulsing star -- based on the "Period-Luminosity Relation" discovered by Henrietta Swan Leavitt. Basically, if you know how long it takes for the star to go from bright to dim, this will tell you how bright it actually is. And once you know that, you have a means of measuring distance.
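In code, those two steps — period to absolute brightness, then brightness difference to distance — look roughly like the sketch below. The period-luminosity coefficients are one approximate modern V-band calibration, assumed here for illustration; Hubble's own calibration differed.

```python
import math

def cepheid_distance(period_days, apparent_mag):
    """Distance in parsecs from a Cepheid's period and apparent magnitude.

    The period-luminosity coefficients are one approximate modern V-band
    calibration (an assumption here), combined with the distance modulus.
    """
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    return 10 ** ((apparent_mag - abs_mag + 5) / 5)

# A Cepheid pulsing with a 10-day period, seen at apparent magnitude 18:
print(f"{cepheid_distance(10.0, 18.0):.2e} pc")  # ~2.6e+05 pc
```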
So Hubble was able to deduce the relative distance of the galaxies. He combined those observations with data collected in 1912 by Vesto Slipher. Slipher is usually credited with being the first to notice that the light the galaxies emitted had a pronounced “shift” toward the red end of the electromagnetic spectrum, indicating that they were moving away from Earth.
Next Hubble plotted the velocity (indicated by the redshift) against relative distance, to get the graph at the top of this post. To a casual observer, it might seem like a random scatter of points, with some clustering hinting at a possible pattern.
But Hubble wasn't a casual observer; he was a frickin' genius. He looked at that graph and drew a straight line through all those data points. As telescope resolutions improved over the ensuing decades, Hubble's half-intuitive leap proved correct. Plot the same data today, and the points will fall neatly along the line Hubble drew.
In mathematical terms, that straight line indicates a linear function. That is, the redshift of distant galaxies increased as a linear function of their distance. Hubble reasoned (correctly) that the longer the light has been traveling, the more time there has been for space to expand, and hence the greater the red shift of the light’s wavelength.
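Numerically, Hubble's constant is just the slope of that line. Here is a minimal sketch with invented velocity-distance pairs, chosen so the slope lands near Hubble's original (and famously too-high) estimate of about 500 km/s/Mpc; the data are illustrative only.

```python
import numpy as np

# Invented (distance, velocity) pairs standing in for Hubble's 1929 sample;
# the scatter is deliberate, and the numbers are illustrative only.
distance = np.array([0.5, 0.9, 1.4, 2.0, 2.0, 3.5, 4.0])    # Mpc
velocity = np.array([200, 650, 500, 850, 1100, 1700, 2100])  # km/s

# Least-squares slope of a line through the origin: v = H0 * d
H0 = (distance @ velocity) / (distance @ distance)
print(f"H0 ~ {H0:.0f} km/s/Mpc")  # ~500, near Hubble's original estimate
```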
So he proposed a law: the greater the distance between any two galaxies, the greater their relative speed of separation. Based on that law, he arrived at an inescapable conclusion: the cosmos was still expanding. And that, of course, changed everything in the field of cosmology.
Now back to Lemaitre.
The academic quibbling usually hinges on whether Lemaitre fully derived Hubble's law on his own from actual observational data, or limited his analysis to theoretical predictions. Lemaitre did rely on data, it turns out -- the same redshift data from Slipher's observations, combined with estimates of galaxy distances inferred from Hubble's own observations, published in 1926. And he also correctly concluded that this meant the universe was expanding, not static.
Sean Carroll wrote about this over at Cosmic Variance back in 2007:
Lemaitre didn’t have very good data (and what he did was partly from Hubble, I gather). And for whatever reason, he did not plot velocity vs. distance. Instead, he seems to have taken the average velocity (which was known since the work of Vesto Slipher to be nonzero) and divided by some estimated average distance! If Hubble’s Law — the linear relation between velocity and distance — is true, that will correctly get you Hubble’s constant, but it’s definitely not enough to establish Hubble’s Law. If you have derived the law theoretically from the principles of general relativity applied to an expanding universe, and are convinced you are correct, maybe all you care about is fixing the value of the one free parameter in your model. But I think it’s still correct to say that credit for Hubble’s Law goes to Hubble — although it’s equally correct to remind people of the crucial role that Lemaitre played in the development of modern cosmology.
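Carroll's distinction is easy to see numerically: dividing the average velocity by the average distance recovers the same constant only if the relation really is linear through the origin. A quick sketch on the same invented data as above:

```python
import numpy as np

distance = np.array([0.5, 0.9, 1.4, 2.0, 2.0, 3.5, 4.0])    # Mpc, invented
velocity = np.array([200, 650, 500, 850, 1100, 1700, 2100])  # km/s, invented

slope = (distance @ velocity) / (distance @ distance)  # Hubble: fit v = H0*d
ratio = velocity.mean() / distance.mean()              # Lemaitre-style average
print(f"fit {slope:.0f} vs ratio {ratio:.0f} km/s/Mpc")  # both land near 500
```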
Eventually, of course, Lemaitre's crucial role was recognized: among others, Eddington published a long commentary on the work in 1930, calling it "brilliant." Thanks to Eddington, Lemaitre's original paper was translated and published again in 1931.
Oddly, however, some of his original calculations -- the ones that specifically related to the Hubble Constant -- were omitted. When this was discovered in 1982, speculation ran rampant, as science historians debated whether the omission had been deliberate, to preserve Hubble's claim to the discovery, or merely done in error.
Now Livio has weighed in on the controversy with the results of his own investigation into the matter in the Nov. 10 issue of Nature. He sifted through hundreds of letters preserved by the Royal Astronomical Society, along with minutes from the society's meetings and other archival materials.
Read more at Discovery News
Nov 20, 2011
We Are Hardwired to Walk
Watching a toddler take his first steps, it's obvious he's learned through observation and encouragement. But are our brains specially wired to learn this important behavior so early on?
Scientists are beginning to think so, according to one article published in the journal Science.
By looking at the body's neural circuitry in rats, humans and other animals, researchers pieced together that the process of learning to move around looks similar across species, despite most mammals moving on four legs and Homo sapiens stepping with two. The finding indicates that humans' innate ability to walk has a lengthy evolutionary history.
Previously, neuroscience experts thought pathways in the nervous system changed dramatically during human development, allowing new pathways to replace the deeply rooted connections shared with other mammals. Not so, says lead researcher Francesco Lacquaniti, according to one article in The Atlantic. He provided an analogy comparing learning to walk with learning to drive a stick-shift car. New drivers first learn the basic gears, but then add more with time. Yet even the most advanced drivers still need first gear to drive. The same principle applies to learning to walk, with humans and animals sharing a common circuitry and gradually building on it in different ways.
Instead, toddlers continue to use these primitive connections in muscles, adding to them as they become more skilled at walking. In the study, Lacquaniti and colleagues looked at the electrical activity in muscles in newborns, toddlers, preschoolers and adults. They found the same connections were at play among cats, guineafowl, non-human primates and rats as well.
Read more at Discovery News
Earthquake-Proof Bridge Being Built In San Francisco
Within the next 30 years, a major earthquake with a magnitude of 6.7 or higher is expected to hit San Francisco. That's why the Bay Bridge, which connects San Francisco and Oakland, is undergoing major seismic renovations.
During 1989's Loma Prieta Earthquake, which registered 6.9 on the Richter scale, a section of the Bay Bridge collapsed, killing a motorist. Since then, major studies were conducted to determine if California's largest bridges were seismically safe.
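For context on what those magnitudes mean, the scale is logarithmic: radiated seismic energy grows by roughly a factor of 10^1.5, about 32, per whole magnitude unit. A quick sketch of that arithmetic (the scaling exponent is the standard approximation, not a figure from the bridge studies):

```python
def energy_ratio(m_big, m_small):
    """Approximate seismic energy ratio: energy scales as ~10**(1.5 * M)."""
    return 10 ** (1.5 * (m_big - m_small))

print(f"{energy_ratio(6.9, 6.7):.1f}x")  # Loma Prieta vs a 6.7: ~2x the energy
print(f"{energy_ratio(7.7, 6.7):.0f}x")  # one full magnitude step: ~32x
```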
Results of those studies showed the Bay Bridge -- which is bisected by Yerba Buena Island -- needed major improvements. A one-mile stretch on the west span needed three on- and off-ramps replaced, while the entire east span needed to be completely replaced.
Construction began in 2006 on a 2.2-mile stretch. Its main architectural feature will be a single-tower Self-Anchored Suspension span (SAS). When completed in late 2013, its 1,263-foot main span length will make it the longest single-tower, self-anchored suspension bridge in the world.
Enhancing the bridge's form and function is the 525-foot single tower that is capable of withstanding a major earthquake. The steel tower is actually composed of four separate towers that are connected by shear link beams designed to move separately and act as shock absorbers in the event of a quake.
Also unique to the new SAS is that one continuous main cable will help support the deck, as opposed to traditional suspension bridges, which have two separate main cables.
This new design will include a nearly one-mile-long main cable anchored on the Oakland side of the bridge. It will then be carried over the single tower and, as it extends down, the cable will loop around two decks and their foundations on Yerba Buena Island, and back to the original anchor.
This compresses the entire span and allows for a level of cable tension to be sustained. In traditional suspension spans, any tension in the main cables is resisted by anchor points in the soil.
The estimated $6.281 billion project will also feature cantilevered bicycle and pedestrian paths and special lighting to accentuate the bridge's asymmetric design.
Read more at Discovery News