Danish scientists are expecting results that will show that “finding a mass-distributable and affordable cure to HIV is possible”.
They are conducting clinical trials to test a “novel strategy” in which the HIV virus is stripped from human DNA and destroyed permanently by the immune system.
The move would represent a dramatic step forward in the attempt to find a cure for the virus, which causes Aids.
The scientists are currently conducting human trials on their treatment, in the hope of proving that it is effective. It has already been found to work in laboratory tests.
The technique involves releasing the HIV virus from the “reservoirs” it forms in the DNA of human cells, bringing it to the surface of the cells. Once the virus reaches the surface, the body’s natural immune system, boosted by a “vaccine”, can kill it.
In vitro studies — those that use human cells in a laboratory — of the new technique proved so successful that in January, the Danish Research Council awarded the team 12 million Danish kroner (£1.5 million) to pursue their findings in clinical trials with human subjects.
These are now under way, and according to Dr Søgaard, the early signs are “promising”.
Dr Ole Søgaard, a senior researcher at Aarhus University Hospital in Denmark who is leading the study, said: “I am almost certain that we will be successful in releasing the reservoirs of HIV.
“The challenge will be getting the patients’ immune system to recognise the virus and destroy it. This depends on the strength and sensitivity of individual immune systems.”
Fifteen patients are currently taking part in the trials, and if they are found to have been successfully cured of HIV, the “cure” will be tested on a wider scale.
Dr Søgaard stressed that a cure is not the same as a preventative vaccine, and that raising awareness of unsafe behaviour, including unprotected sex and sharing needles, remains of paramount importance in combating HIV.
With modern HIV treatment, a patient can live an almost normal life, even into old age, with limited side effects.
However, if medication is stopped, HIV reservoirs become active and start to produce more of the virus, meaning that symptoms can reappear within two weeks.
Finding a cure would free a patient from the need to take continuous HIV medication, and save health services billions of pounds.
The technique is being researched in Britain, but studies have not yet moved on to the clinical trial stage. Five universities — Oxford, Cambridge, Imperial College London, University College London and King’s College London — have jointly formed the Collaborative HIV Eradication of Reservoirs UK Biomedical Research Centre group (CHERUB), which is dedicated to finding an HIV cure.
They have applied to the Medical Research Council for funding to conduct clinical trials, which will seek to combine techniques to release the reservoirs of HIV with immunotherapy to destroy the virus.
In addition, they are focusing on patients that have only recently been infected, as they believe this will improve chances of a cure. The group hopes to receive a funding decision in May.
“When the first patient is cured in this way it will be a spectacular moment,” says Dr John Frater, a clinical research fellow at the Nuffield School of Medicine, Oxford University, and a member of the CHERUB group.
“It will prove that we are heading in the right direction and demonstrate that a cure is possible. But I think it will be five years before we see a cure that can be offered on a large scale.”
The Danish team’s research is among the most advanced and fastest-moving in the world, because they have streamlined the process of taking the latest basic science discoveries into clinical testing. This allows the researchers to progress to clinical trials more quickly, accelerating the process and reaching reliable results sooner than many others.
The technique uses drugs called HDAC inhibitors, which are more commonly used in treating cancer, to drive HIV out of a patient’s DNA. The Danish researchers are using a particularly potent HDAC inhibitor called panobinostat.
Five years ago, the general consensus was that HIV could not be cured. But then Timothy Ray Brown, an HIV sufferer — who has become known in the field as the Berlin Patient — developed leukaemia.
He had a bone marrow transplant from a donor with a rare genetic mutation that made his cells resistant to HIV. As a result, in 2007 Mr Brown became the first person ever to be fully cured of the disease.
Replicating this procedure on a mass scale is impossible. Nevertheless, the Brown case caused a sea change in research, with scientists focusing on finding a cure as well as suppressing the symptoms.
Read more at The Telegraph
Apr 27, 2013
New Clues Behind Antimatter Mystery Found by LHC
Why the modern universe is primarily composed of matter and not antimatter has foxed physicists for decades, but a result from a Large Hadron Collider (LHC) experiment has uncovered a new clue behind the matter-antimatter asymmetry mystery.
During high-energy proton collisions in 2011, the world’s most powerful particle accelerator, located on the French-Swiss border near Geneva, created B0s mesons — hadronic subatomic particles composed of one quark and one antiquark — inside the LHCb experiment.
By observing the rapid decay of the B0s, physicists were able to identify the neutral particle’s decay products — i.e. the particles that it decayed into. After a huge number of proton collisions and B0s decay events, physicists have announced that more matter particles than antimatter particles are generated during neutral B0s decays.
“The discovery of the asymmetric behavior in the B0s particle comes with a significance of more than 5 sigma — a result that was only possible thanks to the large amount of data provided by the LHC and to the LHCb detector’s particle identification capabilities,” Pierluigi Campana, spokesperson for the LHCb collaboration, said in a CERN announcement on Wednesday (April 24). “Experiments elsewhere have not been in a position to accumulate a large enough number of B0s decays.” Five sigma is the statistical “gold standard” for a discovery in particle physics.
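To give a sense of how stringent that threshold is: 5 sigma corresponds to a tiny probability that the observed asymmetry is a statistical fluke. As a rough illustration (this calculation is a generic statistical sketch, not part of the LHCb analysis itself), the one-sided tail probability of a normal distribution at 5 standard deviations can be computed directly:

```python
import math

def sigma_to_pvalue(n_sigma: float) -> float:
    """One-sided tail probability of a standard normal distribution
    at n_sigma standard deviations above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

# The particle-physics "gold standard" for claiming a discovery:
p = sigma_to_pvalue(5.0)
print(f"5-sigma p-value: {p:.2e}")  # roughly 2.9e-07, about 1 in 3.5 million
```

In other words, if there were no real asymmetry, a fluctuation this large would occur by chance in only about one experiment in 3.5 million.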
This preference for matter over antimatter in the decay products of particles is known as “CP violation.” Generally, the laws of physics allow only equal quantities of matter and antimatter to be produced in decay events (a principle called “CP symmetry”). However, some exceptions are known within the Standard Model of particle physics.
Violation of CP symmetry was first documented in the 1960s in the decay of neutral kaon particles. Since then, Japanese and US labs have detected CP violation in B0 mesons, and more recently the LHCb experiment has detected it in B+ mesons. Now the decay of the B0s is showing similar behavior.
At the time of the Big Bang, equal quantities of matter and antimatter are thought to have been created, so how did matter overwhelm antimatter to form the universe we know and love today? The slight asymmetry in B0s decay products is a very small effect, but it is another clue as to why the nature of our universe prefers matter over antimatter.
Read more at Discovery News
Apr 26, 2013
Computer Scientists Suggest New Spin On Origins of Evolvability: Competition to Survive Not Necessary?
Scientists have long observed that species seem to have become increasingly capable of evolving in response to changes in the environment. But computer science researchers now say that the popular explanation of competition to survive in nature may not actually be necessary for evolvability to increase.
In a paper published this week in PLOS ONE, the researchers report that evolvability can increase over generations regardless of whether species are competing for food, habitat or other factors.
Using a simulated model they designed to mimic how organisms evolve, the researchers saw increasing evolvability even without competitive pressure.
"The explanation is that evolvable organisms separate themselves naturally from less evolvable organisms over time simply by becoming increasingly diverse," said Kenneth O. Stanley, an associate professor at the College of Engineering and Computer Science at the University of Central Florida. He co-wrote the paper about the study along with lead author Joel Lehman, a post-doctoral researcher at the University of Texas at Austin.
The finding could have implications for the origins of evolvability in many species.
"When new species appear in the future, they are most likely descendants of those that were evolvable in the past," Lehman said. "The result is that evolvable species accumulate over time even without selective pressure."
During the simulations, the team's simulated organisms became more evolvable without any pressure from other organisms out-competing them. The simulations were based on a conceptual algorithm.
"The algorithms used for the simulations are abstractly based on how organisms are evolved, but not on any particular real-life organism," explained Lehman.
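The idea can be sketched in a toy model (my own illustrative construction, not the authors’ actual code): each organism carries a heritable “step size” controlling how far its offspring mutate, there is no fitness function at all, and survival is limited only by a cap on how many organisms fit in each niche. Because large-step lineages diversify into empty niches, the population’s mean step size, a crude proxy for evolvability, tends to rise on its own:

```python
import math
import random

def simulate(generations=200, niche_cap=5, seed=1):
    random.seed(seed)
    # Each organism is (position, step_size); step_size is its heritable
    # mutation rate -- our crude stand-in for "evolvability".
    pop = [(0.0, random.uniform(0.05, 0.5)) for _ in range(50)]
    start_mean = sum(s for _, s in pop) / len(pop)
    for _ in range(generations):
        offspring = []
        for pos, step in pop:
            for _ in range(2):  # two offspring each; no fitness involved
                child_step = max(0.01, step * math.exp(random.gauss(0.0, 0.1)))
                offspring.append((pos + random.gauss(0.0, step), child_step))
        # Survival is limited only by niche crowding (a cap per unit-width
        # bin of position space) -- there is no selection for any trait.
        random.shuffle(offspring)
        counts, survivors = {}, []
        for pos, step in offspring:
            niche = math.floor(pos)
            if counts.get(niche, 0) < niche_cap:
                counts[niche] = counts.get(niche, 0) + 1
                survivors.append((pos, step))
        pop = survivors
    end_mean = sum(s for _, s in pop) / len(pop)
    return start_mean, end_mean

start, end = simulate()
print(f"mean step size: {start:.3f} -> {end:.3f}")
```

Running this, the mean step size typically drifts upward over the generations even though nothing ever out-competes anything: evolvable lineages simply accumulate by spreading out, which is the gist of the result described above.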
The team's hypothesis is unique and is in contrast to most popular theories for why evolvability increases.
"An important implication of this result is that traditional selective and adaptive explanations for phenomena such as increasing evolvability deserve more scrutiny and may turn out unnecessary in some cases," Stanley said.
Read more at Science Daily
Developmental Neurobiology: How the Brain Folds to Fit
During fetal development of the mammalian brain, the cerebral cortex undergoes a marked expansion in surface area in some species; in the species with the most expanded neuron numbers and surface areas, that expansion is accommodated by folding of the tissue. Researchers have now identified a key regulator of this crucial process.
Different regions of the mammalian brain are devoted to the performance of specific tasks. This in turn imposes particular demands on their development and structural organization. In the vertebrate forebrain, for instance, the cerebral cortex -- which is responsible for cognitive functions -- is remarkably expanded and extensively folded exclusively in mammalian species. The greater the degree of folding and the more furrows present, the larger the surface area available for reception and processing of neural information. In humans, the exterior of the developing brain remains smooth until about the sixth month of gestation. Only then do superficial folds begin to appear, ultimately dominating the entire brain. Mice, by contrast, have a much smaller, smooth cerebral cortex.
"The mechanisms that control the expansion and folding of the brain during fetal development have so far been mysterious," says Professor Magdalena Götz, a professor at the Institute of Physiology at LMU and Director of the Institute for Stem Cell Research at the Helmholtz Center Munich. Götz and her team have now pinpointed a major player involved in the molecular process that drives cortical expansion in the mouse. They were able to show that a novel nuclear protein called Trnp1 triggers the enormous increase in the numbers of nerve cells which forces the cortex to undergo a complex series of folds. Indeed, although the normal mouse brain has a smooth appearance, dynamic regulation of Trnp1 results in activating all necessary processes for the formation of a much enlarged and folded cerebral cortex.
Levels of Trnp1 control expansion and folding
"Trnp1 is critical for the expansion and folding of the cerebral cortex, and its expression level is dynamically controlled during development," says Götz. In the early embryo, Trnp1 is locally expressed in high concentrations. This promotes the proliferation of self-renewing multipotent neural stem cells and supports tangential expansion of the cerebral cortex. The subsequent fall in levels of Trnp1 is associated with an increase in the numbers of various intermediate progenitors and basal radial glial cells. This results in the ordered formation and migration of a much enlarged number of neurons forming folds in the growing cortex.
Read more at Science Daily
Physicists, Biologists Unite to Expose How Cancer Spreads
Cancer cells that can break out of a tumor and invade other organs are more aggressive and nimble than nonmalignant cells, according to a new multi-institutional nationwide study. These cells exert greater force on their environment and can more easily maneuver small spaces.
The researchers report in the journal Scientific Reports that a systematic comparison of metastatic breast-cancer cells to healthy breast cells revealed dramatic differences between the two cell lines in their mechanics, migration, oxygen response, protein production and ability to stick to surfaces. The researchers discovered new insights into how cells make the transition from nonmalignant to metastatic, a process that is not well understood.
The resulting catalogue of differences could one day help researchers detect cancerous cells earlier and prevent or treat metastatic cancer, which is responsible for 90 percent of all cancer deaths, according to the study. It was conducted by a network of 12 federally funded Physical Sciences-Oncology Centers (PS-OC) sponsored by the National Cancer Institute. PS-OC is a collaboration of researchers in the physical and biological sciences seeking a better understanding of the physical and chemical forces that shape the emergence and behavior of cancer.
A multi-institutional study including researchers from Princeton University's Physical Sciences-Oncology Center found that metastatic cancer cells are more aggressive and nimble than nonmalignant cells. The Princeton group used silicon-etched microchannels to study the behavior and physical properties of cancer cells. In this device, metastatic cancer cells enter the narrow channels at one end and accelerate as they rapidly move down the channel. Such high motility is a hallmark of metastasis and also indicative of high glucose metabolism, another hallmark of cancer. (Image by Guillaume Lambert)
"By bringing together different types of experimental expertise to systematically compare metastatic and nonmetastatic cells, we have advanced our knowledge of how metastasis occurs," said Robert Austin, professor of physics and leader of the Princeton PS-OC, along with senior co-investigator Thea Tlsty of the University of California-San Francisco.
Researchers with the Princeton PS-OC, for instance, determined that metastatic cells, in spite of moving more slowly than nonmalignant cells, move farther and in a straighter line, Austin said. The investigators studied the cells' behavior in tiny cell-sized chambers and channels etched out of silicon and designed to mimic the natural environment of the body's interior.
"The mobility of these metastatic cells is an essential feature of their ability to break through the tough membrane [the extracellular matrix] that the body uses to wall off the tumor from the rest of the body," Austin said. "These cells are essentially jail-breakers."
The tiny silicon chambers were built using Princeton's expertise in microfabrication technology -- typically used to create small technologies such as integrated circuits and solar cells -- and are an example of the type of expertise that physicists and engineers can bring to cancer research, Austin said. For the current study, the Princeton team included physics graduate students David Liao and Guillaume Lambert, and postdoctoral researchers Liyu Liu and Saurabh Vyawahare. They worked closely with a research group led by James Sturm, Princeton's William and Edna Macaleer Professor of Engineering and Applied Science and director of the Princeton Institute for the Science and Technology of Materials (PRISM) where the microfabrication was done.
The Princeton PS-OC also includes collaborators at the Johns Hopkins University School of Medicine, the Salk Institute for Biological Studies and the University of California-Santa Cruz.
The nationwide PS-OC program aims to crack the difficulty of understanding and treating cancer by bringing in researchers from physics, engineering, computer science and chemistry, said Nastaran Zahir Kuhn, program manager for the PS-OC at the National Cancer Institute.
Other notable findings from the paper include that metastatic cells recover more rapidly from the stress of a low-oxygen environment than nonmetastatic cells, which is consistent with previous studies. Although the low-oxygen environment did kill many of the metastatic cells, the survivors rebounded vigorously, underscoring the likely role of individual cells in the spread of cancer. The study also looked at total protein production and detected proteins in the metastatic cells that are consistent with the physical properties such as mobility that malignant cells need to invade the extracellular matrix.
"The PS-OC program aims to bring physical sciences tools and perspectives into cancer research," Kuhn said. "The results of this study demonstrate the utility of such an approach, particularly when studies are conducted in a standardized manner from the beginning."
For the nationwide project, nearly 100 investigators from 20 institutions and laboratories conducted their experiments using the same two cell lines, reagents and protocols to assure that results could be compared. The experimental methods ranged from physical measurements of how the cells push on surrounding cells to measurements of gene and protein expression.
"Roughly 20 techniques were used to study the cell lines, enabling identification of a number of unique relationships between observations," Kuhn said.
Read more at Science Daily
New Excavations in Sweden Indicate Use of Fertilizers 5,000 Years Ago
Researchers from the University of Gothenburg, Sweden, have spent many years studying the remains of a Stone Age community in Karleby outside the town of Falköping, Sweden. Among other things, the researchers have tried to identify parts of the inhabitants' diet. They are now looking for evidence that fertilisers were in use as early as the Scandinavian Stone Age, and the results of their first analyses may be exactly what they are looking for.
Using remains of grains and other plants, together with some highly advanced analysis techniques, the two archaeologists Tony Axelsson and Karl-Göran Sjögren have been able to identify parts of the diet of these Stone Age people.
'Our first task was to find so-called macrofossils, such as old weed seeds or pieces of grain. By analysing macrofossils, we can learn a lot about Stone Age farming and how important farming was in relation to livestock ranching,' says Axelsson.
Another aim has been to collect animal bone material -- or simply 5,000-year-old food remains. The researchers know that pieces of bone from cattle, pigs and sheep can be found at the site.
'By studying the levels of isotopes in the bones, we can for example find out where the animals were raised, which in turn can give important information about their role in trade,' says Sjögren.
The results of the first grain analyses have now been presented, and besides revealing that both barley and wheat were farmed at the site, they point to elevated levels of the isotope N15 (nitrogen-15). The elevated levels may indicate that fertilisers were used in the area of Karleby as early as 5,000 years ago.
'We will continue our analyses both in the field and in the lab, and are hoping to find more macrofossils. Hopefully we'll find some weed seeds, as they may help confirm that fertilisers were indeed used since the type of weeds found in a field can signal whether fertilisers or some other method was used,' says Axelsson.
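For readers unfamiliar with the measurement behind these results: nitrogen-isotope work conventionally reports a sample's 15N/14N ratio as a per-mil deviation from the ratio in atmospheric N2, written delta-15N. The notation and reference value below are the standard ones from isotope geochemistry, not figures taken from the Karleby study itself:

```python
R_AIR = 0.0036765  # conventional 15N/14N ratio of atmospheric N2 (the AIR standard)

def delta_15N(r_sample: float) -> float:
    """delta-15N in per-mil: how far a sample's 15N/14N ratio deviates
    from the atmospheric standard, scaled by 1000."""
    return (r_sample / R_AIR - 1.0) * 1000.0

# A sample enriched 1% in 15N relative to air reads as +10 per mil.
# Manured soils and the cereals grown on them typically show clearly
# elevated delta-15N, which is why the Karleby grain values are suggestive.
print(delta_15N(R_AIR * 1.01))
```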
From Science Daily
Apr 25, 2013
Vaterite: Crystal Within a Crystal Helps Resolve an Old Puzzle
With the help of a solitary sea squirt, scientists have resolved the longstanding puzzle of the crystal structure of vaterite, an enigmatic geologic mineral and biomineral.
A form of calcium carbonate, vaterite can be found in Portland cement. Its quick transformation into other more stable forms of calcium carbonate when exposed to water helps make the cement hard and water resistant. As a biomineral, vaterite is found in such things as gallstones, fish otoliths, freshwater pearls, and the healed scars of some mollusk shells.
But unlike most minerals, vaterite has defied every effort to resolve its crystal structure, stymieing scientists for nearly 100 years. The structure of a mineral crystal is a critically important feature and is determined by how atoms are arranged in the crystal. The arrangement of atoms and the resulting crystal structure, for instance, make the difference between graphite and diamond, both forms of pure carbon.
Now, however, a team of scientists from the Technion-Israel Institute of Technology and the University of Wisconsin-Madison has discovered the crystalline secrets of vaterite with the help of a needlelike spicule from a sea squirt found in the Mediterranean and Red Seas. Writing today (April 25, 2013) in the journal Science, a group led by Boaz Pokroy of Technion and Pupa Gilbert of UW-Madison report that vaterite is composed of two different crystal structures that "coexist within a pseudo-single crystal."
"We never envisaged this scenario," explains Pokroy, a professor of materials science and engineering at Technion. "It was a total surprise, but at the same time it made so much sense knowing the years of conflicting results from different groups publishing on the structure of vaterite."
Ferreting out the structure of the mineral, according to Gilbert, a UW-Madison professor of physics, was challenging because of the difficulty of finding large, pure single crystals of vaterite. Enter Herdmania momus, a solitary sea squirt and a member of a large family of filter-feeding marine invertebrates.
"This organism makes the best crystal," avers Gilbert, an expert on crystalline biominerals formed by marine animals. "Geologic vaterite is extremely rare and unstable. The synthetic version is a powder and yields only small crystal grains with poor structure."
In nature, sea squirts and sponges use spicules as a skeleton to provide structural support. The spicule from the sea squirt Herdmania momus is composed of a large, single crystal that Pokroy and Gilbert used to unmask the atomic structure of its constituent biomineral vaterite using state-of-the-art high-resolution transmission electron microscopy at Technion.
"The Herdmania momus spicules were known to be made of vaterite since 1975 when a paper by the late Heinz Lowenstam of Caltech was published in Science," notes Pokroy. "It was clear that this would be the best source of biogenic vaterite and geologic vaterite we could ever find."
The big discovery, say Gilbert and Pokroy, is that vaterite is really two interspersed crystal structures: "It changes the concept of vaterite," Pokroy says, explaining that knowledge of the atomic properties of a crystalline structure enables prediction and explanation of its physical properties. "Now we know that 'vaterite' is not just one structure, but is composed of two different ones."
Read more at Science Daily
Maya Sun Observatory Hints at Origin of Civilization
The oldest ancient Maya ceremonial compound ever discovered in the Central American lowlands dates back 200 years before similar sites pop up elsewhere in the region, archaeologists announced today (April 25). The recently excavated plaza and pyramid would have likely served as a solar observatory for rituals.
The finding at a site called Ceibal suggests that the origins of the Maya civilization are more complex than first believed. Archaeologists hotly debate whether the Maya -- famous for their complex calendar system that spurred apocalypse rumors last year -- developed independently or whether they were largely inspired by an earlier culture known as the Olmec. The new research suggests the answer is neither.
"This major social change happened through interregional interactions," said study researcher Takeshi Inomata, an anthropologist at the University of Arizona. But it doesn't look like the Olmec inspired the Maya, Inomata told reporters. Rather, the entire region went through a cultural shift around 1000 B.C., with all nearby cultures adopting similar architectural and ceremonial styles.
"It's signaling to us that the Maya were not receiving this sophisticated stuff 500 years later from somebody else, but much of the innovation we're seeing out of the whole region may be coming out of Ceibal or a place like Ceibal," said Walter Witschey, an anthropologist at Longwood University in Virginia, who was not involved in the study.
Oldest ritual compound
The finding comes from seven years of archaeological excavations at Ceibal, a site in central Guatemala that was occupied continuously for 2,000 years. Getting to Ceibal's origins was no small feat: The earliest buildings were buried under 23 to 60 feet (7 to 18 meters) of sediment and later construction, said study co-researcher Daniela Triadan, also a University of Arizona anthropologist.
The earliest structures recently discovered include a plaza with a western building and an eastern platform, a pattern seen at later Maya sites and also at the Olmec center of La Venta on the Gulf Coast of what is now Mexico. The researchers used radiocarbon dating to peg the date of construction to about 1000 B.C. This technique analyzes organic materials for carbon-14, an isotope or variation of carbon that decays predictably. As such, carbon-14 acts as a chemical clock archaeologists can use to figure out how long something has been in the ground.
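The "chemical clock" described above is simple exponential decay, so the age follows directly from the fraction of carbon-14 remaining. A minimal sketch using the modern ("Cambridge") half-life; the sample fraction below is invented for illustration:

```python
import math

C14_HALF_LIFE_YR = 5730.0  # modern "Cambridge" half-life of carbon-14, in years

def radiocarbon_age_years(fraction_remaining):
    """Years elapsed for organic material retaining this fraction of its C-14."""
    return C14_HALF_LIFE_YR / math.log(2) * math.log(1.0 / fraction_remaining)

# A sample retaining about 69.6% of its original C-14 is roughly 3,000 years
# old, i.e. around 1000 B.C., the construction date inferred at Ceibal.
print(radiocarbon_age_years(0.696))
```

In practice such raw ages are calibrated against known fluctuations in atmospheric carbon-14, but the exponential clock is the core of the method.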
A construction date of 1000 B.C. makes the Ceibal structures about 200 years older than those at La Venta, meaning the Olmec's construction practices couldn't have inspired the Maya, the researchers report Thursday (April 25) in the journal Science. Instead, it appears that the entire region underwent a shift around this time, with groups adopting each other's architecture and rituals, modifying them and inventing new additions.
"We are saying there was this connection with various groups, but we are saying it was probably not one directional influence," Inomata said.
There was an earlier Olmec center, San Lorenzo, which declined around 1150 B.C., but residents there did not build these distinctive ceremonial structures. By 850 B.C. or 800 B.C., the Maya at Ceibal had renovated their platform into a pyramid, which they continued refining until it reached a height of about 20 to 26 feet (6 to 8 m) by 700 B.C.
Starting a civilization
This early phase of Maya culture occurs before the group developed written language and before any record of their elaborate calendar system, so little is known about their beliefs, Inomata said. But the pyramid-and-plaza area was almost certainly a space for rituals. Among the artifacts found in the plaza are numerous greenstone axes, which seem to have been put there as offerings.
The architectural layout is what's known as a "group-E assemblage," said Witschey. These assemblages appear all over the Maya world and worked as solar observatories. A viewer could stand at the western building and look at the eastern platform or pyramid, which would have posts at each end and at the center. On the summer solstice, the sunrise would occur over the northernmost marker; on the spring and fall equinoxes, it would be right over the center marker; and finally, on the winter solstice, the sun would rise over the southernmost marker, Witschey said.
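The marker geometry Witschey describes follows from the sunrise azimuth, which on a flat horizon is approximately arccos(sin δ / cos φ) east of north for solar declination δ and site latitude φ. A sketch, ignoring refraction and horizon dip, with Ceibal's latitude (roughly 16.5° N) assumed:

```python
import math

def sunrise_azimuth_deg(latitude_deg, declination_deg):
    """Sunrise direction in degrees east of north; flat horizon, no refraction."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

LAT_CEIBAL = 16.5  # approximate latitude of Ceibal, degrees north (assumption)
for label, dec in [("summer solstice", 23.44),
                   ("equinox", 0.0),
                   ("winter solstice", -23.44)]:
    print(label, round(sunrise_azimuth_deg(LAT_CEIBAL, dec), 1))
```

The three azimuths straddle due east (90°), which is why the eastern platform needs a northern, central and southern marker to bracket the year.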
"The first people who settled at Ceibal had, already, a well-developed idea about what a village would look like," Triadan said. "The transition from a mobile hunter-gatherer and horticultural lifestyle to permanently settled agriculturalists was rapid."
It's not clear what might have prompted the lowland Maya to give up their semi-settled life for permanent villages and cities, Inomata said. One possibility is that maize production became more efficient around 1000 B.C. The coastal Olmec people had long been able to grow maize reasonably well, given fertile soil from rivers feeding into the Gulf of Mexico. But the Maya lowlands were less wet and less fertile, with fewer of the fish and fowl that the Olmec depended on to round out their diets. If maize farming became more productive around 1000 B.C., however, it may have prompted the Maya to start staying put.
"At that point, it probably made sense to cut down many forest trees in the Maya lowlands and then commit more strongly to an agricultural way of life," Inomata said.
Members of the research team are currently working on environmental analysis to try to better understand the climate and weather of the area around the time of settlement. What does seem clear, Inomata said, is that Maya civilization didn't have to arise from an earlier, failing civilization.
"This study is not just a study about this specific civilization," he said. "We also want to think about how human society changed and how human society develops."
Read more at Discovery News
What If We Lost Our Moon?
Today I was asked a question that was motivated by the new movie Oblivion: What would happen to the Earth if the moon was destroyed? “I dunno,” I replied, “What does happen when the moon is destroyed?” When the expected why-the-chicken-crossed-the-road response didn’t come, I decided I’d better try and answer the question.
The first thing that came to mind is that it depends on the manner of the moon’s destruction. If it was, say, zapped to bits by a Death Star and those bits still floated in a cluster in the same orbit, I expect they would exert the same gravitational pull on Earth as does the intact moon, and not much would change on Earth.
We’d no longer watch the phases of the moon at night, but see a glittering cloud of debris which would probably be a lot brighter than the full moon, what with all those zillions of little surfaces to reflect sunlight. I know some astronomers who would really hate this new interference with their dark skies.
But if the moon were dragged off and completely removed, there would be none of its mass left to tug gravitationally on the Earth. One of the effects would be that we could throw out tide tables for good.
The ocean tides would still happen, but the bulge of water would follow the sun, so you could expect high tides around noon everywhere, every day. I know some fishermen who would appreciate this.
Since the solid Earth flexes tidally, it makes sense that there might be some internal grumbling when Earth loses the moon. Earthquakes. Maybe a few volcanoes getting rowdy. That kind of stuff. But there’s no reason to worry (or hope) that California will fall into the Pacific. Sorry New York. Sorry Las Vegas.
The greater concern would be in the long term, regarding the Earth’s wobbling spin axis. Right now the spin axis of the Earth very slowly wobbles over 26,000 years, like a slowly wobbling top, because of the tug of the sun. The wobble causes true north to not always point at Polaris, a.k.a. the North Star. Experts agree that the moon acts sort of like a shock absorber to this wobble — keeping it from getting out of hand (see the nitty-gritty details in this SETI talk).
It’s possible that Earth without a moon would wobble wildly, sort of like Mars does. The Red Planet’s wobble is so extreme that it may be the cause of some cycles of climate change there. If the same thing happened here, Earth might wobble so much that seasons would become inhospitably extreme and Earth would be a much less stable and habitable planet.
Without the moon the tilt of the Earth’s axis could go from its current wobble of 22 to 25 degrees to a wild ride of zero to 85 degrees — zero would eliminate seasons, and 85 is basically the Earth leaning over on its side. If this happened, the current crisis we call global warming would be a very pleasant tea party by comparison.
Luckily, the wobbling would not affect things right away but over many millions of years.
Read more at Discovery News
Einstein's Theory Passes Extreme Gravity Test
Every human being who has ever lived could fit inside 1 cubic inch of space at the center of a newly found neutron star, called PSR J0348+0432, located about 7,000 light years from Earth.
But extreme density is not its most unusual feature. The star, which packs about twice the mass of the sun into an object less than 13 miles in diameter, spins around 25 times a second, emitting a steady and detectable radio pulse. Plus, it has a companion close by, a dying star known as a white dwarf, which circles around every 144 minutes.
Assembling the pieces of the system took some time, but when astronomers realized what they had found, an idea took shape: Would the pulsar’s extreme gravity cause the pair to move closer together at the rate predicted by physicist Albert Einstein’s long-standing general relativity theory? Or, was this a situation better explained by other models, such as those that tiptoe into the realm of quantum mechanics where the rules of gravity break down?
“There are many theories about what happens to matter under such extreme conditions,” John Antoniadis, with the Max Planck Institute for Radio Astronomy in Bonn, Germany, told Discovery News.
Making the measurements required patience and extreme precision, but in the end the gravitational impact predicted by Einstein’s theory proved correct. In this case, the loss of energy due to gravitational waves from the system escaping into space slowed the pair’s orbital period by eight-millionths of a second per year.
“It is essential to know the masses of the pulsar and white dwarf to high accuracy because these are the actual inputs that General Relativity or other theories use to predict the orbital decay,” said astronomer Ryan Lynch, with McGill University in Montreal, Canada.
Astronomers also needed a way to precisely measure the pair’s orbital period. The pulsating neutron star served as their clock.
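With the two masses and the orbital period in hand, the general-relativistic prediction follows from the classic Peters (1964) quadrupole formula for a circular binary. A sketch using the article’s 144-minute period and the masses reported in the published study (pulsar about 2.01 solar masses, white dwarf about 0.17), treated here as assumed inputs:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def peters_pdot(m1_kg, m2_kg, period_s):
    """GR orbital-period decay dP/dt (s per s) for a circular binary (Peters 1964)."""
    return (-192 * math.pi / 5
            * (2 * math.pi * G / (C**3 * period_s)) ** (5 / 3)
            * m1_kg * m2_kg / (m1_kg + m2_kg) ** (1 / 3))

pdot = peters_pdot(2.01 * M_SUN, 0.172 * M_SUN, 144 * 60)
print(pdot * 3.156e7 * 1e6)  # microseconds lost per year: roughly -8
```

The output lands on the order of eight microseconds of shrinkage per year, consistent with the measured eight-millionths of a second the article reports.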
“These things came together to make J0348 a power tool,” Lynch wrote in an email to Discovery News.
Scientists are continuing to study the system in hopes of learning more about how it formed. They believe it has been in its present state for about 2 billion years.
Astronomers also remain on the hunt for even more extreme conditions to test Einstein’s theory. In particular, scientists would like to find a pulsar orbiting a black hole, an object even denser than a neutron star, so dense that not even photons of light can escape its gravitational grip.
“This would allow us to test the properties of black holes in great detail and see if they follow Einstein's predictions,” Antoniadis said.
Gravity at the surface of J0348, the heaviest neutron star found so far, is 300 billion times stronger than Earth’s gravity. At its center, 1 billion tons of matter can fit into a volume the size of a sugar cube.
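The sugar-cube figure can be sanity-checked from the numbers in the article alone, treating the star as a uniform sphere (a rough assumption; real neutron stars are centrally condensed, so the central density is higher still):

```python
import math

M_SUN = 1.989e30    # solar mass, kg
MILE_M = 1609.34    # meters per mile

mass_kg = 2.0 * M_SUN            # "about twice the mass of the sun"
radius_m = 13 * MILE_M / 2       # "less than 13 miles in diameter"

mean_density = mass_kg / (4 / 3 * math.pi * radius_m ** 3)  # kg per m^3
sugar_cube_tonnes = mean_density * 1e-6 / 1000  # mass of 1 cm^3, metric tons
print(sugar_cube_tonnes)  # on the order of a billion tons
```

Even this crude average comes out near a billion metric tons per cubic centimeter, so the article’s figure is the right order of magnitude.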
Read more at Discovery News
Apr 24, 2013
Heavier Dino Arms Led Evolution To Birds
Many dinosaurs, like T. rex, had scrawny arms, but paleontologists have discovered that as dinosaurs gradually evolved bigger arms, they began to stand and move more like birds.
The change, documented in the journal Nature, passed on to the descendants of dinosaurs -- birds themselves.
"Our study shows how mass was allocated to the forelimbs, starting in non-flying dinosaurs, to turn them into longer, heavier, more muscular wings that became more and more effective for flapping during flight," co-author John Hutchinson of the Royal Veterinary College’s Structure and Motion Lab told Discovery News.
Hutchinson and his colleagues used digitizing technology to create 3D images of the skeletons of 17 archosaurs, a group that included living crocodiles and birds as well as extinct dinosaurs. The researchers then digitally added flesh around the skeletons to estimate the overall shape of the body as well as the individual body parts, such as the head, forelimbs and tail.
The scientists found that as the arms got bigger, eventually turning into wings in some species, the hind limb posture got progressively more crouched as the center of mass moved forward. Before then, dinosaurs that stood on two legs had a fairly straight posture, similar to that of humans. Maniraptoran dinosaurs, such as Velociraptor, which were in the lineage that evolved into birds, dramatically developed this crouching trait.
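The center-of-mass argument can be illustrated with a toy segment model; the masses and positions below are invented purely to show the effect, not taken from the study:

```python
def center_of_mass_x(segments):
    """Weighted mean position of (mass_kg, x_m) segments; +x is toward the head."""
    total = sum(m for m, _ in segments)
    return sum(m * x for m, x in segments) / total

# head, forelimbs, torso, tail (illustrative values only)
scrawny_arms = [(8.0, 0.9), (1.0, 0.5), (40.0, 0.0), (15.0, -1.2)]
heavy_arms   = [(8.0, 0.9), (6.0, 0.5), (40.0, 0.0), (15.0, -1.2)]

# Heavier forelimbs pull the center of mass forward, toward the head,
# forcing a more crouched hind-limb posture to keep the feet under it.
print(center_of_mass_x(scrawny_arms) < center_of_mass_x(heavy_arms))  # True
```

This is the whole mechanical story in miniature: add mass up front and the balance point migrates forward, so the legs must shift or flex to compensate.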
Posture varies among today’s birds, with larger species like ostriches tending to have straighter legs to more economically support their own weight. But even these birds still have fairly crouched limbs overall.
The discovery calls into question an earlier theory that held dinosaurs became more bird-like when their tails grew shorter and lighter. Hutchinson and his team acknowledge that happened, but only after the other changes took place.
He explained that "the shortening of the tail coincided with a reduction of a once-large tail muscle that connected to the thigh and moved the leg back and forth during locomotion. Birds only have a tiny remnant of that muscle."
Why the changes occurred is not yet clear.
Hutchinson suspects that "based on the anatomy of the dinosaurs that re-enlarged their forelimbs, they were using them for grasping -- various food, each other, whatever -- and maybe for climbing sometimes, and then much later they were used for flight. So the grasping/predatory/feeding/manipulatory functions of the forelimb probably drove that trend early on, then once flight evolved that added more impetus."
The crouched position would seem to be less sturdy, but that’s actually not the case.
"Biomechanical studies suggest that more crouched limbs can make animals more stable, more stealthy and more bouncy -- better for running, perhaps -- among other things," he said. "All birds maintain some sort of crouched posture."
Paul Barrett, a dinosaur researcher in the Department of Paleontology at the Natural History Museum, told Discovery News that the new paper is "a nice piece of work that has tested a previously widely-accepted idea about how birds acquired some of the features we associate with flight."
Read more at Discovery News
'Workers Town' Fed 10,000 Giza Pyramid Builders
The builders of the famous Giza pyramids in Egypt feasted on food from a massive catering-type operation, the remains of which scientists have discovered at a workers' town near the pyramids.
The workers' town is located about 1,300 feet (400 meters) south of the Sphinx, and was used to house workers building the pyramid of pharaoh Menkaure, the third and last pyramid on the Giza plateau. The site is also known by its Arabic name, Heit el-Ghurab, and is sometimes called "the Lost City of the Pyramid Builders."
So far, researchers have discovered a nearby cemetery with bodies of pyramid builders; a corral with possible slaughter areas on the southern edge of workers' town; and piles of animal bones.
Based on animal bone findings, nutritional data, and other discoveries at this workers' town site, the archaeologists estimate that more than 4,000 pounds of meat — from cattle, sheep and goats — were slaughtered every day, on average, to feed the pyramid builders.
This meat-rich diet, along with the availability of medical care (the skeletons of some workers show healed bones), would have been an additional lure for ancient Egyptians to work on the pyramids.
"People were taken care of, and they were well fed when they were down there working, so there would have been an attractiveness to that," said Richard Redding, chief research officer at Ancient Egypt Research Associates (AERA), a group that has been excavating and studying the workers' town site for about 25 years.
"They probably got a much better diet than they got in their village," Redding told LiveScience.
At the workers' town, which was likely occupied for 35 years, researchers have discovered a plethora of animal bones. Although the researchers are still unsure of the exact number of bones, Redding estimates he has identified about 25,000 sheep and goats, 8,000 cattle and 1,000 pig bones, he wrote in a paper published in the book "Proceedings of the 10th Meeting of the ICAZ Working Group 'Archaeozoology of southwest Asia and adjacent Areas'" (Peeters Publishing, 2013).
About 10,000 workers helped build the Menkaure pyramid, with a smaller work force present year-round to cut stones and complete preparation and survey work, the AERA team estimates. This smaller work force would have ramped up for a few months starting around July of each year. "What they would do is, for about four or five months a year, they would bring in a big work force to move blocks, and they would do nothing but move blocks," explained Redding, who is also a research scientist at the Kelsey Museum of Archaeology and a member of the faculty at the University of Michigan.
Needless to say, pyramid building is hard work. The workers would need at least 45 to 50 grams of protein a day, Redding said. Half of this protein would likely come from fish, beans, lentils and other non-meat sources, while the other half would come from sheep, goat and cattle, he estimated. Milk and cheese were probably not consumed due to transportation problems and the cattle's low milk yield during that time, Redding said.
Combining these requirements and other protein sources with the ratio of the bones (and the amount of meat and protein one can get from an animal), Redding determined about 11 cattle and 37 sheep or goats were consumed each day.
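Redding's back-of-the-envelope arithmetic can be sketched in a few lines. The worker count, protein requirement, and meat fraction are the article's figures; the per-animal protein yields and the cattle share of the meat supply are hypothetical placeholders, chosen so the totals land near the reported 11 cattle and 37 sheep or goats per day.

```python
# Sketch of the daily-slaughter estimate described above. Inputs marked
# "assumed" are illustrative placeholders, not figures from the study.
workers = 10_000                  # AERA's estimate of the peak work force
protein_g_per_worker = 47.5       # midpoint of the 45-50 g/day requirement
meat_fraction = 0.5               # half the protein from sheep, goat and cattle

daily_meat_protein_g = workers * protein_g_per_worker * meat_fraction  # 237,500 g

# Assumed usable protein per slaughtered animal (hypothetical values)
protein_per_cow_g = 13_000
protein_per_sheep_goat_g = 2_550
cattle_share = 0.6                # assumed share of meat protein from cattle

cattle_per_day = daily_meat_protein_g * cattle_share / protein_per_cow_g
sheep_goats_per_day = daily_meat_protein_g * (1 - cattle_share) / protein_per_sheep_goat_g

print(round(cattle_per_day), round(sheep_goats_per_day))  # about 11 and 37
```

The real estimate rests on the excavated bone ratios rather than an assumed cattle share, but the structure of the calculation is the same: total protein demand, split by source, divided by per-animal yield.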
This would be in addition to supplying workers with grain, beer and other products.
In order to maintain this level of slaughter, the ancient Egyptians would have needed a herd of 21,900 cattle and 54,750 sheep and goats just to keep up regular delivery to the Giza workers, Redding estimates.
The animals alone would need about 155 square miles (401 square kilometers) of territory to graze. Add in fallow land, waste land, settlements and agricultural land for the herders, and this number triples to about 465 square miles (1,205 square km) of land — an area about the size of modern-day Los Angeles. Even so, this area would take up just about 5 percent of the present-day Nile Delta.
These animals also needed herders — likely one herder for every six cattle and one herder for every 50 sheep or goats, based on ethnographic observations. This brings the total number of herders to 3,650 overall and, once their families are included, 18,980, just under 2 percent of Egypt's estimated population at the time.
These herds would have been spread out in villages across the Nile Delta, then brought to the workers' town at Giza to be slaughtered and cooked. At the end of their lives, the animals were likely kept in the southern part of the town, in a recently unearthed structure that researchers have dubbed the "OK corral." ("OK" stands for "Old Kingdom," the time period in which the Giza pyramids were built.) The structure, which includes two small enclosures where animals may have been slaughtered and a rounded pen, is partly hidden under a modern-day soccer field.
The research revealed interesting details about life in the workers' town. For instance, the overseers — who lived in a structure the archaeologists call the "north street gatehouse" — got to eat the most cattle, and those living in an area called the "galleries," where the everyday workers lived, ate mainly sheep and goats.
Redding said it wasn’t surprising that the overseers preferred to dine on beef, considering it was the most valued meat in ancient Egypt. "Cattle is, of course, the highest-status meat," he said, noting that it appears far more frequently than sheep or goat in tomb scenes, and that pigs never appear in tomb scenes.
The settlement located adjacent to the workers' town, dubbed "eastern town," wasn't as rigidly planned as workers' town, and its residents were eating a considerable number of pigs, the researchers found. Evidence also suggested the people in eastern town were trading with people in workers' town for hippo-tusk fragments.
These finds suggest that the residents of the eastern town were not as directly involved in pyramid building and had a special relationship with the pyramid workers.
"They were not provisioned; they were not given their meat and food every day," like those in the workers' town were, Redding said. "It's more of a typical urban farming settlement, and there was a symbiotic relationship between the two —probably," he said.
Research at workers' town suggests that not all the workers lived there and some may have actually camped out near the Giza pyramids.
"What we think now is — and this is something we're going to be coming out with in the next little while — is that, more likely, it was a large portion of the work force, the more skilled laborers [living at workers' town], and that there were temporary camps up by the pyramids where the temporary workers who came in would be housed," he said.
"They probably (didn’t) need much in the way of housing; they would need more shade than anything else. They wouldn't need any kind of warmth because it wouldn't be winter."
Read more at Discovery News
Why We Don't See Ourselves as Others Do
In a recent Dove ad, an FBI forensic artist sketched a series of women based purely on the way they described themselves and again as others described them. The artist could only hear their voices, not see their faces.
A video about the experiment, which has been viewed on YouTube more than 22 million times and counting, revealed stark differences between the way the women saw themselves and the way others saw them. Across the board, the self-described portraits were the least attractive -- suggesting, according to the Dove marketing team, that we are all more beautiful than we think we are.
So, why can’t we see ourselves as we really are?
Over the course of our lives, experts said, our sense of self-image develops through a complicated interplay between cultural ideals, life experiences and accumulated comments by others. The result is, inevitably, a distortion of reality.
“You could look at a photograph and you would always be able to pick yourself out because we all have internal representations of what we look like,” said David Schlundt, a psychologist at Vanderbilt University in Nashville.
“But all of your experiences, all the teasing you went through as a child, all the self-consciousness you had as a teenager, and all the worrying about whether you would be accepted as good enough or attractive enough are called forth in” how people think of themselves, Schlundt said. “It’s not a perceptual thing. It’s a combination of emotion, meaning and experience that builds up over our lifetime and gets packaged into a self-schema.”
The bulk of research on self-perception has focused on body image rather than facial features, though the two are related. And research suggests that culture plays a major role in what we consider beautiful and how we think we stack up to others, said Rachel Salk, a doctoral student in clinical psychology at the University of Wisconsin, Madison.
Salk studies “fat talk” among women, the practice of criticizing the size and shape of their bodies together with their friends. Fat talk is a widespread phenomenon, especially among certain demographics. In one of her studies, published in 2011 in the journal Psychology of Women Quarterly, 93 percent of women at a Midwestern university disparaged themselves in social situations, often by denying that their friends were fat while claiming to be fat themselves.
Hearing fat talk made women more likely to engage in it, Salk and her colleague Renee Engeln found in another study. And the more that women engaged in this kind of talk, the more dissatisfied they were with their bodies, even though women who did the most fat talk did not weigh the most, and most of the women in the study were of average weight.
“Women are socialized in Western societies to believe their bodies are never thin enough,” Salk said, adding that men are not immune from body dissatisfaction. “We know it is a normative experience. The majority of women feel that way. They almost feel like they should feel that way.”
Media clearly has a profound influence. In a notorious study of Fiji by Harvard Medical School psychologist Anne Becker, the introduction of western television shows to the Pacific island induced a rapid shift from idealizing full-bodied women to a desire for thinness among girls. The result was a dramatic increase in eating disorders.
For most people, image dissatisfaction is manageable. But people can develop unhealthy behaviors, including binge eating or cyclical and ineffective dieting. On the extreme end are eating disorders and psychological problems like body dysmorphic disorder, when perception of a body part becomes so blown out of proportion that it turns into an obsession.
Read more at Discovery News
Faith-Healing Parents Arrested for Death of Second Child
A religious couple already on probation for choosing prayer over medicine in the death of their toddler son may be facing similar charges in the death of their newest child.
According to an article on ABC News, “Herbert and Catherine Schaible belong to a fundamentalist Christian church that believes in faith healing. They lost their 8-month-old son, Brandon, last week after he suffered from diarrhea and breathing problems for at least a week, and stopped eating. Four years ago, another son died from bacterial pneumonia.”
That boy, a two-year-old named Kent, died after the Schaibles refused to take him to the doctor when he became sick, relying instead on faith and prayer. The couple were convicted of involuntary manslaughter and sentenced to 10 years on probation.
In the latest tragedy, they told police that they prayed for God to heal Brandon instead of taking him to a doctor when he fell ill. Officials said that an autopsy will be performed on the child, and depending on those results the parents may be charged with a crime.
The couple attend, and have taught at, Philadelphia’s First Century Gospel Church, which cites Biblical scripture favoring prayer and faith over modern medicine. Other religious groups, including the Followers of Christ Church, Christian Science, and Scientology, have doctrines that prohibit or discourage modern medicine and therapeutic interventions.
This is not the first time that parents have gone on trial for child abuse or neglect for refusing their children medical attention. Though freedom of religion is guaranteed by the First Amendment to the U.S. Constitution, the practice of that religion does not give followers license to break the law — especially when the result is injury or death to a child.
Proving The Power of Prayer?
If prayer and faith healing had a proven track record of success, it might be argued that they provide a legitimate, proven alternative to medical care. Many people may be surprised to find that intercessory prayer (petitioning a higher power to heal someone else) has been tested. Several studies have been done to see if people who are prayed for recover any faster, or are cured of disease at higher rates, than those who are not prayed for.
In one of the largest studies ever conducted, researchers at six major medical centers including Harvard and the Mayo Clinic looked at patient outcomes of prayer. The research, “Study of the Therapeutic Effects of Intercessory Prayer ‘STEP’ in Cardiac Bypass Patients,” was published in the American Heart Journal and conducted over the course of a decade.
Nearly 2,000 cardiac surgery patients were randomly assigned to one of three conditions: one group was prayed for after being told they’d be prayed for; another group was prayed for after being told they may or may not be prayed for; and the third was not prayed for after being told the same thing. The results: the group that was prayed for did no better than the group that wasn’t prayed for. Intercessory prayer had no beneficial effect at all on recovery time, death rate, or other medical factors.
Read more at Discovery News
Apr 23, 2013
Nanowires Grown On Graphene Have Surprising Structure
When a team of University of Illinois engineers set out to grow nanowires of a compound semiconductor on top of a sheet of graphene, they did not expect to discover a new paradigm of epitaxy.
The self-assembled wires have a core of one composition and an outer layer of another, a desired trait for many advanced electronics applications. The team, led by Xiuling Li in collaboration with Eric Pop and Joseph Lyding, all professors of electrical and computer engineering, published its findings in the journal Nano Letters.
Nanowires, tiny strings of semiconductor material, have great potential for applications in transistors, solar cells, lasers, sensors and more.
"Nanowires are really the major building blocks of future nano-devices," said postdoctoral researcher Parsian Mohseni, first author of the study. "Nanowires are components that can be used, based on what material you grow them out of, for any functional electronics application."
Li's group uses a method called van der Waals epitaxy to grow nanowires from the bottom up on a flat substrate of semiconductor materials, such as silicon. The nanowires are made of a class of materials called III-V (three-five) compound semiconductors, which hold particular promise for applications involving light, such as solar cells or lasers.
The group previously reported growing III-V nanowires on silicon. While silicon is the most widely used material in devices, it has a number of shortcomings. Now, the group has grown nanowires of the material indium gallium arsenide (InGaAs) on a sheet of graphene, a 1-atom-thick sheet of carbon with exceptional physical and conductive properties.
Thanks to its thinness, graphene is flexible, while silicon is rigid and brittle. It also conducts like a metal, allowing for direct electrical contact to the nanowires. Furthermore, it is inexpensive, flaked off from a block of graphite or grown from carbon gases.
"One of the reasons we want to grow on graphene is to stay away from thick and expensive substrates," Mohseni said. "About 80 percent of the manufacturing cost of a conventional solar cell comes from the substrate itself. We've done away with that by just using graphene. Not only are there inherent cost benefits, we're also introducing functionality that a typical substrate doesn't have."
The researchers pump gases containing gallium, indium and arsenic into a chamber with a graphene sheet. The nanowires self-assemble, growing by themselves into a dense carpet of vertical wires across the surface of the graphene. Other groups have grown nanowires on graphene with compound semiconductors that only have two elements, but by using three elements, the Illinois group made a unique finding: The InGaAs wires grown on graphene spontaneously segregate into an indium arsenide (InAs) core with an InGaAs shell around the outside of the wire.
"This is unexpected," Li said. "A lot of devices require a core-shell architecture. Normally you grow the core in one growth condition and change conditions to grow the shell on the outside. This is spontaneous, done in one step. The other good thing is that since it's a spontaneous segregation, it produces a perfect interface."
So what causes this spontaneous core-shell structure? By coincidence, the distance between atoms in a crystal of InAs is nearly the same as the distance between whole numbers of carbon atoms in a sheet of graphene. So, when the gases are piped into the chamber and the material begins to crystallize, InAs settles into place on the graphene, a near-perfect fit, while the gallium compound settles on the outside of the wires. This was unexpected, because normally, with van der Waals epitaxy, the respective crystal structures of the material and the substrate are not supposed to matter.
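The near-coincidence described above can be checked with a rough back-of-the-envelope calculation. The lattice values below are textbook approximations, and the in-plane spacing used is an assumption on my part; the actual crystallographic spacings involved in the paper's argument may differ.

```python
import math

# Illustrative check of the near-lattice-match described above.
# Values are textbook approximations, not figures from the paper.
A_INAS = 6.0583      # InAs zinc-blende lattice constant, angstroms
CC_GRAPHENE = 1.42   # graphene carbon-carbon bond length, angstroms

# Nearest-neighbour spacing of In (or As) atoms in the InAs (111) plane
inas_spacing = A_INAS / math.sqrt(2)

# Compare against the nearest whole-number run of C-C bonds
n = round(inas_spacing / CC_GRAPHENE)
mismatch = abs(inas_spacing - n * CC_GRAPHENE) / (n * CC_GRAPHENE)

print(f"InAs in-plane spacing: {inas_spacing:.3f} A")
print(f"vs {n} x C-C bonds = {n * CC_GRAPHENE:.3f} A  (mismatch {mismatch:.2%})")
```

Under these assumed values the InAs spacing lands within about half a percent of three carbon-carbon bond lengths, the kind of "whole numbers of carbon atoms" fit the researchers describe.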
"We didn't expect it, but once we saw it, it made sense," Mohseni said.
In addition, by tuning the ratio of gallium to indium in the semiconductor cocktail, the researchers can tune the optical and conductive properties of the nanowires.
Next, Li's group plans to make solar cells and other optoelectronic devices with their graphene-grown nanowires. Thanks to both the wires' ternary composition and graphene's flexibility and conductivity, Li hopes to integrate the wires in a broad spectrum of applications.
Read more at Science Daily
Ancient DNA Reveals Europe's Dynamic Genetic History
Ancient DNA recovered from a series of skeletons in central Germany up to 7,500 years old has been used to reconstruct the first detailed genetic history of modern Europe.
The study, published today in Nature Communications, reveals a dramatic series of events including major migrations from both Western Europe and Eurasia, and signs of an unexplained genetic turnover about 4,000 to 5,000 years ago.
The research was performed at the University of Adelaide's Australian Centre for Ancient DNA (ACAD). Researchers used DNA extracted from bone and teeth samples from prehistoric human skeletons to sequence a group of maternal genetic lineages that are now carried by up to 45% of Europeans.
The international team also included the University of Mainz in Germany and the National Geographic Society's Genographic Project.
"This is the first high-resolution genetic record of these lineages through time, and it is fascinating that we can directly observe both human DNA evolving in 'real-time', and the dramatic population changes that have taken place in Europe," says joint lead author Dr Wolfgang Haak of ACAD.
"We can follow over 4,000 years of prehistory, from the earliest farmers through the early Bronze Age to modern times."
"The record of this maternally inherited genetic group, called Haplogroup H, shows that the first farmers in Central Europe resulted from a wholesale cultural and genetic input via migration, beginning in Turkey and the Near East where farming originated and arriving in Germany around 7,500 years ago," says joint lead author Dr Paul Brotherton, formerly at ACAD and now at the University of Huddersfield, UK.
ACAD Director Professor Alan Cooper says: "What is intriguing is that the genetic markers of this first pan-European culture, which was clearly very successful, were then suddenly replaced around 4,500 years ago, and we don't know why. Something major happened, and the hunt is now on to find out what that was."
The team developed new advances in molecular biology to sequence entire mitochondrial genomes from the ancient skeletons. This is the first ancient population study using a large number of mitochondrial genomes.
"We have established that the genetic foundations for modern Europe were only established in the Mid-Neolithic, after this major genetic transition around 4,000 years ago," says Dr Haak. "This genetic diversity was then modified further by a series of incoming and expanding cultures from Iberia and Eastern Europe through the Late Neolithic."
"The expansion of the Bell Beaker culture (named after their pots) appears to have been a key event, emerging in Iberia around 2800 BC and arriving in Germany several centuries later," says Dr Brotherton. "This is a very interesting group as they have been linked to the expansion of Celtic languages along the Atlantic coast and into central Europe."
"These well-dated ancient genetic sequences provide a unique opportunity to investigate the demographic history of Europe," says Professor Cooper.
"We can not only estimate population sizes but also accurately determine the evolutionary rate of the sequences, providing a far more accurate timescale of significant events in recent human evolution."
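The "evolutionary rate of the sequences" Professor Cooper mentions is the basis of molecular-clock dating. A minimal sketch follows; the substitution rate and difference count are invented for illustration and are not the study's values.

```python
# Toy molecular-clock estimate: given a substitution rate and an observed
# number of differences between two mitochondrial sequences, estimate the
# time back to their common ancestor.
# The rate and difference count are assumptions for illustration only.

RATE = 2.5e-8          # substitutions per site per year (assumed)
GENOME_SITES = 16569   # human mitochondrial genome length, base pairs
DIFFERENCES = 25       # differences between two sequences (assumed)

# Differences accumulate along both lineages, hence the factor of 2
divergence_years = DIFFERENCES / (2 * RATE * GENOME_SITES)
print(f"Estimated divergence: ~{divergence_years:,.0f} years")
```

Directly dated ancient sequences let researchers calibrate `RATE` against known ages instead of assuming it, which is why the team describes the timescale as "far more accurate."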
The team has been working closely on the genetic prehistory of Europeans for the past 7-8 years.
Professor Kurt Alt (University of Mainz) says: "This work shows the power of archaeology and ancient DNA working together to reconstruct human evolutionary history through time. We are currently expanding this approach to other transects across Europe."
Read more at Science Daily
Gold-Adorned Skeleton Could Be First Windsor Queen
British archaeologists have unearthed the remains of what might be the first queen of Windsor in a 4,400-year-old female skeleton adorned with some of Britain’s earliest gold jewels. The find could predate Windsor’s royal connection by more than three millennia.
Archaeologists discovered the remains at Kingsmead Quarry, not far from Windsor Castle, which, since the time of Henry I (1068 – 1135), has been associated with British royals.
The burial was dated to the Copper Age, between 2,200 and 2,500 B.C. — just a century or two after the construction of Stonehenge, which stands about 60 miles to the south-west.
The bones, which were almost destroyed by the acidic soil around the grave, indicate the individual was a woman aged at least 35. At the time of her burial she wore a necklace of tubular sheet gold beads and black disks of lignite (a form of coal).
In a row along the body, the archaeologists found a number of pierced amber beads, possibly buttons for her long-vanished woven wool clothes. A number of black beads found near her hand suggest she wore a bracelet.
Interestingly, a large drinking vessel lay by the woman’s hip. Decorated with a comb-like stamp, the fine pottery, known to archaeologists as a beaker, linked the burial to communities which lived across Europe at around 2,500 B.C.
“Beaker graves of this date are almost unknown in southeast England and only a small number of them, and indeed continental Europe, contain gold ornaments,” Stuart Needham, an expert in Copper Age metalwork, said.
According to Gareth Chaffey of Wessex Archaeology, the archaeologist in charge of the excavation, the woman was probably “an important person in her society, perhaps holding some standing which gave her access to prestigious, rare and exotic items.”
“She could have been a leader, a person with power and authority, or possibly part of an elite family — perhaps a princess or queen,” Chaffey said.
The bones are too decayed to allow DNA and carbon-14 dating. Experts are now working to determine the origins of the jewellery.
Lead isotope analysis suggests that the gold probably came from deposits located in southeast Ireland and southern Britain. It’s likely the lignite beads came from the east of England, while the amber may have come from as far away as the Baltic.
According to Wessex Archaeology, an “extensive prehistoric landscape” is still buried beneath the quarry and surrounding areas on the edge of West London and East Berkshire.
Read more at Discovery News
Oldest Temple in Mexican Valley Hints at Human Sacrifice
A newly discovered temple complex in the Valley of Oaxaca, Mexico, reveals hints of a specialized hierarchy of priests -- who may have committed human sacrifice.
The evidence of such sacrifice is far from conclusive, but researchers did uncover a human tooth and part of what may be a human limb bone from a temple room scattered with animal sacrifice remains and obsidian blades. The temple dates back to 300 B.C. or so, when it was in use by the Zapotec civilization of what is now Oaxaca.
Archaeologists have been excavating a site in the valley called El Palenque for years. The site is the center of what was once an independent mini-state. Between 1997 and 2000, the researchers found and studied the remains of a 9,150-square-foot (850 square meters) palace complex complete with a plaza on the north side of the site. Radiocarbon dating and copious ash reveal that the palace burned down sometime around 60 B.C. or so.
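The radiocarbon dating mentioned above rests on the exponential decay of carbon-14. A minimal sketch of the age calculation follows; the measured fraction used is invented for illustration, since the article does not report the El Palenque measurements.

```python
import math

# Radiocarbon age from the surviving fraction of carbon-14.
# The 0.78 fraction is an assumption for illustration, not a value
# reported for the El Palenque samples.
HALF_LIFE = 5730.0        # carbon-14 half-life, years
measured_fraction = 0.78  # remaining C-14 relative to a living sample

# Solve fraction = 0.5 ** (t / HALF_LIFE) for t
age_years = HALF_LIFE * math.log(measured_fraction) / math.log(0.5)
print(f"Estimated age: ~{age_years:,.0f} years")
```

A surviving fraction of roughly 0.78 corresponds to an age of about two millennia, which is the right ballpark for a structure that burned around 60 B.C.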
Now, the archaeologists have unearthed an even larger complex of buildings on the east side of El Palenque. The walled-off area appears to be a temple complex, consisting of a main temple flanked by two smaller temple buildings. There are also at least two residences, probably for priests, as well as a number of fireboxes where offerings may have been made.
Sacrificial site
The whole complex measures almost 54,000 square feet (5,000 square meters), and the main temple alone has a 4,090-square-foot (380 square meters) footprint.
The main room of the main temple was scattered with artifacts, including shell, mica and alabaster ornaments, researchers report Monday (April 22) in the journal Proceedings of the National Academy of Sciences. The archaeologists also found ceramic vessels and whistles, as well as incense braziers. Obsidian blades and lances suggest that the priests engaged in ritual bloodletting and animal sacrifice, as did the remains of turkeys, doves and other animals in the temple hearth.
It was in this room that the human tooth and possible human limb bone were discovered, though researchers can't say for certain whether those bones were a sign of human sacrifice at the temple.
The main temple also contained a kitchen much larger than those found in households in El Palenque, suggesting that cooks whipped up meals for large groups in this spot. Behind the temple were several cell-like rooms, perhaps places for priests-in-training or low-ranking priests to sleep.
Hierarchy of priests
Also behind the temple, archaeologists turned up two buildings that appear to be priestly residences. These buildings were earthen-floored and thick-walled, with firepits inside that are characteristic of El Palenque homes. Unlike other homes in the city, though, these probably priestly digs revealed few utilitarian jars, griddles and grinding stones — but there were many serving plates. The artifacts suggest that priests didn't cook their own food, but were served meals in their quarters by temple servants or staff.
Like the palace, the temple complex has been burned and appears to have fallen out of use by the end of the first century B.C. or the first century A.D., making it the oldest temple discovered yet in the Valley of Oaxaca. Among the remaining mysteries of the site is a hastily buried body found in one of the temple's fireboxes.
Read more at Discovery News
Apr 22, 2013
Migraine Treatments May Be Targeting Wrong Source
The pain of migraine headaches might not be caused by expanded blood vessels in the brain, as previously thought. Instead, the real culprit may be overactive pain-signal firing in brain cells, new research suggests.
Moreover, there are several treatments being developed that could address this signaling and, in turn, help treat migraine pain. However, further studies are needed to confirm that the overactive signaling is a cause of migraine pain.
"Many people are nonresponders to existing drugs," said study co-author Dr. Messoud Ashina, a physician at the University of Copenhagen in Denmark. "So there is room for improvement -- that's why we need the new drugs."
Ashina and his co-authors have received money from pharmaceutical companies. The findings, which were published online April 9 in the journal Lancet Neurology, come from studying the brain scans of patients as they were suffering from a migraine.
Mysterious phenomenon
About one in 10 Americans suffer from migraines, with women more likely to be affected. The headaches typically last from four to 72 hours, and are often accompanied by nausea, pain, and sensitivity to light and sound. Past studies showed an increased risk of stroke and depression in women who suffer from migraines.
The most common treatments for migraines, a class of medicines called triptans, were thought to relieve migraine pain by constricting blood vessels in the brain. But past research suggested only 30 percent of people suffering from a migraine are pain-free two hours after taking a triptan, Ashina said.
In recent decades, researchers have wondered whether pain signals from sensory neurons in the brain may be the true cause of migraine pain.
Brain pain
To study migraines in real time, Ashina and his colleagues looked at 19 women who were experiencing migraines. Using a technique called magnetic resonance angiography, the scientists measured the blood flow in the women's brains, and found that only a few blood vessels inside the brain were modestly dilated.
Then, they gave patients a triptan, and their average pain levels reduced dramatically.
However, the triptans constricted blood vessels on the outside of the brain, not the vessels that were mildly dilated during the migraine. That suggests that the triptans relieved migraine pain not by constricting blood vessels, but by another mechanism -- possibly by quieting pain signals from neurons.
Read more at Discovery News
66 Ancient Skeletons Found in Indonesian Cave
Talk about your archaeological jackpots: Researchers in Indonesia have reportedly discovered the 3,000-year-old remains of 66 people in a cave in Sumatra.
"Sixty-six is very strange," Truman Simanjuntak of Jakarta's National Research and Development Center for Archaeology said in a statement. He and his colleagues have never before found that many remains in a single cave, Simanjuntak added.
The cave is known as Harimaru or Tiger Cave, and also contains chicken, dog and pig remains. Thousands of years ago, the Tiger Cave and other limestone caverns nearby were occupied by Indonesia's first farmers. They used the caves to bury their dead, explaining the 3,000-year-old cemetery unearthed by Simanjuntak's team. The ancient farmers also manufactured tools in the caves.
And they apparently made art. Tiger Cave contains the first evidence of rock art from Sumatra, Simanjuntak said. And the cave is only partially excavated.
"There are still occupation traces deeper and deeper in the cave, where we have not excavated yet," he said. "So it means the cave is very promising."
The dates of the discoveries so far peg the cave's occupation to a time when the Earth's entire population was only about 50 million. The Zhou dynasty ruled China, and ancient Egypt's prosperous New Kingdom era, during which Tutankhamun reigned, was nearing its end.
Read more at Discovery News
"Sixty-six is very strange," Truman Simanjuntak of Jakarta's National Research and Development Center for Archaeology said in a statement. He and his colleagues have never before found that many remains in a single cave, Simanjuntak added.
The cave is known as Harimaru or Tiger Cave, and also contains chicken, dog and pig remains. Thousands of years ago, the Tiger Cave and other limestone caverns nearby were occupied by Indonesia's first farmers. They used the caves to bury their dead, explaining the 3,000-year-old cemetery unearthed by Simanjuntak's team. The ancient farmers also manufactured tools in the caves.
And they apparently made art. Tiger Cave contains the first evidence of rock art from Sumatra, Simanjuntak said. And the cave is only partially excavated.
"There are still occupation traces deeper and deeper in the cave, where we have not excavated yet," he said. "So it means the cave is very promising."
The dates of the discoveries so far peg the cave's occupation to a time when the Earth's entire population was only about 50 million. The Zhou dynasty ruled China, and ancient Egypt's prosperous New Kingdom era, during which Tutankamun riegned, was nearing its end.
Read more at Discovery News
Star Plays Dizzying Dance of Doom with Black Hole
Black holes are probably among the scariest things in the universe, with gravitational forces powerful enough to warp the fabric of spacetime itself. Red dwarfs, on the other hand, are amongst the smallest of stars, shining dimly in the darkness — not exactly the sort of pairing you might expect to make dancing partners.
All the same, that’s exactly the pairing you’ll find in a star system known as MAXI J1659-152. This system contains just such an odd couple, locked in a tight orbit where a red dwarf is speeding around its heavier companion at an astonishing two million kilometers per hour (1.2 million mph)!
So fast is this orbit that it’s set a record for the fastest binary orbit ever discovered. The reason for this blistering speed is how close these two objects are to each other; the star is just one million kilometers (600 thousand miles) away from the black hole. That’s a little under three times the distance from Earth to the moon which, I can safely say, is a lot closer to a black hole than I’d ever like to be!
Being so close together, this doomed red dwarf is slowly being devoured as material is sucked from its surface into an accretion disk that is feeding the black hole. The system was first discovered by NASA’s Swift space telescope in 2010, which mistook it for a gamma ray burst. It wasn’t until later that observations by the Japanese MAXI telescope aboard the ISS revealed a bright x-ray source in the same spot.
X-ray sources in astronomy are a signpost, pointing to some of the most energetic things we can see in the universe around us. A black hole certainly qualifies. As material is torn away from the surface of this star into the black hole’s disk, it gets caught up in intense magnetic and gravitational fields. This heats it to incredible temperatures, eventually tearing apart the very atoms that once belonged to the red dwarf, causing that disk to shine brightly with x-rays.
Of course, the black hole itself emits no detectable light. It will hungrily consume anything which falls into its event horizon, including photons. The disk around a black hole, on the other hand, is actually able to convert matter into energy more efficiently than any star can, which makes it a tremendously powerful source of energy.
In the MAXI J1659-152 system, what we see is essentially a massive and slightly frightening version of the transiting exoplanets that NASA’s Kepler telescope has been hunting for. The black hole’s disk is almost edge-on from our point of view, and when the red dwarf passes between us and that disk, there’s a slight dip in the x-rays we can see. ESA’s XMM-Newton x-ray observatory picked up on this. Staring at this black hole for an uninterrupted 14.5 hours, it saw seven of these dips, allowing the red dwarf’s orbital period to be measured at a record-breaking 2.4 hours (the previous record holder was the Swift J1753.5–0127 system, with an orbit of 3.2 hours).
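The quoted figures hang together with a little back-of-envelope arithmetic. Here is a quick sanity check (a sketch only, assuming a roughly circular orbit and using the speed and period reported above; the implied orbital radius comes out somewhat smaller than the quoted one-million-kilometer separation, as expected, since the star actually circles the system's center of mass):

```python
import math

# Figures reported for MAXI J1659-152
period_hours = 2.4        # measured orbital period
speed_km_per_h = 2.0e6    # reported orbital speed of the red dwarf

# Distance covered in one orbit, and the orbital radius it implies
circumference_km = speed_km_per_h * period_hours   # 4.8 million km
radius_km = circumference_km / (2 * math.pi)       # roughly 760,000 km

earth_moon_km = 384_400   # mean Earth-moon distance
print(f"implied orbital radius: {radius_km:,.0f} km")
print(f"about {radius_km / earth_moon_km:.1f} Earth-moon distances")
```

So a 2.4-hour period at two million km/h traces a circle of radius about 760,000 km, consistent in order of magnitude with the one-million-kilometer star-to-black-hole separation quoted earlier.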
Oddly, the couple are high above the plane of the Milky Way, a trait that only two other black hole binary systems share. The reason for this weird placement may be down to how the black hole formed.
Stellar-mass black holes are created in supernovae, when massive stars explode. The star which created this one would have been huge: a giant blue star, probably over 20 times as massive as the sun. But the most massive stars are also the shortest lived. This luminous blue star would have greedily used up all of its fuel before swelling into a red supergiant and eventually exploding.
Read more at Discovery News
How to Form a Star Near a Black Hole
Black holes get a bad rap. We tend to think of them as destructive, all-encompassing destroyers of stuff, consuming all in their way to add to their own bloated mass. Okay, maybe I’m being a wee bit dramatic, but that’s the kind of pop culture picture of one, right? Scientists have also wondered if it’s even possible to create stars close to a supermassive black hole. Now, we have evidence that it is possible, and it’s happening in the center of our galaxy.
Astronomers have a rough idea of how stars form, even though many of the details are still open questions. A region of interstellar gas gets dense enough through some process to begin forming a star. The material surrounding a black hole, however, suffers intense shearing forces from rotating around such a dense object. It is hard to imagine how a clump of gas could hold itself together in such close proximity to a gravitational behemoth.
Nevertheless, evidence of star formation has been detected within two light-years of the supermassive black hole at the center of the Milky Way Galaxy, affectionately known as Sagittarius A* (pronounced “A-star,” abbreviated Sgr A*). Using the fledgling Atacama Large Millimeter/Submillimeter Array (ALMA) when it was just 12 antennas and the Combined Array for Millimeter-Wave Astronomy (CARMA), a group of astronomers led by Farhad Yusef-Zadeh of Northwestern University detected emission of silicon monoxide (SiO) near the supermassive black hole.
Finding the molecular clouds of SiO itself isn’t a definite tracer of star formation. However, how the SiO is moving through space is important. The emission lines of the SiO show the clumps to be moving at high velocity in a way that indicates an outflow of material. Such outflows are commonly seen in star forming regions as not all the material falling into a protostar makes it in the end. Some of it is simply blown away in the energetic process. The brightness of the SiO emission is also consistent with star formation scenarios.
Once the team had this evidence to start with, they looked for where these baby stars might actually reside. Using the infrared capabilities of the Very Large Telescope in Chile, two candidate young stellar objects were noticed in the vicinity. Infrared light can penetrate the dust and gas that obscures the visible light coming from forming stars.
How could these stars have formed in such a turbulent environment so close to the several-million-solar-mass black hole? Two scenarios are put forth. In one, the radiation pressure from the intense ultraviolet coming from the hot stars in the region around Sgr A* compresses the gas to the point at which it can collapse to form a star. That’s right, the very pressure of the photons of light from other stars helps to make the gas form new stars. Another mechanism, amusingly dubbed “clump-clump collision,” involves denser regions of gas smacking into each other, thus causing a high enough density for star formation.
Read more at Discovery News
Apr 21, 2013
Tributes and High Security at London Marathon
Security was tight for the London Marathon on Sunday as thousands of runners paid solemn tribute to the victims of the deadly bomb attacks at the Boston Marathon barely a week ago.
A sombre 30-second period of silence was observed ahead of the start of the elite men's and mass races in Greenwich Park on the south bank of the River Thames.
"We will join together in silence to remember our friends and colleagues, for whom a day of joy turned into a day of sadness," the runners were told over the public address system.
Olympic athletes and around 35,000 amateur competitors, many sporting black ribbons in memory of the Boston victims, bowed their heads in memory before streaming on to the streets of the British capital beneath bright blue skies.
One runner held aloft a banner bearing the words "For Boston".
Police numbers along the 26.2-mile (42.2-kilometer) course were increased by 40 percent after the twin bomb blasts during the Boston race on Monday, which left three people dead and around 180 injured.
As well as the black ribbon tribute to those affected by the Boston horror, a social media campaign has encouraged runners to place their hands on their hearts as they cross the finish line.
Organizers have also announced that £2 ($3) for every finisher will be donated to a fund for the Boston victims.
"Marathons are all about people coming together," Keith Luxon, an amateur runner who competed in the Boston Marathon and was also due to race in London, told the BBC.
"Part of that was ruined in Boston, and it's up to us to put some of that back."
The events in Boston prompted organizers of the London race to undertake a security review, but they said there was no known threat to the event.
"Obviously with the shocking pictures (from Boston), it galvanized us into having a look at our security measures," said London Marathon chief executive Nick Bitel.
"We've had an amazing response from the police, from the mayor (Boris Johnson), and from the other security agencies, but also from the wider community.
"We've had to make a few changes and put some new security measures in, but we've had such an amazing response from the runners and the public."
The marathon course snakes past famous London landmarks including Tower Bridge, St Paul's Cathedral and the Houses of Parliament before finishing in front of Queen Elizabeth II's Buckingham Palace residence.
Much of the attention is focused on British Olympic star Mo Farah, winner of the 5,000 meters and 10,000 meters at last year's London Games, even though he will only complete half the marathon distance.
Both the men's and women's races boasted star-studded fields, with defending champion Wilson Kipsang of Kenya starting as the favorite in the men's event.
Fellow Kenyans Emmanuel Mutai, the course record-holder, Geoffrey Mutai, world record-holder Patrick Makau and three-time winner Martin Lel will provide competition, along with London 2012 Olympic champion Stephen Kiprotich of Uganda.
In the women's race, London Olympic champion Tiki Gelana of Ethiopia led a field of four athletes who have run under two hours and 20 minutes, along with Yoko Shibui of Japan and Kenya's Florence and Edna Kiplagat.
There was an early scare for Gelana when she fell after colliding with a men's wheelchair athlete at a drinks station, but she picked herself up and caught up with the leading pack.
Prince Harry, third in line to the British throne, is scheduled to present medals to the leading finishers in the men's, women's and wheelchair races.
Read more at Discovery News
Large Hadron Collider scientists develop new cancer-killing radiotherapy
Scientists working at CERN, the home of the Large Hadron Collider, are developing new types of radiotherapy that can destroy tumours while damaging less of the surrounding tissue, helping to reduce side effects.
They have begun a five-year research project to test different beams of ions – electrically charged atoms – for their ability to kill cancer cells.
Engineers are carrying out a £14 million upgrade on one of the particle accelerators linked to the LHC so that it can carry out medical research.
Physicists behind the project hope it will allow them to produce more effective treatments that can be afforded by the NHS.
Dr Stephen Myers, director of accelerator technology at CERN, said they were already working with a British company to build smaller versions of the 250-foot-long ring needed to produce the particles, so that they can be installed in hospitals.
He said: “We are hoping to develop new types of cancer therapy by testing all the different types of ions – like oxygen or carbon – to see which is the best.
“Current radiotherapies cause collateral damage to the surrounding tissue and that makes it difficult to treat some types of cancer, like eye melanomas or those that are hard to reach.
“Low energy ion beams can cause less damage as the destruction of the cells is dependent on the energy of the beam and it can be focused very precisely onto a tumour.
“This can allow patients to recover faster and surgeons can destroy more of the tumour, so survival rates are much better.
“We would like to see if we can bring everything down to a regular size and put one in every teaching hospital in Europe.”
Current radiotherapy techniques use X-rays and electron beams that are fired into the body to kill cancer cells, but can cause a lot of damage to healthy tissues, bringing unpleasant side effects.
A new type of radiotherapy which uses beams of particles known as protons is already starting to be used and has been found to produce better results.
The protons can be focused with greater accuracy than current radiotherapy methods, meaning that doctors can target more of the cancer without damaging the surrounding tissue.
However, proton beam therapy, as it is known, is available in just 32 hospitals around the world and just one in the UK – the Clatterbridge Cancer Centre, where it is used to treat eye tumours.
Two more proton beam therapy centres are planned in Britain – with one due to be built in Manchester and another in London.
However, it costs hospitals £120 million for a proton beam therapy machine and treating a patient can cost between £90,000 and £120,000 each.
Scientists at CERN are now working with London-based company Advanced Oncotherapy to develop smaller and cheaper proton beam devices so that they can be more widely available.
Dr Michael Sinclair, the firm’s chief executive, hopes to install at least 10 new machines within the next five years.
He said that it could mean 12,000 cancer patients could receive the new type of treatment.
He said: “Proton beam therapy offers a significant improvement over conventional radiotherapy for patients with cancer, but so far the big problem has always been the cost.
“The machine developed by CERN has significant clinical advantages and will cost a third of equivalent equipment that is currently available.
“This is a game-changer – bringing a more effective cancer treatment to the masses.”
Britain contributes around £100 million a year to CERN, with the bulk of that being used to pay for the Large Hadron Collider.
Earlier this year, scientists announced that they had discovered a new particle believed to be a Higgs boson – the elusive so-called “God particle” that is thought to give other subatomic particles mass.
Read more at The Telegraph