Last year, a team of University of Pennsylvania physicists showed how to undo the "coffee-ring effect," a commonplace occurrence when drops of liquid with suspended particles dry, leaving a ring-shaped stain at the drop's edges. Now the team is exploring how those particles stack up as they reach the drop's edge, and it has discovered that particles produce smoother or rougher deposition profiles there depending on their shape.
These resultant growth profiles offer tests of deep mathematical ideas about growing interfaces and are potentially relevant for many commercial and industrial coating applications.
The new research was conducted by the members of the original team: professor Arjun Yodh, director of the Laboratory for Research on the Structure of Matter; doctoral candidates Peter Yunker and Matthew Lohr; and postdoctoral fellow Tim Still, all of the Department of Physics and Astronomy in Penn's School of Arts and Sciences. New to the collaboration were professor D.J. Durian of the Department of Physics and Astronomy and Alexei Borodin, professor of mathematics at the Massachusetts Institute of Technology.
Their study was published in the journal Physical Review Letters.
In the "coffee-ring effect," drop edges are "pinned" to a surface, meaning that when the liquid evaporates, the drop can't shrink in circumference and particles are convectively pushed to its edges. The Penn team's earlier research showed that this phenomenon was highly dependent on particle shape. Spherical particles could flow under or over each other to aggregate on the edges, but ellipsoidal particles formed loosely packed logjams as they interacted with one another on the surface of the drop.
MIT's Borodin saw the Penn team's earlier experimental videos online, and they reminded him of analytical and simulation work he and others in the math community had performed on interfacial growth processes. These problems had some similarity to the random-walker problem, a classic example in probability theory that involves tracing the path of an object that randomly picks a direction each time it takes a step. In the present case, however, the random motion involved the shape of a surface: the edge of the drop where new particles are added to the system. Borodin was curious about these growth processes in drying drops, especially whether particle shape had any effect.
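The random-walker analogy is simple enough to capture in a few lines of code. Here is a minimal, purely illustrative sketch in Python (not anything from the studies described here) of a one-dimensional walker that picks a direction at random on every step:

```python
import random

def random_walk(n_steps, seed=None):
    """Trace a 1-D random walker: each step is +1 or -1 with equal probability."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))
        path.append(position)
    return path

# A classic result: the typical displacement after n steps scales like sqrt(n).
print(random_walk(10, seed=1))
```

In the drying drop, the randomly evolving quantity is not a walker's position but the height of the growing particle front at each point along the drop's edge.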
"Interfacial growth processes are ubiquitous in nature and industry, ranging from vapor deposition coatings to growing bacterial colonies, but not all growth processes are the same," Yunker said. "Theorists have identified several qualitatively distinct classes of these processes, but these predictions have proven difficult to test experimentally."
The two classes of particular interest are "Poisson" and "Kardar-Parisi-Zhang" processes. Poisson processes arise when growth is random in space and time; in the context of an interfacial growth process, the growth of one individual region is independent of neighboring regions. Kardar-Parisi-Zhang, or KPZ, processes are more complicated, arising when growth of an individual region depends on neighboring regions.
A purely mathematical simulation of an interfacial growth process might look like a game of Tetris but with single square blocks. These blocks fall at random into a series of adjacent columns, forming stacks.
In a Poisson process, since individual regions are independent, a tall stack is just as likely to be next to a short stack as another tall stack. Taking the top layers of the stacks as the "surface" of the system, Poisson processes produce a very rough surface, with large changes in surface height from one column to the next.
In contrast, KPZ processes arise when the blocks are "sticky." When these blocks fall into a column, they don't always fall all the way to the bottom but can stick to adjacent columns at their highest point. This means that short columns quickly catch up to their tall neighbors, and the resulting growth surfaces are smoother. There will be fewer abrupt changes in height from one column to the next.
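Both stacking rules are easy to simulate. The sketch below is an illustrative toy model, not the Penn team's analysis code: "random deposition," in which each block lands independently on its own column, stands in for the Poisson case, while "ballistic deposition," a textbook sticky-block model in the KPZ class, lets a falling block catch on the top of a neighboring column. Comparing the surface width, the standard deviation of the column heights, shows the random rule producing the rougher front:

```python
import random
import statistics

def deposit(n_columns, n_blocks, sticky, seed=0):
    """Drop blocks into columns and return the final column heights."""
    rng = random.Random(seed)
    h = [0] * n_columns
    for _ in range(n_blocks):
        i = rng.randrange(n_columns)
        if sticky:
            # Ballistic deposition (KPZ class): the block sticks at the highest
            # point among its own column and its two neighbors (periodic edges).
            h[i] = max(h[i - 1], h[i] + 1, h[(i + 1) % n_columns])
        else:
            # Random (Poisson-like) deposition: columns grow independently.
            h[i] += 1
    return h

def width(heights):
    """Surface roughness: the standard deviation of the column heights."""
    return statistics.pstdev(heights)

for sticky in (False, True):
    heights = deposit(n_columns=200, n_blocks=100_000, sticky=sticky)
    print("sticky" if sticky else "random", "width:", round(width(heights), 1))
```

Running it shows the independent columns wandering far apart in height while the sticky columns keep pace with their neighbors, just as described above.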
"Many theoretical simulations have demonstrated KPZ processes, a fact which might lead one to think this process should be ubiquitous in nature," Yunker said. "However, few experiments have identified signatures of KPZ processes."
"The relative paucity of identified KPZ processes in experiments is likely due to two main factors," Yodh said. "First, a clean experiment is required; the presence of impurities or particle aggregation can destroy signatures of growth processes. Second, a substantial amount of data must be collected to make comparisons to theoretical predictions.
"Thus, experiments must be very precise and must characterize a wide range of size scales from the particle diameter to the growth fronts. Moreover, they must be repeated many times under exactly the same conditions to accumulate statistically meaningful amounts of homogeneous data."
As in the previous research, the Penn team's experiment involved drying drops of water with differently shaped plastic particles under a microscope. The team measured the growth fronts of particles at the drying edge, especially their height fluctuations -- the edge's roughness -- over time. With spherical particles, they found that deposition at the drop's edges exhibited a classic Poisson growth process. As they moved to increasingly elongated particles, however, the deposition pattern changed.
Slightly elliptical particles -- spheres stretched by 20 percent -- produced the elusive KPZ class of growth. Stretching the spheres further, to 250 percent out of round, produced a third growth process known as KPZQ, or Kardar-Parisi-Zhang with Quenched Disorder. It is also called the "colloidal Matthew effect" because the surface's growth is proportional to the local particle density: particle-rich regions get richer, while particle-poor regions stay poor.
In practical terms, the experiment showed that when spheres and highly stretched particles are deposited, surface roughness grows at a high rate. However, when slightly stretched particles are deposited, surface roughness grows at a relatively slow rate.
The ability to control surface roughness can be important for industrial and commercial applications, as non-uniformity in films and coatings can lead to structural weakness or poor aesthetics. Surface roughness is controlled passively in the team's experiments, making this approach a potentially attractive alternative to the more costly or complicated smoothing processes currently in use.
Read more at Science Daily
Jan 19, 2013
Removing Doubt Over Croc Snout Clout
Researchers have shown how the shape of a crocodile's snout could determine its ability to feast on certain types of prey, from large mammals to small fish.
Led by Dr Colin McHenry and PhD student Chris Walmsley, from Monash University's School of Biomedical Sciences, a team of researchers compared the jaw strength of different types of crocodiles when feeding on large prey. Using computer technology, they subjected the jaws to the sorts of biting, shaking, and twisting loads that crocodiles use to feed on large prey. The team generated 3D images showing the strain measured on the jaws of seven diverse species of crocodile.
They found the lower jaws of short-snouted crocodiles performed well under the loads applied to mimic the feeding behaviour on large prey, but those with elongated jaws were more likely to break under the same loads, showing their limited ability to feed on large prey.
Detailed January 17 in PLoS One, the findings contribute to the understanding of how the shape of the crocodile's skull correlates with strength. It is the first study of its kind to investigate the mechanics that underlie the link between the shape of the lower jaw and diet.
"The notion that long, narrow snouted crocodiles feed primarily on fish or small prey is well established, but the biomechanics of the crocodiles' lower jaw, the mandible, have not been previously explored," Mr Walmsley said.
"To test the jaw biomechanics of large crocodiles we used a computational engineering approach, called Finite Element Analysis, that is widely used to design planes, cars, boats, buildings, bridges and many other structures.
"We found that mandible shape correlated consistently with jaw biomechanics. This means that the lower jaws of long-snouted species were not as strong and more likely to break during feeding on large prey. It's therefore no surprise that they tend to concentrate on small, agile, aquatic prey whilst shorter and more robust-snouted animals are capable of taking much larger prey."
Dr McHenry said the findings were relevant to a broad range of aquatic predators including dolphins and fossil marine reptiles.
"Interestingly the amount of strain a jaw was under was directly proportional to the length of the symphysis, which is the joint between two halves of the lower jaw. This means the animal's biomechanical response to force could be accurately predicted by knowing the length of its chin.
"Killer whales, alligators and salt-water crocodiles can all feed on large prey. In all of these species the symphysis is a small proportion of the length of the jaw. Whereas, fish-eating crocodiles and dolphins have long, narrow chins."
Dr McHenry said further research was needed to explain why crocodiles that feed on small prey had elongated snouts.
"We suspect the answer lies in the hydrodynamic efficiency of the elongate jaws, and we plan to explore this further using other computational engineering techniques," Dr McHenry said.
Read more at Science Daily
Jan 18, 2013
Studying Ancient Earth's Geochemistry
Researchers still have much to learn about the volcanism that shaped our planet's early history. New evidence from a team led by Carnegie's Frances Jenner demonstrates that some of the tectonic processes driving volcanic activity, such as those taking place today, were occurring as early as 3.8 billion years ago. Their work is published in Geology.
Upwelling and melting of Earth's mantle at mid-ocean ridges, as well as the eruption of new magmas on the seafloor, drive the continual production of the oceanic crust. As the oceanic crust moves away from the mid-ocean ridges and cools, it becomes denser than the underlying mantle. Over time, the majority of this oceanic crust sinks back into the mantle, which can trigger further volcanic eruptions. This process is known as subduction, and it takes place at plate boundaries.
Volcanic eruptions that are triggered by subduction of oceanic crust are chemically distinct from those erupting at mid-ocean ridges and oceanic island chains, such as Hawaii. The differences between the chemistry of magmas produced at each of these tectonic settings provide 'geochemical fingerprints' that can be used to try to identify the types of tectonic activity taking place early in Earth's history.
Previous geochemical studies have used similarities between modern subduction zone magmas and those erupted about 3.8 billion years ago, during the Eoarchean era, to argue that subduction-style tectonic activity was taking place early in Earth's history. But no one was able to locate any suites of volcanic rocks with compositions comparable to modern mid-ocean ridge or oceanic island magmas that were older than 3 billion years and were also free from contamination by continental crust.
Because of this missing piece of the puzzle, it has been ambiguous whether the subduction-like compositions of volcanic rocks erupted 3.8 billion years ago really were generated at subduction zones, or whether this magmatism should be attributed to other processes taking place early in Earth's history. Consequently, evidence for subduction-related tectonics earlier than 3 billion years ago has been highly debated in scientific literature.
Jenner and her team collected 3.8 billion-year-old volcanic rocks from Innersuartuut, an island in southwest Greenland, and found the samples have compositions comparable to modern oceanic islands, such as Hawaii.
Read more at Science Daily
Ancient 'Killer Walrus' Not So Deadly After All
A "killer walrus" thought to have terrorized the North Pacific 15 million years ago may not have been such a savvy slayer after all, researchers say.
A new analysis of fossil evidence of the prehistoric beast shows it was more of a fish-eater than an apex predator with a bone-crushing bite.
Traces of the middle Miocene walrus, named Pelagiarctos thomasi, were first found in the 1980s in the Sharktooth Hill bone bed of California. A chunk of a robust jawbone and sharp pointed teeth, which resembled those of the bone-cracking hyena, led researchers to believe the walrus ripped apart birds and other marine mammals in addition to the fish that modern walruses eat today.
But a more complete lower jaw and teeth from the long-gone species were recently discovered in the Topanga Canyon Formation near Los Angeles. Researchers say the shape of the teeth from this new specimen suggests the walrus was unlikely to have been adapted to feed regularly on large prey. Instead, they think it was a generalist predator, feasting on fish, invertebrates and the occasional warm-blooded snack.
"When we examined the new specimen and the original fossils, we found that the teeth really weren't that sharp at all — in fact, the teeth looked like scaled-up versions of the teeth of a much smaller sea lion," researcher Robert Boessenecker, a geology doctoral student at the University of Otago in New Zealand, told the PLOS ONE Community Blog.
Using a model to estimate body size based on the size of the jaw, Boessenecker and Morgan Churchill of the University of Wyoming found that Pelagiarctos was quite large — about 770 pounds (350 kilograms), or similar in size to some modern male sea lions. But they noted that a big body alone likely wouldn't indicate that the species was a dominant predator. That's because both large and small modern species in the pinniped family — which includes seals, sea lions and walruses — are dietary generalists that tend to eat mostly fish.
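The paper's actual size model isn't reproduced here, but body-mass estimates of this kind are conventionally made by fitting an allometric (log-log) regression of mass against a skeletal measurement in living relatives and extrapolating to the fossil. A sketch of the idea, with invented reference values rather than the calibration data Boessenecker and Churchill used:

```python
import math

# Hypothetical (jaw length cm, body mass kg) pairs for living pinnipeds.
# Illustrative values only, not the study's calibration data.
reference = [(18, 90), (24, 200), (30, 400), (36, 700)]

# Fit log(mass) = a + b * log(jaw_length) by ordinary least squares.
xs = [math.log(jaw) for jaw, _ in reference]
ys = [math.log(mass) for _, mass in reference]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
b = num / den
a = mean_y - b * mean_x

def estimated_mass(jaw_cm):
    """Allometric estimate: mass = exp(a) * jaw_length**b."""
    return math.exp(a + b * math.log(jaw_cm))

print(f"estimated mass for a 34 cm jaw: {estimated_mass(34):.0f} kg")
```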
Boessenecker added that the new findings give a clearer picture of the modern walrus' evolutionary past.
Read more at Discovery News
The 5 Most Outrageous Hoaxes
Charismatic Notre Dame linebacker Manti Te'o may have just sealed his place in the history books, not for his impressive victories on the field, but for being involved in (knowingly or not) a widespread hoax involving a dying girlfriend who, it turns out, never existed.
In interviews last year, Te'o spoke about the personal obstacles and tragedies he'd overcome on his way to football excellence — most notably the deaths of his beloved grandmother and his girlfriend and love of his life, Lennay Kekua, within the same day. Te'o talked about the pain of losing both so close to him, as well as Kekua's emotional struggle — and finally losing battle — with leukemia. Though Te'o never met Kekua, the pair communicated mainly through e-mails and text messages.
Information is contradictory and details remain murky: Was Te'o in on the hoax, capitalizing on a crowd-pleasing sympathetic rags-to-riches story? Or was he himself the victim of a cruel hoax by someone sharing her (or his) fictional but emotionally moving life story? Or does the truth lie somewhere in between?
There are of course many different types of hoaxes. For example, author James Frey wrote a 2003 novel about drug addiction recovery, claiming it was a memoir; homeowners in Amityville, N.Y., created a hoax in 1977 by claiming that their house was haunted by demons; and in 1996 physicist Alan Sokal submitted a gibberish article that was accepted and published in "Social Text," a respected cultural studies journal.
A hoax that costs money, embarrassment, or inconvenience may be merely a nuisance. But some of the most damaging and outrageous hoaxes are those that manipulate people's emotions and outrage the world. Here are a few of the most outlandish.
Flight of the Balloon Boy
In 2009 a 6-year-old boy named Falcon Heene was said to be in grave danger as he floated through Colorado skies in a silvery weather balloon created by his inventor father. His family claimed that he had climbed aboard the homemade balloon and launched, triggering a nationwide police search and rescue mission. It turned out that Heene, who became known as balloon boy, was in fact safe at home, and the family was suspected of staging the event in hopes of getting a reality TV show.
The Protocols of the Elders of Zion
Perhaps the most malicious religious hoax in history, "The Protocols of the Learned Elders of Zion" is a book supposedly revealing a secret Jewish conspiracy to take over the world. It first appeared in Russia in 1905, and though the book has been completely discredited as a forgery, it is still in print and remains widely circulated. Many people have endorsed this religious hoax, including actor Mel Gibson, Adolf Hitler, and automaker Henry Ford, who in 1920 paid to have a half-million copies of the book published.
The Tawana Brawley Attack
In 1987 America was riveted by the tragic news story of a young black girl named Tawana Brawley, who said she had been gang-raped by six white men, including several police officers. Rev. Al Sharpton and others fanned racial tensions and accused police of a cover-up. The following year, after an extensive investigation (and revelations about contradictions in Brawley's story), a grand jury concluded that the girl had hoaxed the incident. A New York prosecutor successfully sued both Brawley and Sharpton for defamation in a case whose racial legacy remains today.
The Innocence of Muslims
The trailer for the 2012 film "Innocence of Muslims" led to riots over its depiction of the prophet Muhammad as a womanizer, child molester and criminal. Several Americans were killed in protests linked to the film. To date it's not clear that the finished film actually exists, though a trailer for it does (it appeared on YouTube, sparking the riots). The producer hoaxed the actors and crew, later dubbing inflammatory lines that insulted Islam over their real dialogue. Whether or not the infamous anti-Muslim film exists, many around the world were led to believe it did, and people died because of it.
Read more at Discovery News
A Supercomputer Fit to Dominate the Cosmos
It may not be self-aware (yet), but this computing monster is ready to take over the world. Well, at least a telescope in Chile.
Say hello to the correlator for the Atacama Large Millimeter/Submillimeter Array, or ALMA. The correlator is the computer that runs at the back end of an array of radio telescopes called an interferometer. Very basically, it combines the signals from all of the antennas so that the array can function as one single telescope.
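In software terms, the correlator's core job is to multiply and average the signal streams from every pair of antennas. The toy sketch below (illustrative only, and nothing like ALMA's purpose-built hardware) simulates a common sky signal buried in independent receiver noise at a handful of antennas, then correlates each pair; the shared signal survives the averaging while the noise averages away:

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
n_antennas, n_samples = 4, 4096

# A common "sky" signal hidden in independent receiver noise at each antenna.
sky = rng.normal(size=n_samples)
antennas = [sky + 2.0 * rng.normal(size=n_samples) for _ in range(n_antennas)]

# Correlate every antenna pair (each pair is one interferometer baseline).
# The average product picks out the shared sky signal (~1 here); the
# independent noise terms average toward zero.
for i, j in itertools.combinations(range(n_antennas), 2):
    r = np.dot(antennas[i], antennas[j]) / n_samples
    print(f"pair ({i},{j}): correlation {r:+.2f}")

# ALMA's 66 dishes give 66 * 65 // 2 = 2145 such pairs.
print("pairs for 66 antennas:", 66 * 65 // 2)
```

A real correlator also has to compensate for the geometric delay to each antenna and compute the correlation over many time lags to recover spectra; this zero-lag version only shows the principle.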
The correlator was largely constructed in the building right across from where I did a lot of my graduate work at the National Radio Astronomy Observatory in Charlottesville, Virginia. I used to take any excuse, usually a visiting tour group, to gaze at the supercomputing monstrosity, although they were only working on it in sections. The picture above gives you a better sense of its full size.
Though large (134 million processors) and fast (17 quadrillion operations per second), this computer has just one purpose: to suck in all of the data from ALMA’s 66 dishes and transform it into data that can then be sent to the astronomers to calibrate and analyze. The correlator gives the interferometer its power to see incredibly fine detail and small structures, such as protoplanetary disks and distant star-forming galaxies.
The correlator came online in December as yet another step towards completing ALMA, a telescope that will give astronomers an unprecedented look at the sky in millimeter wavelengths. First science results have already been coming in from a partial array and correlator, giving scientists a tantalizing glimpse at what the full power of the array will hold.
Below is a picture of the back of a tiny section of this giant correlator. With 66 antennas, there are over 2000 combinations of antenna pairs, which leads to thousands and thousands of physical connections that must be made between circuit boards by hand. I think it is fair to say that wires were crossed more than once, and a consistent labeling scheme was necessary.
As impressive as this correlator is, it may be the last of its kind in an era where correlation is also being done by different methods, such as software correlators. The current correlator in use at the Very Long Baseline Array (VLBA) looks like, and actually is, a small cluster of commercially available computer towers. Not so long ago, the VLBA correlator was fed by huge tape machines which I also used to watch whir and spin while taking a break from research in Socorro, New Mexico.
Read more at Discovery News
Labels:
Computers,
Human,
Science,
Space,
Technology
Jan 17, 2013
New Key to Organism Complexity Identified
The enormously diverse complexity seen amongst individual species within the animal kingdom evolved from a surprisingly small gene pool. For example, mice effectively serve as medical research models because humans and mice share 80 percent of the same protein-coding genes. The key to morphological and behavioral complexity, a growing body of scientific evidence suggests, is the regulation of gene expression by a family of DNA-binding proteins called "transcription factors." Now, a team of researchers with the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley has discovered the secret behind how one of these critical transcription factors is able to perform -- a split personality.
Using a technique called single-particle cryo-electron microscopy, the team, which was led by biophysicist Eva Nogales, showed that the transcription factor known as "TFIID" can co-exist in two distinct structural states. These two states -- the canonical and the rearranged -- differ only in the translocation of a single substructural element -- known as lobe A -- by 100 angstroms (an atom of hydrogen is about one angstrom in diameter). This structural shift enables initiation of the transcription process by which the genetic message of DNA is copied to RNA for the eventual production of proteins.
"TFIID by itself fluctuates between the canonical and rearranged states," Nogales says. "When TFIID becomes bound to another transcription factor, TFIIA, it shifts mostly to the canonical state, but in the presence of both TFIIA and DNA, the TFIID shifts to the rearranged state, which enables recognition and binding to key DNA sequences and marks the start of the transcriptional process."
Understanding the reorganization of TFIID and its role in transcription provides new insight into the regulation of gene expression, Nogales says, a process critical to the growth, development, health and survival of all organisms.
Nogales is a leading authority on electron microscopy and holds joint appointments with Berkeley Lab, the University of California (UC) at Berkeley, and the Howard Hughes Medical Institute (HHMI). She is the corresponding author of a paper describing this research in the journal Cell, titled "Human TFIID Binds to Core Promoter DNA in a Reorganized Structural State." Co-authors are Michael Cianfrocco, George Kassavetis, Patricia Grob, Jie Fang, Tamar Juven-Gershon and James Kadonaga.
The growing number of organisms whose genomes have been sequenced and made available for comparative analyses shows that the total number of genes in an organism's genome is no measure of its complexity. The fruit fly, Drosophila, for example, is far more complex than the nematode worm, Caenorhabditis elegans, but has about 6,000 fewer genes than the worm's 20,000. The total number of human genes is estimated to fall somewhere between 30,000 and 40,000. By comparison, the expression of the genes of both the fruit fly and the nematode is regulated through about 1,000 transcription factors, whereas the human genome boasts approximately 3,000 transcription factors. The fact that multiple transcription factors often act in various combinations with one another creates even more evolutionary roads to organism complexity.
"Although the number of protein coding genes has remained fairly constant throughout metazoan evolution, the number of regulatory DNA elements has increased dramatically," Nogales says. "Our discovery of the existence of two structurally and functionally distinct forms of TFIID suggests a potential molecular mechanism by which a combination of transcription factors can tune the expression level of genes and thereby give rise to a diversity of outcomes."
Despite its critical role in transcription, high-resolution structural information on TFIID has been restricted to crystal structures of a handful of protein subunits and domains. Nogales and her colleagues are the first group to obtain three-dimensional visualization of human TFIID bound to DNA. The single-particle cryo-electron microscopy technique they employed records a series of two-dimensional images of individual molecules or macromolecular complexes frozen in random orientations, then computationally combines these images into high-resolution 3D reconstructions.
"Through cryo-EM and extensive image-sorting, we found that TFIID exhibits a surprising degree of flexibility, moving its lobe A, a region that covers approximately one-third of the complex, by 100 angstroms across its central channel," says Cianfrocco, lead author of the Cell paper. "This movement of the lobe A is absolutely essential for TFIID to bind to DNA."
Nogales says that while many macromolecular complexes are known to be flexible, this typically involves the limited movement of a small region within the complex, or some tiny motion of the entire complex. The movement of TFIID's lobe A represents an entire restructuring that dramatically alters what the molecule can do. In the canonical state, TFIID's lobe A is bound to its lobe C, which appears to be the preferred form of free TFIID. In the rearranged state, TFIID's lobe A is bound to its lobe B, which is the state in which it can then strongly bind to DNA promoters.
"The TFIIA molecule serves as the mediator for this transition, maintaining TFIID in the canonical state in the absence of DNA and initiating the formation of the rearranged state in the presence of promoter DNA," Cianfrocco says. "Without the presence of TFIIA, the binding of TFIID to DNA is very weak."
Nogales and her colleagues are now studying how TFIID, once it is bound to DNA, recruits the rest of the machinery required to transcribe the genetic message into RNA.
"Our new work will involve constructing a macromolecular complex that is well over two million Daltons in size, which is about the size of a bacterial ribosome," Nogales says. "The size and relative instability of our complex will represent a major experimental challenge."
Read more at Science Daily
Why Wolves Are Forever Wild, but Dogs Can Be Tamed
Dogs and wolves are genetically so similar, it's been difficult for biologists to understand why wolves remain fiercely wild, while dogs can gladly become "man's best friend." Now, doctoral research by evolutionary biologist Kathryn Lord at the University of Massachusetts Amherst suggests the different behaviors are related to the animals' earliest sensory experiences and the critical period of socialization. Details appear in the current issue of Ethology.
Until now, little was known about sensory development in wolf pups, and assumptions were usually extrapolated from what is known for dogs, Lord explains. This would be reasonable, except scientists already know there are significant differences in early development between wolf and dog pups, chief among them timing of the ability to walk, she adds.
To address this knowledge gap, she studied the responses of seven wolf pups and 43 dogs to both familiar and new smells, sounds and visual stimuli, testing them weekly, and found that the two subspecies develop their senses on the same schedule. But her study also revealed new information about how the two subspecies of Canis lupus experience their environment during a four-week developmental window called the critical period of socialization, and the new facts may significantly change understanding of wolf and dog development.
When the socialization window is open, wolf and dog pups begin walking and exploring without fear and will retain familiarity throughout their lives with those things they contact. Domestic dogs can be introduced to humans, horses and even cats at this stage and be comfortable with them forever. But as the period progresses, fear increases and after the window closes, new sights, sounds and smells will elicit a fear response.
Through observations, Lord confirmed that both wolf pups and dogs develop the sense of smell at age two weeks, hearing at four weeks and vision by age six weeks on average. However, these two subspecies enter the critical period of socialization at different ages. Dogs begin the period at four weeks, while wolves begin at two weeks. Therefore, how each subspecies experiences the world during that all-important month is extremely different, and likely leads to different developmental paths, she says.
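Laying the reported onset ages out explicitly makes the mismatch obvious. A small illustrative sketch using only the figures above:

```python
# Average onset ages in weeks, as reported in the study summary above.
SENSE_ONSET = {"smell": 2, "hearing": 4, "vision": 6}
SOCIALIZATION_START = {"wolf": 2, "dog": 4}

for subspecies, window_opens in SOCIALIZATION_START.items():
    working = [s for s, onset in SENSE_ONSET.items() if onset <= window_opens]
    print(f"{subspecies}: socialization window opens at week {window_opens}; "
          f"senses already working: {', '.join(working)}")
# wolf: socialization window opens at week 2; senses already working: smell
# dog: socialization window opens at week 4; senses already working: smell, hearing
```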
Lord reports for the first time that wolf pups are still blind and deaf when they begin to walk and explore their environment at age two weeks. "No one knew this about wolves, that when they begin exploring they're blind and deaf and rely primarily on smell at this stage, so this is very exciting," she notes.
She adds, "When wolf pups first start to hear, they are frightened of the new sounds initially, and when they first start to see they are also initially afraid of new visual stimuli. As each sense engages, wolf pups experience a new round of sensory shocks that dog puppies do not."
Meanwhile, dog pups only begin to explore and walk after all three senses, smell, hearing and sight, are functioning. Overall, "It's quite startling how different dogs and wolves are from each other at that early age, given how close they are genetically. A litter of dog puppies at two weeks are just basically little puddles, unable to get up or walk around. But wolf pups are exploring actively, walking strongly with good coordination and starting to be able to climb up little steps and hills."
These significant, development-related differences in dog and wolf pups' experiences put them on distinctly different trajectories in relation to the ability to form interspecies social attachments, notably with humans, Lord says. This new information has implications for managing wild and captive wolf populations, she says.
Her experiments analyzed the behavior of three groups of young animals: 11 wolves from three litters and 43 dogs total. Of the dogs, 33 border collies and German shepherds were raised by their mothers and a control group of 10 German shepherd pups were hand-raised, meaning a human was introduced soon after birth.
Read more at Science Daily
Famed Warrior Medici Died From Gangrene
The legendary Renaissance warrior Giovanni de’ Medici did not die from an improperly amputated leg, as widely believed, but from an infection.
Also known as “Giovanni dalle Bande Nere” for the black bands of mourning he wore after the death of Pope Leo X, the 16th century army commander was exhumed last November from his tomb in the Medici Chapels in Florence. Researchers also exhumed the bones of his wife, Maria Salviati.
The couple married in 1516, when she was 17 and he was 18. The marriage produced only one child: Cosimo I, who reigned as the first Grand Duke of Tuscany, creating the Uffizi and the magnificent Boboli Gardens as well as finishing the Pitti Palace.
Led by Gino Fornaciari, professor of forensic anthropology and director of the Pathology Museum at the University of Pisa, the exhumation aimed to establish whether the surgery carried out on the celebrated condottiero (mercenary soldier) was improperly performed.
Although he had acquired a reputation for invincibility, Giovanni of the Black Bands (1498-1526) died at only 28 after being hit by a cannon ball in a battle in Lombardy on Nov. 25, 1526. He was fighting the Imperialist troops marching to the sack of Rome.
After the ball crushed his right leg above the knee, the condottiero was taken to the palace of marquis Luigi Alessandro Gonzaga in Mantua. Gangrene soon set in, and Gonzaga’s surgeon, Maestro Abram, decided to intervene by amputating the leg.
“It was believed that the amputation was not carried out above the wound, but slightly above the ankle. This would have meant a death sentence for Giovanni,” Fornaciari told Discovery News.
The surgeon was unfairly accused, Fornaciari explained. Giovanni’s tibia and fibula were sawed off, but the researchers found no signs of lesions above the amputation. Nor did they notice any damage to the knee or the femur (thigh bone).
“The leg was already partially amputated by the cannon ball, so the surgeon simply completed the amputation by cleaning the wound and smoothing the stump,” Fornaciari said.
According to a report by the poet Pietro Aretino, Giovanni’s close friend and eyewitness to the surgery, 10 men were summoned to hold down the warrior during the procedure.
“‘Not even 20,’ Giovanni said smiling, ‘could hold me,’ and he took a candle in his hand, so that he could make light onto himself,” Aretino wrote.
Despite his stoic behaviour during the agonizing procedure, Giovanni died five days later, on Nov. 30, 1526.
“Maestro Abram did all he could, but the gangrene infection was at too advanced a stage,” Fornaciari said.
Anthropological investigations also established that the warrior was about 5 feet, 8 inches tall and very sturdy.
“We found many vertebral hernias, a consequence of wearing heavy armors,” Fornaciari said.
The researchers also discovered that Maria Salviati, Giovanni’s wife, suffered from serious periodontal disease, an abscess and 10 cavities.
Read more at Discovery News
Deforestation Robs Amazon Soil of Life
Standing where the Amazon rainforest became pasture, it is easy to recognize the absence of trees. What you might not notice is the drop in biodiversity under your feet, where a crucial part of every ecosystem lives.
“We knew that if you cut the trees, the animals will disappear, but we didn’t know what happened to the microbes,” said Jorge Rodrigues, assistant professor of biology at the University of Texas at Arlington and leader of the research team. “We showed for the first time, in the case of the Amazon, you would see losses of microbe species when moving land use from the forest to the pasture.” Rodrigues published that finding recently in the journal Proceedings of the National Academy of Sciences (PNAS).
One reason to be concerned about the health of microorganism populations is that they help the entire ecosystem function.
“The microbes are the ones that keep everything cycling,” said soil scientist Teri Balser of the University of Florida, who was not involved with the research project.
These tiny organisms are crucial for tasks like decomposing matter into plant food, particularly in tropical soils like those in the Amazon rainforest.
Tropical soils are not as fertile as temperate ones and have less organic matter mixed in with their mineral components. Since there aren’t a lot of extra nutrients in tropical soil, Balser said, plants are more dependent on microorganisms to recycle undecomposed material back into a plant-ready form.
However, Balser said even though decreased microbial diversity may have consequences for the land, the actual significance of homogenizing microorganism populations is not well understood.
“We really don’t know how much it matters when you lose microbial diversity,” she said. It is a lingering question for soil microbiologists.
“We’re working on such a different scale than plants and animals that it’s really hard to know how important a change in microbes is,” she said. “A cubic meter of ground for a microbe is the equivalent of the whole planet for a plant.” That difference in scale makes interpreting changes in microbial diversity challenging.
Even without knowing exactly how an ecosystem functions with a homogenized microorganism population, Rodrigues said another reason to be concerned about losing diversity is the loss of material with possible value for medicine.
“Lots of antibiotics are made by bacteria,” Rodrigues said. “We might be losing a lot of potential antibiotics, or biotechnological advances later on.”
A diversity of bacteria and other microorganisms helps land respond to stress, according to Rodrigues. Decreasing the types of microscopic life in Amazonian soil may decrease the land’s ability to adapt to change, he said.
Read more at Discovery News
Jan 16, 2013
Choice of Partner Affects Health
Individuals tend to choose partners of equal socio-economic status. This factor may also be significant in terms of health.
"Married and common-law couples often share similar attitudes, behaviour and levels of education. Our study revealed that this tendency can exacerbate social inequality related to health," explains doctoral fellow, Sara Marie Nilsen of the Norwegian University of Science and Technology (NTNU) in Trondheim.
Sara Marie Nilsen has examined the significance of differences in education and how this relates to the individual's subjective perception of his or her own health in general, and of anxiety and depression in particular, among the nearly 19,000 Norwegian couples who participated in the Nord-Trøndelag health study (HUNT). HUNT is a very large longitudinal population health study carried out in Norway. Ms Nilsen's doctoral research project is funded under the Research Programme on Public Health (FOLKEHELSE) at the Research Council of Norway.
Lower education, poorer health
"Our findings show that spouses and partners often have a similar perception of their health status. Sharing the same level of education may be a factor behind this correlation. The highly educated are often healthier than people with lower levels of education," states Ms Nilsen.
However, her thesis also indicates that the individual's health is directly affected by the education of the partner. For example, individuals with a lower level of education will feel healthier if they live together with someone with a higher education.
"It is also turns out that partners with different levels of education share a fairly similar perception of their health," Sara Marie Nilsen explains.
Social inequality as a health factor
Social inequality is one of the FOLKEHELSE programme's four thematic priority areas. Research activities take as their point of departure the fact that the higher our social status, the better our health, and vice versa. Factors such as income, occupation and education play a pivotal role in whether a person will develop cardiovascular disease, cancer, chronic illness or the like.
Ms Nilsen has chosen education as a measure of socio-economic standing since educational background forms the basis for an individual's work life and subsequent level of income, and is also the key to social status.
Her findings indicate that the higher a couple's combined status, the healthier each of them will be. The opposite also holds true: the lower their combined status, the poorer their individual health.
Research on couples provides new insight
Two out of three Norwegians live together in relationships. It can be difficult to explain social inequality in health, especially for women, without taking into consideration the impact partners have on each other. Social position can also affect children's health.
Previously, health research has primarily focused on individual risk. Examining partners and spouses as a unit is a relatively new approach in this context.
"Health researchers would benefit from focusing more attention on the social contexts we live in, as couples, families and households. This is precisely where the majority of us spend most of our time," Ms Nilsen points out.
"Social inequalities in health have been, and remain, a sensitive topic. Researchers are apprehensive about amplifying the feeling of failure among those who are struggling to begin with," says Steinar Westin, professor of social medicine at NTNU.
Read more at Science Daily
"Married and common-law couples often share similar attitudes, behaviour and levels of education. Our study revealed that this tendency can exacerbate social inequality related to health," explains doctoral fellow, Sara Marie Nilsen of the Norwegian University of Science and Technology (NTNU) in Trondheim.
Sara Marie Nilsen has examined the significance of differences in education and how this relates to the individual's subjective perception of his or her own health in general, and of anxiety and depression in particular, among the nearly 19,000 Norwegian couples who participated in the Nord-Trøndelag health study (HUNT). HUNT is a very large longitudinal population health study carried out in Norway. Ms Nilsen's doctoral research project is funded under the Research Programme on Public Health (FOLKEHELSE) at the Research Council of Norway.
Lower education, poorer health
"Our findings show that spouses and partners often have a similar perception of their health status. Sharing the same level of education may be a factor behind this correlation. The highly educated are often healthier than people with lower levels of education," states Ms Nilsen.
However, her thesis also indicates that the individual's health is directly affected by the education of the partner. For example, individuals with a lower level of education will feel healthier if they live together with someone with a higher education.
"It is also turns out that partners with different levels of education share a fairly similar perception of their health," Sara Marie Nilsen explains.
Social inequality as a health factor
Social inequality is one of the FOLKEHELSE programme's four thematic priority areas. Research activities take as their point of departure the fact that the higher our social status, the better our health, and vice-versa. Factors such as income, occupation and education play a pivotal role in whether a person will develop cardiovascular disease, cancer, chronic illness or the like.
Ms Nilsen has chosen education as a measure of socio-economic standing since educational background forms the basis for an individual's work life and subsequent level of income, and is also the key to social status.
Her findings indicate that the higher a couple's combined status, the healthier each of them will be. The opposite also holds true; the lower their combined status, the poorer their individual health.
Research on couples provides new insight
Two out of three Norwegians live together in relationships. It can be difficult to explain social inequality in health, especially for women, without taking into consideration the impact partners have on each other. Social position can also affect children's health.
Previously, health research has primarily focused on individual risk. Examining partners and spouses as a unit is a relatively new approach in this context.
"Health researchers would benefit from focusing more attention on the social contexts we live in, as couples, families and households. This is precisely where the majority of us spend most of our time," Ms Nilsen points out.
"Social inequalities in health have been, and remain, a sensitive topic. Researchers are apprehensive about amplifying the feeling of failure among those who are struggling to begin with," says Steinar Westin, professor of social medicine at NTNU.
Read more at Science Daily
Mathematical Breakthrough Sets out Rules for More Effective Teleportation
For the last ten years, theoretical physicists have shown that the intense connections between particles established by the quantum law of 'entanglement' may hold the key to eventual teleportation of quantum information.
Now, for the first time, researchers have worked out how entanglement could be 'recycled' to increase the efficiency of these connections. Published in the journal Physical Review Letters, the result could conceivably take us a step closer to sci-fi style teleportation in the future, although this research is purely theoretical in nature.
The team have also devised a generalised form of teleportation, which allows for a wide variety of potential applications in quantum physics.
Once considered impossible, teleportation was shown to work in principle in 1993, when a team of scientists calculated that it could be achieved using quantum laws. Quantum teleportation harnesses the 'entanglement' law to transmit particle-sized bits of information across potentially vast distances in an instant.
Entanglement involves a pair of quantum particles, such as electrons or protons, that are intrinsically bound together, retaining a synchronisation that holds whether the particles are next to each other or on opposite sides of a galaxy. Through this connection, quantum bits of information -- qubits -- can be relayed using only traditional forms of classical communication.
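To make those ingredients concrete, here is a minimal numpy sketch of the textbook single-qubit protocol from 1993 (not the recycled, port-based scheme the new paper develops mathematically); the register layout and helper functions are our own illustrative choices, not anything from the paper.

import numpy as np

# Sketch of the textbook (1993) single-qubit teleportation protocol, simulated
# with explicit state vectors. This is NOT the recycled, port-based protocol
# from the new paper; it only illustrates the pieces described above: an
# entangled pair, a measurement, and two classical bits.

rng = np.random.default_rng(7)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.diag([1, 0]).astype(complex)  # projector onto |0>
P1 = np.diag([0, 1]).astype(complex)  # projector onto |1>

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def single(gate, q):  # apply a one-qubit gate to qubit q of a 3-qubit register
    ops = [I2, I2, I2]
    ops[q] = gate
    return kron(*ops)

def cnot(c, t):  # controlled-NOT with control qubit c and target qubit t
    ops0 = [I2, I2, I2]
    ops1 = [I2, I2, I2]
    ops0[c] = P0
    ops1[c] = P1
    ops1[t] = X
    return kron(*ops0) + kron(*ops1)

# A random unknown state |psi> that Alice wants to send to Bob
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

ket0 = np.array([1, 0], dtype=complex)
state = np.kron(psi, np.kron(ket0, ket0))  # qubit 0: |psi>; qubits 1, 2: |00>
state = cnot(1, 2) @ single(H, 1) @ state  # entangle qubit 1 (Alice) with 2 (Bob)
state = single(H, 0) @ cnot(0, 1) @ state  # Alice's Bell-measurement circuit

# Measuring qubits 0 and 1 yields the two classical bits Alice sends to Bob
p = np.abs(state) ** 2
outcome_probs = [p[4*a + 2*b] + p[4*a + 2*b + 1] for a in (0, 1) for b in (0, 1)]
m0, m1 = divmod(rng.choice(4, p=outcome_probs), 2)

bob = state[[4*m0 + 2*m1, 4*m0 + 2*m1 + 1]]  # Bob's qubit after the collapse
bob /= np.linalg.norm(bob)
if m1:
    bob = X @ bob  # corrections conditioned on the
if m0:
    bob = Z @ bob  # two classical bits received
print(f"bits sent: {m0}{m1}, fidelity |<psi|bob>| = {abs(np.vdot(bob, psi)):.6f}")

Run it and the printed fidelity is 1.0 for every measurement outcome: the two classical bits by themselves reveal nothing about the state, yet they let the receiver reconstruct it exactly.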
Previous teleportation protocols have fallen into one of two camps: those that could only send scrambled information requiring correction by the receiver or, more recently, "port-based" teleportation that doesn't require a correction but needs an impractical amount of entanglement -- each object sent would destroy the entangled state.
Now, physicists from Cambridge, University College London, and the University of Gdansk have developed a protocol to provide an optimal solution in which the entangled state is 'recycled', so that the gateway between particles holds for the teleportation of multiple objects.
They have even devised a protocol in which multiple qubits can be teleported simultaneously, although in both cases the entangled state degrades in proportion to the number of qubits sent.
"The first protocol consists of sequentially teleporting states, and the second teleports them in a bulk," said Sergii Strelchuck from Cambridge's Department of Applied Mathematics and Theoretical Physics, who led the research with colleagues Jonathan Oppenheim of Cambridge and UCL and Michal Horodecki of the University of Gdansk.
"We have also found a generalised teleportation technique which we hope will find applications in areas such as quantum computation."
Einstein famously loathed the theory of quantum entanglement, dismissing it as "spooky action at a distance." But entanglement has since been proven to be a very real feature of our universe, and one that has extraordinary potential to advance all manner of scientific endeavor.
"There is a close connection between teleportation and quantum computers, which are devices which exploit quantum mechanics to perform computations which would not be feasible on a classical computer," said Strelchuck.
"Building a quantum computer is one of the great challenges of modern physics, and it is hoped that the new teleportation protocol will lead to advances in this area."
While the Cambridge physicists' protocol is completely theoretical, last year a team of Chinese scientists reported teleporting photons over 143km, breaking previous records, and quantum entanglement is increasingly seen as an important area of scientific investment. Teleportation of information carried by single atoms is feasible with current technologies, but the teleportation of large objects -- such as Captain Kirk -- remains in the realm of science fiction.
Read more at Science Daily
To Save a Cathedral, Marinate in Olive Oil
One of the most beautiful and revered cathedrals in Christendom, York Minster in northern England has survived war, looting, fire, pillaging and other threats over the centuries. But the Gothic masterpiece is crumbling due to a relatively recent enemy: acid rain. Preservationists, however, may have found a way to protect it using a common kitchen item.
The limestone rock used to build the church is vulnerable to acids, and has been under attack since the Industrial Revolution began filling the skies of England with acidic pollution, according to Gizmag.com. The result is acid rain that can wreak havoc on earthly structures.
And despite their best efforts, preservationists had found no protective coating that could keep the towering spires of York Minster safe, until they hit upon a novel treatment: olive oil.
The oil contains oleic acid, a compound that can bind with the stone and protect it, according to Dr. Karen Wilson of the Cardiff School of Chemistry at Cardiff University in Wales.
"The nice thing with oleic acid is that [a molecule of] it has one end ... which will selectively react with the stone, and then the other end, which is a very long hydrocarbon chain, will give you the hydrophobic properties to repel the water," Wilson told NPR.
By combining the olive oil with a Teflon-like material, researchers hope to protect the porous limestone from acid rain, while also allowing the stone to breathe, according to Gizmag.com. (Acid rain forms from both natural sources, such as volcanoes and decomposing plants, and man-made sources, mainly the sulfur dioxide and nitrogen oxides from fossil fuel combustion.)
A trial is now underway using a limestone sample to determine the long-term effects of the treatment.
Olive oil has a long and storied association with human civilization. Originally used to prepare and preserve food, it has also been revealed as an ingredient in ancient medicines.
Some people even use olive oil as a shaving lubricant, to remove paint from hair and skin, as furniture polish, to free stuck zippers and as an additive in cat food to prevent hairballs, according to Curbly.com.
Read more at Discovery News
Mysterious Shaman Stones Uncovered in Panama
Archaeologists have unearthed nearly 5,000-year-old shaman's stones in a rock shelter in Panama. The stone collection may be the earliest evidence of shamanic rituals in that region of Central America, researchers say.
The 12 stones were found in the Casita de Piedra rock shelter, on the Isthmus of Panama. The rocks, which carbon-dating of surrounding material showed to be between 4,000 and 4,800 years old, were clustered in a tight pile. That suggests they had been carried there, likely in a leather pouch that has long since disintegrated, study co-author Ruth Dickau, an archaeologist at the University of Exeter, said in an email.
"If our interpretation is correct, it constitutes the earliest material evidence in lower Central America of shamanistic practice," the authors wrote in the article.
The findings were published online Dec. 27 in the journal Archaeological and Anthropological Sciences.
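For a sense of the arithmetic behind a date like "between 4,000 and 4,800 years old," the sketch below applies the idealized carbon-14 decay law. The 5,730-year half-life is the standard textbook value, not a figure from the study, and real labs additionally calibrate against tree-ring records.

import math

# Idealized radiocarbon arithmetic: how much of the original carbon-14 is left
# after a given time. Illustrative only; actual dating involves calibration.

HALF_LIFE = 5730.0  # years, the standard C-14 half-life

def fraction_remaining(age_years):
    # Fraction of the original carbon-14 left after age_years
    return 0.5 ** (age_years / HALF_LIFE)

def age_from_fraction(fraction):
    # Invert the decay law: the age implied by a measured C-14 fraction
    return HALF_LIFE * math.log(1.0 / fraction, 2)

for age in (4000, 4800):
    print(f"{age} years -> {fraction_remaining(age):.1%} of the C-14 remains")
# 4000 years -> 61.6%; 4800 years -> 56.0%

In other words, the quoted age range corresponds to samples retaining roughly 56 to 62 percent of their original carbon-14.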
The pre-Columbian rock shelter was first discovered in the 1970s and was initially thought to have been used by people beginning about 6,500 years ago. In 2006 Dickau reanalyzed the shelter and found that people had used the shady nook for cooking and tool-making for over 9,000 years. During the excavations, she also uncovered the mysterious cache of stones.
The collection, which included translucent quartz, pyrite, magnetic rocks and bladed tools, was likely used in shamanic rituals because of how tightly the stones were packed together, Dickau told LiveScience. Some of the rocks contained grains of the iron oxide mineral magnetite and showed magnetic properties by deflecting a compass needle. In addition, the stone types themselves don't come from the rock shelter but were historically used in shamanic rituals throughout the region.
The stones came from a distant, gold-rich region of Panama called the Central Cordillera up to 3,000 years before mining of the precious metal began, said study co-author and consulting geologist Stewart Redwood in a statement.
"However, there are no gold artifacts in the rock shelter, and there's no evidence that the stones were collected in the course of gold prospecting as the age of the cache pre-dates the earliest known gold artifacts from Panama by more than 2,000 years," Redwood said in a statement.
The shaman who once used these rocks probably belonged to an indigenous culture that lived off maize, manioc and wild tubers. But the story of the rocks themselves may remain an enigma.
"We will never be entirely sure how the ancient people used the stones in the past," Dickau wrote.
Read more at Discovery News
Jan 15, 2013
Chimps Have a Sense of Fairness
Humans aren't the only ones who cry "no fair." In a classic test of fairness called the ultimatum game, apes will dole out an equitable share of their bananas — and when they don't, their partners will complain, a new study shows.
The findings, published today (Jan. 14) in the Proceedings of the National Academy of Sciences, suggest that humans and chimpanzees may share an evolved sense of fairness common to many cooperative species, said lead study author Darby Proctor, a primatologist at Emory University.
"If you're involved in some cooperative act you need to be sure you're engaging in something that's beneficial to you," Proctor told LiveScience. "Comparing your rewards with others' seems like it would be really, really important."
Fair and square
In a classic economic trial called the ultimatum game, people are given $100 and can give some fraction of it to an anonymous partner they'll never see again. The recipients can reject the offer if they don't like it, in which case both people get nothing.
Rationally, the "smart" response would be to take any offer, no matter how low, but participants routinely reject offers lower than $10 or $20, said Manfred Milinski, an evolutionary biologist at the Max Planck Institute for Evolutionary Biology in Germany, who was not involved in the study. Most people offer around $40 to their partners and in some countries, people offer more than half of the money to partners, Milinski told LiveScience.
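The strategic tension is easy to reproduce in a toy simulation. In the sketch below, each responder rejects any offer beneath a personal fairness threshold; the thresholds and offers are illustrative assumptions, not parameters from the studies discussed here.

import random

# Toy ultimatum game: a proposer splits a $100 pot, and the responder rejects
# any offer below a randomly drawn fairness threshold, leaving both with zero.
# All numbers are illustrative assumptions.

POT = 100

def play_round(offer, rejection_threshold):
    # Return (proposer_payoff, responder_payoff) for one round
    if offer < rejection_threshold:
        return 0, 0  # rejected: both players walk away with nothing
    return POT - offer, offer

random.seed(1)
for offer in (5, 20, 40, 50):
    # Responders vary: draw each round's threshold between $0 and $40
    results = [play_round(offer, random.uniform(0, 40)) for _ in range(10_000)]
    proposer_avg = sum(r[0] for r in results) / len(results)
    print(f"offer ${offer}: proposer averages ${proposer_avg:.2f}")

Against responders like these, lowball offers are rejected so often that even a purely self-interested proposer earns the most by offering about $40, close to what people actually offer.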
Selfish apes
But past studies of the ultimatum game in chimpanzees (played with raisins) had suggested our closest living relatives were "rational maximizers" who would accept even the stingiest offering without getting ruffled. They accepted zero-raisin offers without so much as a squawk. That suggested their main goal — getting more tasty raisins — overrode any meager sense of fairness they may have had.
Those studies, however, instantly started a new round of the game if the apes accepted, but made them wait a full minute after rejecting an offer — raising the possibility that the apes simply learned that accepting quickly brought more raisins than rejecting low-ball offers did.
Chimps and children
In the new study, the team trained the primates to dole out tokens that stood for bananas: one token symbolized an equal split, while the other meant an unfair deal that benefited the first chimp.
No chimp recipients rejected unfair offers, but they did occasionally hiss, spit or shout at unequal distributions. One recipient even spit a mouthful of water at its partner, Proctor said.
At first the chimps were stingy, but very quickly, they switched to offering equitable splits in the ultimatum game.
To test the method, the researchers had 3- to 5-year-old children participate in a similar experiment using stickers instead of bananas. The little ones started out greedy but quickly offered the tokens for fairer distributions of stickers. And those who got a raw deal complained.
Read more at Discovery News
Do Scientists Fear the Paranormal?
The question has been asked for decades: why haven’t psychic powers been proven yet? Psychics have been studied for decades, both in and out of the laboratory, yet the scientific community (and the public at large) remains unconvinced.
In a recent book, “Science & Psychic Phenomena: The Fall of the House of Skeptics,” author Chris Carter insists that the reason psychic powers have not been proven is that scientists are unaware of the research or refuse to take it seriously, because “Clearly many scientists find the claims of parapsychology disturbing.”
This is a common charge leveled against skeptics and scientists: that they refuse to acknowledge the existence of paranormal phenomena (psychic abilities, ghosts, etc.) because it would somehow challenge or “disturb” their worldview.
Skeptics and scientists, they say, are deeply personally and professionally invested in defending the scientific status quo, and cannot psychologically tolerate the idea that they could be wrong. This results in a closed-minded refusal to accept, or even seriously examine, the evidence.
But is this really true? Do scientists ignore and dismiss claims and evidence that challenge dominant scientific ideas? Let’s examine some recent examples.
Psychic Powers
A study published in 2011 in a scientific journal claimed to have found strong evidence for the existence of psychic powers such as ESP. The paper, written by Cornell professor Daryl J. Bem, was published in The Journal of Personality and Social Psychology and quickly made headlines around the world for its implication: that psychic powers had been scientifically proven.
Bem’s claim of evidence for ESP wasn’t ridiculed or ignored; instead it was taken seriously and tested by scientific researchers.
Replication is of course the hallmark of valid scientific research: if the findings are true and accurate, other researchers should be able to replicate them. Otherwise the results may simply be due to normal and expected statistical variations and errors. If other experimenters cannot get the same result using the same techniques, it’s usually a sign that the original study was flawed in one or more ways.
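A quick simulation shows why failed replications are so diagnostic. Here each "study" is a 100-trial coin-guessing task with no real effect; the sample size and significance cutoff are illustrative choices, not details of the actual ESP experiments.

import random

# With no real effect, roughly 1 experiment in 20 still clears a p < 0.05 bar
# by chance, and such flukes "replicate" only at that same chance rate.

random.seed(42)

def null_experiment(n=100):
    # Coin-guessing with no psi: does it look 'significant' anyway?
    hits = sum(random.random() < 0.5 for _ in range(n))
    return hits >= 59  # roughly the one-sided 5% cutoff for 100 fair trials

runs = 10_000
first = [null_experiment() for _ in range(runs)]
print(f"'significant' on the first try: {sum(first) / runs:.1%}")  # ~5%

# Of the lucky "hits," how many also succeed on a second, identical run?
replications = [null_experiment() for hit in first if hit]
print(f"flukes that also replicate: {sum(replications) / max(len(replications), 1):.1%}")

A lucky hit that vanishes on careful repetition is the signature of statistical noise rather than a real effect, which is why replication attempts matter more than any single striking result.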
A team of researchers collaborated to accurately replicate Bem’s final experiment, and found no evidence for any psychic powers. Their results were published in the journal PLoS ONE. Bem — explicitly contradicting Carter’s suggestion that skeptics set out to discredit his work or refused to look at it — acknowledged that the findings did not support his claims and wrote that the researchers had “made a competent, good-faith effort to replicate the results of one of my experiments on precognition.”
The following year a second group of scientists also tried to replicate Bem’s ESP experiments, and once again found no evidence for psychic power. The article, “Correcting the Past: Failures to Replicate Psi,” was published in The Journal of Personality and Social Psychology and is available on the web page of the Social Science Research Network.
Einstein’s Mistake?
In September 2011, news shot around the world that Italian physicists had measured particles traveling faster than light. The neutrinos in the experiment beat light by only a tiny margin — arriving 60 nanoseconds earlier than light would have — but the result, if validated, would violate the fundamental laws of physics.
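To put that number in perspective, here is the back-of-envelope arithmetic, assuming the experiment's roughly 730-kilometer baseline between CERN and the Gran Sasso laboratory (a published figure for that experiment, not one taken from this article).

# How big a violation would 60 nanoseconds be over a 730 km flight path?

C = 299_792_458.0     # speed of light in vacuum, m/s
BASELINE = 730_000.0  # approximate CERN-to-Gran Sasso distance, metres
EARLY = 60e-9         # neutrinos reportedly arrived 60 ns sooner than light

light_time = BASELINE / C              # ~2.44 milliseconds
excess = EARLY / (light_time - EARLY)  # fractional speed excess (v - c) / c
print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"implied (v - c)/c: {excess:.2e}")  # ~2.5e-5

Sixty nanoseconds against a 2.4-millisecond flight works out to about 25 parts per million over the speed of light: minuscule in everyday terms, yet far larger than the experiment's claimed timing precision, which is why the result demanded such scrutiny.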
Questions swirled: Would the findings hold up under repeated experiments? Could this team have proven Einstein wrong about the speed of light?
What was the reaction from the scientific community to the news of this fundamentals-of-physics-challenging finding? They didn’t ignore the results, hoping the inconvenient truth would go away; they didn’t brand the scientists liars or hoaxers; they didn’t shout, “Burn the witch, this is heresy and cannot be true!”
Instead, they did what all scientists do when confronted with such anomalous evidence: they took a closer look at the experiment to make sure the results were valid, and tried to replicate the research. It later turned out that the anomaly was caused by at least two measurement errors, possibly including a loose cable: the experiment was flawed.
The scientists were not skeptical because accepting that Einstein was wrong about something would lead to a nervous breakdown, or because their whole worldview would crumble beneath them, or because they would have to accept that science doesn’t know everything.
The reason scientists were skeptical is that the new study contradicted all previous experiments. That’s what good science does: when you do a study or experiment, especially one whose results conflict with earlier conclusions, you examine it closely and question it before accepting the results.
In science, those who disprove dominant theories are rewarded, not punished. Disproving one of Einstein’s best-known predictions (or proving the existence of psychic powers) would earn the dissenting scientists a place in the history books, if not a Nobel Prize.
The same pattern exists in other areas of the unexplained. For example, many scientists have worked on analyzing alleged hair from mysterious animals such as Bigfoot and the Chupacabra. Researchers from Oxford University spent part of last year collecting samples of alleged Bigfoot hair for possible genetic identification; geneticist Bryan Sykes conducted DNA analysis and plans to publish his results in a peer-reviewed scientific journal soon.
Read more at Discovery News
'Death Star' Superweapon Idea Not New
More than 34,000 people signed an online petition calling on the Obama administration to build the “Star Wars” superweapon to spur job growth and bolster national defense. With tongue firmly in cheek, the White House replied last Friday that a Death Star would cost too much to build and that the President “does not support blowing up planets.”
As silly as this sounds, the idea of mega-weapons in space is nothing new.
As the U.S. scrambled to catch up with the Soviets at the launch of the Space Race in the late 1950s, the Air Force considered dropping an atomic bomb on the moon to display U.S. superiority. At the same time, the U.S. Army looked into the feasibility of building an $8 billion lunar outpost to “protect potential United States interests on the moon” and conduct surveillance of Earth.
The military also considered placing nuclear missiles on the moon as a sort of doomsday weapon. It would allow for a second strike on the USSR should the U.S. be decimated by a Soviet ICBM surprise first strike.
The 1968 film classic 2001: A Space Odyssey showed orbiting nuclear bombs (seen at top) in its establishing scenes (though they were not explicitly identified as space weapons). In one version of the Arthur C. Clarke and Stanley Kubrick screenplay, the alien-reincarnated “Star Child” defiantly explodes the space arsenal. Kubrick dropped this from the film’s ending because it was too similar to the ending of his 1964 dark comedy classic, Dr. Strangelove, which imagines a “doomsday machine” nuking all of Earth.
Ironically, a year earlier nearly 100 countries had signed the Outer Space Treaty, which bans the stationing of weapons of mass destruction beyond Earth. (In 1958 the U.S. detonated a couple of low-yield tactical atomic bombs at high altitude over the South Atlantic.) The treaty also prohibits the militarization of celestial bodies — so long, Starship Troopers. This could become problematic if we ever considered launching a super-nuke to deflect an earthbound asteroid.
In the early 1980s President Ronald Reagan envisioned a multi-layered defensive shield called the Strategic Defense Initiative (SDI) that was almost like the imaginary force fields of sci-fi stories. The weapons envisioned were straight out of the film Star Wars. Among the proposed arsenal were particle beam weapons and orbiting gamma-ray lasers that would shoot Soviet ICBMs out of the sky as effectively as the arcade game Missile Command. The laser would be powered by detonating a nuclear bomb in Earth orbit.
Let’s imagine for a moment that an Evil Empire-type super-civilization wanted to essentially sterilize a planet. They might want to colonize the planet but not deal with its indigenous life forms, which could be a Jurassic Park of pretty vicious predators. The planet could be wiped clean for colonization without building a megabucks death-ray space battle station. The aliens could simply tap a huge amount of kinetic energy by retargeting an asteroid to smash into the planet and obliterate the surface biosphere. After waiting a few years for the dust to settle, the conquering civilization would move in with their own genetically engineered Noah’s Ark of life forms.
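The energy bookkeeping is what makes the asteroid option so economical, as a rough sketch shows. The asteroid's size, density and impact speed below are illustrative assumptions, not figures from the article.

import math

# Kinetic energy delivered by a redirected asteroid: KE = 1/2 m v^2

radius = 500.0    # m (a 1-km-wide asteroid)
density = 3000.0  # kg/m^3, typical of stony asteroids
speed = 20_000.0  # m/s, a plausible impact velocity

mass = (4.0 / 3.0) * math.pi * radius**3 * density  # ~1.6e12 kg
energy_j = 0.5 * mass * speed**2
megatons = energy_j / 4.184e15  # 1 megaton of TNT = 4.184e15 J

print(f"impact energy: {energy_j:.2e} J (~{megatons:,.0f} megatons of TNT)")

A single kilometer-scale impactor delivers on the order of 3 x 10^20 joules, tens of thousands of megatons, dwarfing any arsenal ever built, and the attacker pays only for the nudge that changes the asteroid's course.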
Ironically, the secret U.S. Vela satellites launched in the 1960s to monitor for gamma rays from rogue above-ground atomic bomb tests (banned under the 1963 Partial Test Ban Treaty signed with the Soviet Union) picked up, on a daily basis, intense bursts from the universe’s natural death stars. Years later it was determined that these explosions — hypernovae, imploding stars that unleash far more energy than supernovae — focus devastating beams of gamma rays, seen by Vela as gamma-ray bursts.
Read more at Discovery News
Incoming ISON to be Dazzling Daytime Comet?
Later this year, the world could be in for a once-in-a-century astronomical treat: Comet ISON may become as bright as a full moon and be a daytime comet. Yes, on a clear day, you should be able to go outside and see ISON hanging in a blue abyss (just as Comet McNaught did in 2007, pictured above).
The comet, designated C/2012 S1, was discovered last year by the Russian International Scientific Optical Network (ISON — hence the comet’s name) and it quickly became apparent that it could be the “Comet of the Century.” If it lives up to the hype, we’ll be in for a very exciting nighttime and daytime show this November. However, astronomers urge caution: comets don’t always behave as expected.
It appears that ISON is a pristine comet freshly ejected from the Oort Cloud (a hypothetical population of comets that surrounds the solar system about one light-year from the sun), so it could be a pretty robust object packed with primordial ice and dust created during the solar system’s formative years. If this is the case, it could survive its death-defying journey past the sun, creating a wonderful tail of ice, gas and dust as it does so.
As pointed out by NASA’s Tony Phillips at Spaceweather.com:
“Comet ISON is a sungrazer. On Nov. 28, 2013, it will fly through the sun’s outer atmosphere only 1.2 million km from the stellar surface below. If the comet survives the encounter, it could emerge glowing as brightly as the Moon, visible near the sun in the blue daylight sky. The comet’s dusty tail stretching into the night would create a worldwide sensation.”
Or, it might be a dud. But it’s good to be prepared for something awesome.
Read more at Discovery News
Jan 14, 2013
Building Electronics from the Ground Up
There's hardly a moment in modern life that doesn't involve electronic devices, whether they're guiding you to a destination by GPS or deciding which incoming messages merit a beep, ring or vibration. But our expectation that the next shopping season will inevitably offer an upgrade to more-powerful gadgets largely depends on size -- namely, the ability of the industry to shrink transistors so that more can fit on ever-tinier chip surfaces.
Engineers have been up to the task of electronics miniaturization for decades now, and the principle that the computer industry will be able to do it on a regular schedule -- as codified in Moore's Law -- won't come into doubt any time soon, thanks to researchers like the University of South Carolina's Chuanbing Tang.
Tang is a leader in constructing minuscule structures from the bottom up, rather than the top down. Modern electronics are primarily fabricated by the latter method: the smooth surface of a starting material -- say, a wafer of silicon -- is etched through micro- or nanolithography to establish a pattern on it. The top-down method might involve a prefabricated template, such as a photomask, to establish the pattern. But the approach is becoming more and more challenging, because reducing the size of the features on the requisite templates is getting extremely expensive as engineers work their way further down the nanoscale. "Going from 500 to sub-30 nanometers is cost prohibitive for large-scale production," said Tang, an assistant professor in the department of chemistry and biochemistry in USC's College of Arts and Sciences.
As a chemist, Tang uses a bottom-up approach: he works with the individual molecules that go onto a surface, coaxing them to self-arrange into the patterns needed. One established method of doing this involves block copolymers, in which a polymer chain is made up of two or more sections of different polymerized monomers.
If the different block sections are properly designed, the blocks will self-aggregate when placed on a surface, and the aggregation can be harnessed to create desirable patterns on the nanoscale without the need for any templates. Di-block copolymers of poly(ethylene oxide) and polystyrene, for example, have been used to construct highly ordered arrays of perpendicular cylinders of nanoscale materials. Solvent evaporation, or annealing, of these polymers on surfaces exerts an external directional field that can enhance the patterning process and create nearly defect-free arrays.
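For a sense of the length scales involved, the period of such self-assembled patterns can be estimated from the standard strong-segregation scaling for block copolymers, d ≈ a·χ^(1/6)·N^(2/3). The sketch below is purely illustrative; none of the parameter values come from Tang's paper.

```python
# Back-of-the-envelope domain spacing for a microphase-separating block
# copolymer, using the strong-segregation scaling d ~ a * chi**(1/6) * N**(2/3).
# All parameter values are illustrative assumptions, not data from the study.
a_nm = 0.6    # statistical segment length in nm (assumed)
chi = 0.08    # Flory-Huggins interaction parameter (assumed)
N = 500       # total degree of polymerization (assumed)

d_nm = a_nm * chi ** (1 / 6) * N ** (2 / 3)
print(f"Estimated domain period: {d_nm:.0f} nm")
# With these assumptions d is roughly 25 nm -- the sub-30 nm regime that is
# so costly to reach with top-down lithography.
```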
Tang's laboratory just published a paper for the special "Emerging Investigators 2013" issue of the journal Chemical Communications that takes this method to a new level. Working together with graduate student Christopher Hardy, Tang led a team that fabricated nanoparticles of pure, crystalline iron oxide with controlled size and spacing on silicon wafers by covalently incorporating a ferrocene moiety into a tri-block copolymer.
Incorporating metals into nanoscale designs is crucial for fabricating electronic devices, and Tang's method is a step forward for the field. Because ferrocene is covalently bonded to the block copolymer, there is no need for a complexation step to add a metal-containing compound to the surface -- a burdensome requirement of most previous methods. Moreover, their technique is a step beyond related polymer systems that contain covalent ferrocenylsilane linkages, in which removal of the organic components leaves behind silicon oxide as an impurity in the metal oxide.
The technique is a promising addition to the available tools for addressing the chronic need to decrease the size of electronic components. "The industry won't replace top-down methods," Tang said, "but they plan to use bottom-up together with the existing top-down methods soon."
Read more at Science Daily
New Implant Replaces Impaired Middle Ear
Functionally deaf patients can gain normal hearing with a new implant that replaces the middle ear. The unique invention from the Chalmers University of Technology has been approved for a clinical study. The first operation was performed on a patient in December 2012.
With the new hearing implant, developed at Chalmers in collaboration with Sahlgrenska University Hospital in Gothenburg, the patient has an operation to insert an implant slightly less than six centimetres long just behind the ear, under the skin and attached to the skull bone itself. The new technique uses the skull bone to transmit sound vibrations to the inner ear, so-called bone conduction.
"You hear 50 percent of your own voice through bone conduction, so you perceive this sound as quite natural," says Professor Bo Håkansson, of the Department of Signals and Systems, Chalmers.
The new implant, BCI (Bone Conduction Implant), was developed by Bo Håkansson and his team of researchers. Unlike the type of bone-conduction device used today, the new hearing implant does not need to be anchored in the skull bone using a titanium screw through the skin. The patient has no need to fear losing the screw and there is no risk of skin infections arising around the fixing.
The first operation was performed on 5 December 2012 by Måns Eeg-Olofsson, Senior Physician at Sahlgrenska University Hospital, Gothenburg, and went entirely according to plan.
"Once the implant was in place, we tested its function and everything seems to be working as intended so far. Now, the wound needs to heal for six weeks before we can turn the hearing sound processor on," says Måns Eeg-Olofsson, who has been in charge of the medical aspects of the project for the past two years.
The technique has been designed to treat mechanical hearing loss in individuals who have been affected by chronic inflammation of the outer or middle ear, or bone disease, or who have congenital malformations of the outer ear, auditory canal or middle ear. Such people often have major problems with their hearing. Normal hearing aids, which compensate for neurological problems in the inner ear, rarely work for them. On the other hand, bone-anchored devices often provide a dramatic improvement.
In addition, the new device may also help people with an impaired inner ear.
"Patients can probably have a neural impairment of down to 30-40 dB even in the cochlea. We are going to try to establish how much of an impairment can be tolerated through this clinical study," says Bo Håkansson.
If the technique works, patients have even more to gain. Earlier tests indicate that the volume may be around 5 decibels higher and the quality of sound at high frequencies will be better with BCI than with previous bone-anchored techniques.
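Five decibels may sound marginal, but the decibel scale is logarithmic. The snippet below is just standard decibel arithmetic applied to the quoted figure; the only number taken from the article is the 5 dB difference.

```python
# Convert a decibel difference into power and amplitude (sound-pressure) ratios.
def db_to_power_ratio(db: float) -> float:
    return 10 ** (db / 10)

def db_to_pressure_ratio(db: float) -> float:
    return 10 ** (db / 20)

gain_db = 5.0  # reported improvement of BCI over earlier bone-anchored devices
print(f"{gain_db} dB = {db_to_power_ratio(gain_db):.2f}x the sound power")
print(f"{gain_db} dB = {db_to_pressure_ratio(gain_db):.2f}x the sound pressure")
# About 3.16x the power and 1.78x the pressure -- a clearly audible difference.
```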
Soon it will be time to activate the first patient's implant and adapt it to the patient's hearing and wishes. Then hearing tests and checks will be performed roughly every three months until a year after the operation.
"At that point, we will end the process with a final X-ray examination and final hearing tests. If we get good early indications we will continue operating other patients during this spring already," says Måns Eeg-Olofsson.
Read more at Science Daily
Alternative Medicine Use High Among Children With Chronic Conditions
Children who regularly see specialists for chronic medical conditions are also using complementary medicine at a high rate, according to recently published research from the University of Alberta and the University of Ottawa.
About 71 per cent of pediatric patients attending various specialty clinics at the Stollery Children's Hospital in Edmonton used alternative medicine, while the rate of use at the Children's Hospital of Eastern Ontario in Ottawa was 42 per cent. Nearly 20 per cent of the families who took part in the study said they never told their physician or pharmacist about concurrently using prescription and alternative medicine.
Sunita Vohra, a researcher with the Faculty of Medicine & Dentistry at the U of A, was the lead investigator on the study, which was recently published in the peer-reviewed journal Pediatrics. Her co-investigator was W. James King from the University of Ottawa.
"The children in this study are often given prescription medicines," says Vohra, a pediatrician who works in the Department of Pediatrics and the School of Public Health at the U of A.
"And many of these children used complementary therapies at the same time or instead of taking prescription medicine. We asked families if they would like to talk about the use of alternative medicine, more than 80 per cent of them said, 'yes, please.'
"Right now, these families are getting information about alternative medicine from friends, family and the Internet, but a key place they should be getting this information from is their doctor or another member of their health-care team, who would know about possible drug interactions with prescription medicines." Vohra said the study "identified a gap in communications" in dealing with pediatric patients and their families.
"It's important to get these conversations going with every patient, especially when you consider it's not widely recognized how common it is for children with chronic illnesses to use alternative medicine," says the Alberta Innovates-Health Solutions scholar.
"We need to make sure these families are comfortable telling their specialists they are taking other therapies," she said. Right now, Vohra and her colleagues at the U of A have developed curricula for undergraduate medical students about the use of alternative medicine by pediatric patients, which is considered innovative and novel. Ensuring medical students receive information about alternative medicine is key because it arms them with more knowledge about potential interactions with prescription medicine, says Vohra.
Read more at Science Daily
Secret of Dingo's Down-Under Origin Revealed
Indians migrating to Australia more than 4,000 years ago may have introduced dingoes to the island continent, along with novel stone tools and new ways to remove toxins from edible plants, researchers say.
Australia was thought to have remained largely isolated from the rest of the world between its initial colonization about 40,000 years ago by the ancestors of aboriginal Australians and the arrival of Europeans in the late 1700s.
"Outside Africa, aboriginal Australians are the oldest continuous population in the world," said researcher Irina Pugach, a molecular anthropologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.
Still, researchers had not explored the genetic history of Australians in enough detail to address this question.
Isolated continent?
"The extent of isolation of aboriginal Australia has been debated for a long time," Pugach told LiveScience. "The Australian archaeological record documents some changes that occur in Australia around 4,000 years ago, which could have been potentially, but not necessarily, brought in from the outside."
To find out more, the researchers analyzed DNA from 344 people, including aboriginal Australians, highlanders of Papua New Guinea, Southeast Asian islanders, Indians, Nigerians, individuals of European descent living in Utah and Han Chinese from Beijing.
The scientists found a common origin for populations from Australia, New Guinea and the Mamanwa, a group from the Philippines. The researchers estimate these groups split from one another about 36,000 years ago. This supports the idea that the groups descended from an ancient southward migration out of Africa.
The researchers also detected substantial gene flow from Indian populations into Australia about 4,230 years ago. Scientists estimate this Indian genetic influence appears in about 10 percent of the aboriginal Australian populations they analyzed.
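The oddly precise date reflects how such estimates are made: genetic methods date gene flow in generations, which is then multiplied by an assumed generation time. A minimal sketch of that conversion follows; the 141-generation figure and the 30-year generation time are assumptions consistent with the reported date, not numbers quoted in this article.

```python
# Convert an admixture-dating estimate from generations to calendar years.
# The year figure depends entirely on the assumed human generation time.
generations = 141          # assumed point estimate from the genetic analysis
years_per_generation = 30  # commonly assumed human generation time

years_ago = generations * years_per_generation
print(f"Estimated admixture: {years_ago:,} years ago")  # 4,230 years ago
```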
At about the same time, the dingo, an animal that most closely resembles Indian dogs, first appears in the Australian fossil record.
In addition, at about that time, "archaeologists describe a sudden shift in stone tool technologies, with new implements known as the Small Tool Tradition appearing for the first time" in Australia, Pugach said. These represented stone tools that were smaller and more finely worked than before, she explained.
Moreover, at about that time, new techniques for altering dangerous plants to make them edible also appeared in Australia. For instance, while plants known as cycads can be toxic, soaking or fermenting their kernels can remove the poisons.
"Aboriginal Australians use the fruits of these plants as an important food source despite them being highly toxic," Pugach said.
The researchers caution that the migration "may not have actually been from India, but from some population somewhere else that subsequently no longer exists, but whose closest living relatives — at least, among populations we examined — are Dravidian-speakers from southern India," Pugach said.
The researchers also emphasized they are not claiming some Indian group members are the ancestors of aboriginal Australians. "The migration happened about 4,000 years ago. By that time, people had lived in Australia for more than 40,000 years," Pugach said.
Read more at Discovery News
Jan 13, 2013
Fish cannot feel pain, say scientists
A study has found that, even when caught on a hook and wriggling, fish are impervious to pain because they do not have the necessary brain power.
The research, conducted by a team of seven scientists and published in the journal Fish and Fisheries, concluded that the fish’s reaction to being hooked is in fact just an unconscious reaction, rather than a response to pain.
Fish have already been found to have “nociceptors” - sensory receptors that in humans respond to potentially damaging stimuli by sending signals to the brain, allowing them to feel pain.
However, the latest research concluded that the mere presence of the receptors did not mean the animals felt pain, but only that they triggered an unconscious reaction to the threat.
The latest findings contradict previous research, which suggested that these nociceptors enabled the creatures to feel reflexive and cognitive pain.
In an earlier study done by the University of Edinburgh, rainbow trout were injected in the lips with an acid solution.
Researchers pointed to the fish’s behavioural changes, such as them rubbing their mouths on the gravel, and moving in a rocking motion similar to that seen in stressed mammals, as evidence of pain.
However, the new research, which reviewed a series of studies conducted over the years, discovered that only an extremely small number of “C fibres” - a type of nociceptor responsible for pain - can be found in trout and other fish.
Professor James Rose from the University of Wyoming in the US, who led the study, also found that the fish brain does not contain the highly developed neocortex needed to feel pain, so fish do not experience pain in any meaningful way, as humans do.
He concluded that fish are able to experience unconscious, basic instinctive responses, but that these did not lead to conscious feelings or pain.
The trout’s reactions in the earlier study were therefore not ones of discomfort, as they lack the capacity to experience it, Prof Rose found.
The new research also referred to a study done on fish which were caught with a hook and then released.
The fish resumed feeding and normal activity immediately or within minutes and went on to show good long-term survival, which indicated they had not experienced pain.
Professor Robert Arlinghaus, one of the team’s researchers, said the presumption that fish feel pain has hindered scientists for decades and has stigmatised anglers.
“I think that fish welfare is very important, but I also think that fishing and science is too,” he said.
“There are many conflicts surrounding the issue of pain and whether fish can feel it, and often anglers are portrayed as cruel sadists. It's an unnecessary social conflict.”
Mark Lloyd, head of the Anglers’ Trust, said: “This debate about fish feeling pain has always been a red herring, so to speak.
“Anglers care passionately about the protection of fish stocks and do more than any other group to protect and improve freshwater and marine environments.
Read more at The Telegraph
First Land Animals Shuffled Like Seals
The world’s first 3D reconstruction of a four-legged animal’s backbone reveals that the first four-legged animals on land moved like seals.
One of the studied animals was a fierce-looking, toothy beast known as Ichthyostega. It lived 374 to 359 million years ago and was a transitional species between fish and terrestrial animals.
Ichthyostega is thought to have navigated through shallow water in swamps, probably lured by food.
Now we know that it probably moved by dragging itself across flat ground, using its front legs to crutch itself forward, much like a mudskipper or seal.
The findings are published in the journal Nature.
Lead author Stephanie Pierce, of the University of Cambridge’s Department of Zoology, was quoted as saying in a press release, “The results of this study force us to re-write the textbook on backbone evolution in the earliest limbed animals.”
Pierce and her colleagues bombarded 360-million-year-old fossils of early four-legged animals with high-energy synchrotron radiation. The resulting high-resolution X-ray images allowed the researchers to reconstruct the backbones of the extinct animals in exceptional detail.
Today, all four-limbed animals (technically known as tetrapods) possess a backbone. It is formed from many bony segments, called vertebrae, all connected in a row from head to tail. Unlike the backbone of living tetrapods like humans, in which each vertebra is composed of only one bone, early tetrapods had vertebrae made up of multiple parts.
Pierce said, “For more than 100 years, early tetrapods were thought to have vertebrae composed of three sets of bones — one bone in front, one on top, and a pair behind. But, by peering inside the fossils using synchrotron X-rays, we have discovered that this traditional view literally got it back-to-front.”
The team of scientists discovered that what was thought to be the first bone in the series, known as the intercentrum, is actually the last.
“By understanding how each of the bones fit together we can begin to explore the mobility of the spine and test how it may have transferred forces between the limbs during the early stages of land movement,” Pierce said.
Aside from deducing that Ichthyostega and its ilk moved like seals, the researchers also discovered a string of bones that extended down the middle of its chest.
Read more at Discovery News