A team of Massachusetts General Hospital (MGH) investigators working to create "protocells" -- primitive synthetic cells consisting of a nucleic acid strand encased within a membrane-bound compartment -- has accomplished an important step toward its goal. In the November 28 issue of Science, the investigators describe a solution to what could have been a critical problem -- the potential incompatibility between a chemical requirement of RNA copying and the stability of the protocell membrane.
"For the first time, we've been able to do nonenzymatic RNA copying inside fatty acid vesicles," says Jack Szostak, PhD, of the MGH Department of Molecular Biology and the Center for Computational and Integrative Biology. "We've found a solution to a longstanding problem in the origin of cellular life: RNA copying chemistry requires the presence of the magnesium ion Mg2+, but high Mg2+ levels can break down the simple, fatty acid membranes that probably surrounded the first living cells."
Szostak's team has been working for more than a decade to understand how the first cells developed from a "primordial soup" of chemicals into living organisms capable of copying their genetic material and reproducing. Part of that work is developing a model protocell made from components probably present in the primitive Earth environment. They have made significant progress towards developing cell membranes from the kind of fatty acids that would have been abundant and naturally form themselves into bubble-like vesicles when concentrated in water. But the genetic component -- an RNA or DNA molecule capable of replication -- has been missing.
Since the primitive environment in which such cells could have developed would not have had the kind of complex enzymes that modern cells use in replicating nucleic acids, Szostak and lead author Katarzyna Adamala, PhD, then a graduate student in Szostak's lab, investigated whether simple chemical processes could drive nonenzymatic replication of RNA, which many scientists believe was the first nucleic acid to develop.
To address the incompatibility between the need for Mg2+ to drive assembly of the RNA molecule and the ion's ability to degrade fatty acid membranes, they tested several chelators -- small molecules that bind tightly to metal ions -- for their ability to protect fatty acid vesicles from the potentially destabilizing effects of Mg2+. Citrate and several other chelators were found to be effective in protecting the membranes of fatty acid vesicles from disruption.
To test whether the presence of the tested chelators would allow Mg2+-catalyzed RNA assembly, the investigators placed molecules consisting of short primer RNA strands bound to longer RNA templates into fatty acid vesicles. The unbound, single-strand portion of the template consisted of a sequence of cytosine (C) nucleotides. In the presence of Mg2+ and one of four chelating molecules, one of which was citrate, the researchers then added activated G, the nucleotide that base-pairs with C in nucleic acids.
The desired reaction -- diffusion of G nucleotides through the vesicle membrane to complete a double-stranded RNA molecule by binding to the C nucleotides of the template -- proceeded fastest in the presence of citrate. In fact, two of the other tested chelators completely prevented extension of the RNA primer.
"While other molecules can protect membranes against the magnesium ion," Szostak explains, "they don't let RNA chemistry go on. We think that citrate is able both to protect membranes and to allow RNA copying to proceed by covering only one face to the magnesium ion, protecting the membrane while allowing RNA chemistry to work." He and Adamala also found that continually refreshing the activated guanine nucleotide solution by flushing out broken down molecules and adding fresh nucleotides improved the efficiency of RNA replication.
Read more at Science Daily
Nov 30, 2013
Crows Are No Bird-Brains: Neurobiologists Investigate Neuronal Basis of Crows' Intelligence
Scientists have long suspected that corvids -- the family of birds including ravens, crows and magpies -- are highly intelligent. Now, Tübingen neurobiologists Lena Veit and Professor Andreas Nieder have demonstrated how the brains of crows produce intelligent behavior when the birds have to make strategic decisions.
Their results are published in the latest edition of Nature Communications.
Crows are no bird-brains. Behavioral biologists have even called them "feathered primates" because the birds make and use tools, are able to remember large numbers of feeding sites, and plan their social behavior according to what other members of their group do. This high level of intelligence might seem surprising because birds' brains are constructed in a fundamentally different way from those of mammals, including primates -- which are usually used to investigate these behaviors.
The Tübingen researchers are the first to investigate the brain physiology of crows' intelligent behavior. They trained crows to carry out memory tests on a computer. The crows were shown an image and had to remember it. Shortly afterwards, they had to select one of two test images on a touchscreen with their beaks, according to behavioral rules that switched. One of the test images was identical to the first image, the other different. Sometimes the rule of the game was to select the same image, and sometimes it was to select the different one. The crows were able to carry out both tasks and to switch between them as appropriate. That demonstrates a high level of concentration and mental flexibility which few animal species can manage -- and which is an effort even for humans.
The crows were quickly able to carry out these tasks even when given new sets of images. The researchers observed neuronal activity in the nidopallium caudolaterale, a brain region associated with the highest levels of cognition in birds. One group of nerve cells responded exclusively when the crows had to choose the same image -- while another group of cells always responded when they were operating on the "different image" rule. By observing this cell activity, the researchers were often able to predict which rule the crow was following even before it made its choice.
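To make the decoding idea concrete, here is a minimal toy sketch in Python (not the authors' analysis; the firing rates and noise are invented): if one population of cells fires more strongly under the "same" rule and another under the "different" rule, comparing the two populations' activity on a given trial is enough to guess which rule is in play.

```python
# Toy illustration only -- invented firing rates, not data from the study.
# Idea: decode the rule a crow is following from which cell population
# ("same"-rule-preferring vs. "different"-rule-preferring) is more active.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(rule):
    """Return noisy mean firing rates (Hz) of the two populations for one trial."""
    same_pop_rate = 12.0 if rule == "same" else 6.0
    diff_pop_rate = 6.0 if rule == "same" else 12.0
    return rng.normal(same_pop_rate, 2.0), rng.normal(diff_pop_rate, 2.0)

def decode_rule(same_pop, diff_pop):
    """Predict the rule from whichever population fires more."""
    return "same" if same_pop > diff_pop else "different"

trials = ["same", "different"] * 50
correct = sum(decode_rule(*simulate_trial(rule)) == rule for rule in trials)
print(f"Decoded {correct}/{len(trials)} trials correctly")  # well above chance
```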
The study published in Nature Communications provides valuable insights into the parallel evolution of intelligent behavior. "Many functions are realized differently in birds because a long evolutionary history separates us from these direct descendants of the dinosaurs," says Lena Veit. "This means that bird brains can show us an alternative solution for how intelligent behavior can be produced with a different anatomy."
Read more at Science Daily
Nov 29, 2013
Controversy Over Use of Roman Ingots to Investigate Dark Matter, Neutrinos
The properties of these lead bricks recovered from ancient shipwrecks are ideal for experiments in particle physics. Scientists from the CDMS dark matter detection project in Minnesota (USA) and from the CUORE neutrino observatory at the Gran Sasso Laboratory in Italy have begun to use them, but archaeologists have raised the alarm about the destruction and trade of cultural heritage that lies behind this practice.
Two thousand years ago, a Roman vessel carrying ingots of lead extracted from the Sierra of Cartagena sank in the waters off the coast of Sardinia. Since 2011, more than a hundred of these ingots have been used to build the 'Cryogenic Underground Observatory for Rare Events' (CUORE), an advanced detector of neutrinos -- almost weightless subatomic particles -- at the Gran Sasso National Laboratory in Italy.
In the 18th century, another ship loaded with lead ingots was wrecked on the French coast. A company of treasure hunters retrieved this material and, despite problems with French authorities, managed to sell it to the Cryogenic Dark Matter Search (CDMS) team. This detector located in a mine in Minnesota (USA) looks for signs of the enigmatic dark matter, which is believed to constitute a quarter of the universe.
These two examples have served as reference for the discussion that two researchers have opened between archaeologists, worried by the destruction of underwater cultural heritage, and particle physicists, pleased to have found a unique material for research on neutrinos and dark matter.
As Elena Perez-Alvaro from the University of Birmingham explains: "Roman lead is essential for conducting these experiments because it offers purity and such low levels of radioactivity -- all the more so the longer it has spent underwater -- which current methods for producing this metal cannot reach."
"Lead extracted today is naturally contaminated with the isotope Pb-210, which prevents it from being used as shielding for particle detectors," adds physicist Fernando González Zalba from the University of Cambridge.
The two researchers have published a study in the journal 'Rosetta', also commented upon this month in 'Science', which poses a dilemma: Should we sacrifice part of our cultural heritage in order to achieve greater knowledge of the universe and the origin of humankind? Should we yield part of our past to discover more about our future?
"Underwater archaeologists see destruction of heritage as a loss of our past, our history, whilst physicists support basic research to look for answers we do not yet have," remarks Perez-Alvaro, "although this has led to situations in which, for example, private companies like Odyssey trade lead recovered from sunken ships." This is the company that had to return the treasure of the frigate Nuestra Señora de las Mercedes to Spain.
Dialogue between underwater archaeologists and particle physicists
The underwater archaeologist and the physicist are encouraging dialogue between the two communities, as well as legislation that regulates these kinds of activities without limiting them exclusively to archaeologists and that also includes scientists. "Recovery for knowledge in both fields, and not merely for commercial reasons," the scientists stress.
The jury is still out. In the case of the CUORE detector, for example, in principle only the lead from the least well-preserved Roman ingots is used, although their inscriptions are cut off and preserved. Some archaeologists also suggest that there are other pieces of valuable metal, such as anchor stocks, rings or fishing tackle, whose "sacrifice for science" should be weighed. The problem is that they are protected by UNESCO's 2001 Convention on the protection of underwater cultural heritage if they have been under water for more than 100 years, and by the 2003 Convention for safeguarding intangible cultural heritage.
Regarding the habitual use that Romans made of these ingots, Pérez Álvaro points out that there are many theories, "but they were generally used as water-resistant material for pipes, water tanks or roofs, but also in the manufacture of arms and ammunition."
Read more at Science Daily
The More the Better: Polyandry in Salamanders
Researchers at Bielefeld University and the Technische Universität Braunschweig are the first to confirm the benefit of multiple paternities for a vertebrate under completely natural conditions. Together with their team, Dr. Barbara Caspers and Dr. Sebastian Steinfartz have shown that female fire salamanders mate with several males under natural conditions (so-called polyandry). This grants them fitness-relevant benefits by increasing their number of offspring. The results of their study are being published in the Early View version of Molecular Ecology.
For a long time, it was assumed that females in the animal world are monogamous, that is, they mate with only one male. Males, in contrast, can increase their reproductive success by mating with several females. Nowadays, however, polyandry is assumed to be the rule in the animal world and monogamy to be more of an exception.
Currently, researchers from completely different disciplines are interested in why females mate with several males and what benefits this brings for them or their offspring. There is a particular interest in studies that permit insights and conclusions on these processes under completely natural conditions. As a rule, however, such studies are hard to implement without disturbing the individuals or studying their mating behaviour completely or partially in the laboratory.
Researchers at Bielefeld University's Chair of Animal Behaviour in the group of Dr. Barbara Caspers, Dr. Sebastian Steinfartz, research group leader at the TU Braunschweig, and Professor Michael Kopp from Aix-Marseille University have studied the influence of mating behaviour on the number of offspring in the black and yellow fire salamander (Salamandra salamandra), a widespread European tailed amphibian species. Over the course of the spring season, a female salamander can deposit up to 50 living larvae in small streams and ponds. For their study, the scientists captured female salamanders in a forest as they made their way to deposit their larvae, and took the pregnant females to the laboratory, where they deposited their larvae. Every day, the scientists collected the new-born larvae, took a small tissue sample, and returned both mothers and their larvae to the forest. By subjecting these tissue samples to genetic paternity analyses, the researchers could precisely reconstruct how many males each female had mated with and whether or not the sperm of the different males had been mixed -- female salamanders can store the sperm of different males for several months in internal receptive organs called spermathecae. The female's eggs are fertilized with the stored sperm only if environmental conditions are optimal, and once the eggs have developed into fully formed larvae, these are deposited in streams and ponds.
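As a rough illustration of the logic behind such a paternity analysis (a deliberately simplified toy with made-up alleles, not the study's actual method), consider a single genetic marker: each larva carries one allele from its mother and one from a father, and a single father can contribute at most two distinct paternal alleles, so three or more distinct paternal alleles within one clutch imply more than one sire.

```python
# Toy sketch -- made-up genotypes, a single marker, no genotyping error.
mother = {"A", "B"}  # the mother's two alleles at this marker
offspring_genotypes = [("A", "C"), ("B", "D"), ("A", "E"), ("B", "C")]

paternal_alleles = set()
for allele1, allele2 in offspring_genotypes:
    # any allele the mother does not carry must have come from a father
    paternal_alleles |= {allele1, allele2} - mother

# one father contributes at most two distinct alleles => ceil(n / 2) sires minimum
min_fathers = -(-len(paternal_alleles) // 2)
print(sorted(paternal_alleles), "=> at least", min_fathers, "father(s)")
# ['C', 'D', 'E'] => at least 2 father(s)
```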
Read more at Science Daily
Architects of Nanoworld Behind the Screens
New types of building blocks for electronics will be the future; that much is clear to researcher Nauta. "It is already possible to give a molecule the functionality of a transistor. But compare that to the huge complexity of current chips, with eight or nine 'highways' above each other, connecting all elements. How to reach this using these new molecules? There's still a huge gap there. Silicon research and industry has shown an immense effort, and that's still going on for some time." Nauta stresses that current chips like microprocessors already contain billions of transistors with sizes in the nanometer domain. Microelectronics has already become nanoelectronics. "They are so small, around 22 nanometers, that you can count the individual atoms."
Not self-evident at all
In his lecture 'The invisible circuit', Nauta asks his audience to imagine a world without chips. "If we wouldn't have chips in our daily life, suddenly a lot of things like social media and internet aren't possible anymore. That would really mean 'back to the fifties'." That is: almost back to the time the very first transistor was invented, in 1947. Still, we take it for granted whenever there is a new generation of smartphones, tablets or other gadgets in the shops. "This is not self-evident at all. This requires top research and huge investments in new chip factories." Nauta's own group, one of the world's leading groups in chip design, has delivered several inventions that found their way into smartphones and TVs. A well-known example is their noise-cancelling circuit, which surprised the semiconductor world at first but is a textbook example by now.
Cognitive radio
Nauta specializes in circuits translating the analogue outside world into the digital inside of the smartphone: the part of the circuitry taking care of transmitting and receiving, or 'radio'. Complexity is growing rapidly there: with more and more mobile standards, a good quality has to be guaranteed with low noise, and if possible, using less energy. And all that on the tiniest possible silicon surface. "For each standard, you would need a separate filter. But that would take far too much surface. We now develop a filter that is tunable and can be integrated on-chip. That's a development the whole world is looking at, because integration of conventional filters is almost impossible. Within five years, it will be commercially available." This new type of filter would also be the candidate for new radio techniques employing every free part of the frequency spectrum, so-called cognitive radio.
Read more at Science Daily
Glow-in-the-Dark Shark Makes Cookies Out of Flesh
Marathon swimmer Mike Spalding was 10 hours into an epic 33-mile voyage between Maui and the Big Island when his escort boat lost sight of him. Being the middle of the night and all, the captain was forced to fire up his lights to reestablish contact with the kayaker at Spalding’s side.
This, ironically enough, is the absolute last resort when you get lost swimming in the darkness. With the kayak’s light now blazing as well, the creatures of the nighttime sea began to take notice. Squid amassed around Spalding as he slogged on, forming a slowly moving bait ball. He took a hit from one, and then another and another. After the fourth bump, Spalding felt a sharp pain in his chest.
It was the first bite, albeit just a nibble. The 62-year-old (that’s not a typo) Spalding broke for the kayak.
“As I was eggbeatering to get into the kayak with my legs perpendicular to the surface of the water, I felt this sharp hit on my leg,” he told WIRED. “It wasn’t painful, but it was like you got punched or something. And so I ran my fingers down my calf and I felt this hole.
“It’s a bigass hole.”
Spalding had earned the dubious title of first living human confirmed to have been attacked by a cookiecutter shark, which gored a 3-inch-wide crater in his leg. At no more than two feet long, this diminutive terror nevertheless packs a set of teeth that are bigger, relative to body size, than those of any other shark, according to George Burgess, an ichthyologist and director of the Florida Program for Shark Research at the Florida Museum of Natural History. It’s a glow-in-the-dark evolutionary marvel of the open ocean that takes on beasts hundreds of times its size, including submarines. And it almost always wins.
The cookiecutter shark doesn’t set out to kill its prey. Instead, it makes sneak attacks, using its fleshy lips to suction like a Nerf dart onto a whale or tuna or pretty much any other large critter. Its saw-like teeth easily tear through flesh as it “rotates its body in a 360-degree fashion around and around and around like a drill,” said Burgess. “And as it’s digging in, it gradually closes its jaw little by little, thereby making the crater wound as opposed to just a cylinder.”
Burgess, who authored a paper on Spalding’s attack, likens the action to using a melon baller, and in so doing has forever ruined melon for me. It all happens in no more than a second or two, and just like that, the cookiecutter is gone. It’s an ambush predator of the highest order.
The creature’s lower teeth are exceedingly sharp, even for a shark, and thus excavate very clean wounds. They’ve evolved to fuse together into what looks like a white picket fence of grave bodily injury, but like any other shark, the cookiecutter will lose these in its day-to-day gougings, perhaps as often as every two weeks, according to Burgess. But waiting in that jaw are row after row of beautiful new chompers.
In addition to such handy hunting tools as electroreceptors and a good sense of smell that come with being a shark, the cookiecutter has enormous eyes and a green bioluminescent glow, suggesting the creature is primarily a nighttime hunter.
This bioluminescence comes from light-emitting organs in its skin called photophores, Burgess says. “The control over showing or not showing the light is done by use of little cells called melanophores that are sort of masking organs,” he said. “And so they use these dark-colored cells to go over the top of the light or move away from the light.” In this way the cookiecutter can flash like a strobe, perhaps to communicate with its own species.
Interestingly, though, whereas the deep-sea anglerfish attracts smaller prey with its glowing lure, the cookiecutter may use a riskier strategy: luring big predators that could easily swallow it whole, only to juke around at the last second and torpedo their flanks.
This behavior might seem … really, really dumb. But animals obviously don’t evolve to die prematurely. Genes that aid in survival get passed along. Those that don’t will end up dissolving in the stomachs of predators. So if the cookiecutter is indeed playing chicken of the sea, it’s been doing it right for a real long time. Just call it the ocean’s James Dean.
“I’ve never seen a cookiecutter in the stomach of any other animal,” said Burgess. “Which means that they’re pretty wily, and they must be pretty fast and reclusive at the same time.”
Burgess reckons that like a lot of marine creatures, the cookiecutter patrols near the surface in the evening, then retreats deeper during the day, a behavior called diel vertical migration, diel being a fancy 10-dollar word meaning 24 hours. Its hunting tactics have never been observed, apart from poor Spalding observing the hole in his leg, but Burgess notes that the cookiecutter is often associated with bioluminescent squid, which also flash flamboyantly.
“We think that probably they simply stay close to these other critters,” he said, “and wait for predators to come in who are cognizant of the flashing pattern usually meaning a good meal at the other side. And when the animal, the larger fish, comes in to grab the prey items, out from the abyss or the darkness comes the cookiecutter to make a sneak grab and bite on the side of the animals.”
It’s also widely believed that the cookiecutter may be essentially cloaking itself to mimic a smaller prey item. Seen from below, the glow of its underside matches the light filtering down from the surface, so the cookiecutter would seem to disappear — save for a non-luminescent band around its neck that makes it a dead giveaway to predators.
But despite that popular view, the collar does in fact glow, Burgess says. And he suggests that by flashing, the band may help draw would-be predators to the “business end” of the shark. Plus, attracting big nasty teeth specifically from below is probably a silly idea, as the lady from the Jaws poster would no doubt tell you if she hadn’t been eaten by a shark or was even real in the first place.
Read more at Wired Science
Nov 28, 2013
Pushing Limits of Light Microscopy
A team of researchers from the IMP Vienna, together with collaborators from the Vienna University of Technology, has established a new microscopy technique that greatly enhances resolution in the third dimension. In a simple set-up, the scientists translated the position information of fluorescent markers into color information. By removing the need to scan through the depth of a sample, they were able to generate precise 3D information in the time it takes to acquire a 2D image. The general principle of this innovative approach can be applied more broadly and is published online in the PNAS Early Edition this week.
For many disciplines in the natural sciences it is desirable to get highly enlarged, precise pictures of specimens such as cells. Depending on the purpose of an experiment and the preparation of the sample, different microscopy-techniques are used to analyze small structures or objects. However, a drawback of most current approaches is the need to scan the depth of a sample in order to get a 3D picture. Especially for optically sensitive or highly dynamic (fast moving) samples this often represents a serious problem. Katrin Heinze and Kareem Elsayad, lead authors of the PNAS publication, managed to circumvent this difficulty during their work at the IMP.
Precise images of sensitive and dynamic samples
Elsayad, who was part of a research team led by Katrin Heinze at the IMP, used fluorescence microscopy for his experimental set-up. The principle of fluorescence microscopy -- now a common tool in biomedical research labs -- is as follows: Fluorescent dyes, so-called fluorophores, are turned on by light of a certain wavelength and, as a consequence, "spontaneously" emit light of a different wavelength. Elsayad designed a thin biocompatible nanostructure consisting of a quartz microscope slide with a thin silver film and a dielectric layer. The IMP-scientist then labeled the sample -- fixed or live cells -- with a fluorescent dye and placed it above the coated slide.
Elsayad explains in simple terms how the biological imaging then took place: "The measured emission spectrum of a fluorescent dye above this substrate depends on its distance from the substrate. In other words, the position information of a collection of fluorophores is translated into color information, and this is what we were measuring in the end." With this elaborate method, only one measurement is needed to determine the fluorophore distribution above the substrate, with a resolution -- in the direction away from the substrate -- down to 10 nanometers (1/100,000th of a millimeter). "I believe that the beauty of our method is its simplicity. No elaborate set-up or machines are required to achieve this high resolution. Once the sample is placed on the substrate, which can be mass produced, a confocal microscope with spectral detection is all that is needed," Heinze points out.
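To illustrate the principle in the simplest possible terms (the calibration numbers below are invented for illustration, not values from the paper): once the relationship between a fluorophore's height above the coated slide and its measured emission peak has been calibrated, a measured peak wavelength can be inverted into an axial position.

```python
# Illustrative sketch only -- hypothetical calibration curve, not real data.
# Idea: emission color encodes distance from the substrate, so inverting a
# calibration curve turns a measured spectral peak into an axial position.
import numpy as np

# Hypothetical calibration: height above the substrate (nm) vs. emission peak (nm)
calib_height_nm = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
calib_peak_nm = np.array([520.0, 524.0, 529.0, 535.0, 542.0, 550.0])

def height_from_peak(measured_peak_nm):
    """Invert the (monotonic) calibration curve by linear interpolation."""
    return float(np.interp(measured_peak_nm, calib_peak_nm, calib_height_nm))

print(f"{height_from_peak(531.0):.1f} nm above the substrate")  # ~46.7 nm (toy numbers)
```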
Simple method, big potential
The novel technique has already been successfully tested by Elsayad and Heinze. Together with collaborators at the Institute of Molecular Biotechnology (IMBA) of the Austrian Academy of Sciences, they used it to study paxillin, a protein important for cell adhesion, in living cells. The scientists also visualized the 3D dynamics of filopodia, small cell protrusions made of bundled actin-filaments that move very quickly and have a high turnover-rate during cell migration.
Read more at Science Daily
Fruit Flies With Better Sex Lives Live Longer
Sex may in fact be one of the secrets to good health, youth and a longer life -- at least for fruit flies -- suggests a new University of Michigan study that appears in the journal Science.
Male fruit flies that perceived the sexual pheromones of their female counterparts -- without the opportunity to mate -- experienced rapid decreases in fat stores and resistance to starvation, along with increased stress. The sexually frustrated flies lived shorter lives.
Mating, on the other hand, partially reversed the negative effects on health and aging.
"Our findings give us a better understanding about how sensory perception and physiological state are integrated in the brain to affect long-term health and lifespan," says senior author Scott D. Pletcher, Ph.D, professor in the Department of Molecular and Integrative Physiology at the U-M Medical School and research professor at the U-M Geriatrics Center.
"The cutting-edge genetics and neurobiology used in this research suggests to us that for fruit flies at least, it may not be a myth that sexual frustration is a health issue. Expecting sex without any sexual reward was detrimental to their health and cut their lives short."
U-M scientists used sensory manipulations to give the common male fruit fly, Drosophila melanogaster, the perception that they were in a sexually rich environment by exposing them to genetically engineered males that produced female pheromones. They were also able to manipulate the specific neurons responsible for pheromone perception as well as parts of the brain linked to sexual reward (secreting a group of compounds associated with anxiety and sex drive).
"These data may provide the first direct evidence that aging and physiology are influenced by how the brain processes expectations and rewards," Pletcher says. "In this case, sexual rewards specifically promoted healthy aging."
Fruit flies have been a powerful tool for studying aging because they live on average 60 days, yet many of the discoveries in flies have proven effective in longer-lived animals, such as mice.
Read more at Science Daily
Scientists Stitch Up Photosynthetic Megacomplex
When sunlight strikes a photosynthesizing organism, energy flashes between proteins just beneath its surface until it is trapped as separated electric charges. Improbable as it may seem, these tiny hits of energy eventually power the growth and movement of all plants and animals. They are literally the sparks of life.
The three clumps of protein -- a light-harvesting antenna called a phycobilisome and photosystems I and II -- look like random scrawls in illustrations but this is misleading. They are able to do their job only because they are positioned with exquisite precision.
If the distances between proteins were too great or the transfers too slow, the energy would be wasted and -- ultimately -- all entropy-defying assemblages like plants and animals would fall to dust.
But until now scientists weren't even sure the three complexes cohered as a single sun-worshipping megacomplex. Previous attempts to isolate connected complexes failed because the weak links that held them together broke and the megacomplex fell apart.
In the Nov. 29 issue of Science, scientists at Washington University in St. Louis report on a new technique that finally allows the megacomplex to be plucked out entire and examined as a functioning whole.
Like a seamstress basting together the pieces of a dress, the scientists chemically linked the proteins in the megacomplex. Stabilized by the stitches, or crosslinks, it was isolated in its complete, fully functional form and subjected to the full armamentarium of their state-of-the-art labs, including tandem mass spectrometers and ultra-fast lasers.
The work was done at PARC (Photosynthetic Antenna Research Center), an Energy Frontier Research Center funded by the Department of Energy that is focused on the scientific groundwork needed to maximize photosynthetic efficiency in living organisms and to design biohybrid or synthetic ones to drive chemical processes or generate photocurrent.
Robert Blankenship, PhD, PARC's director and the Lucille P. Markey Distinguished Professor of Arts & Sciences, said that one outcome of the work in the long term might be the ability to double or triple the efficiency of crop plants -- now stuck at a woeful 1 to 3 percent. "We will need such a boost to feed the 9 or 10 billion people predicted to be alive by 2050," he said.
Wizards of the lab
The scientists worked with the model organism often used to study photosynthesis in the lab, a cyanobacterium, sometimes called a blue-green alga.
Cyanobacteria are ancient organisms, known from fossils that are 3.5 billion years old, nearly as old as the oldest known rocks, and thought to be the first organisms to release oxygen into the noxious primitive atmosphere.
All photosynthesizing organisms have light-harvesting antennas made up of many molecules that absorb light and transfer the excitation energy to reaction centers, where it is stored as charge separation.
In free-living cyanobacteria the antenna, called a phycobilisome, consists of splayed rods made up of disks of proteins containing intensely colored bilin pigments. The antenna sits directly above one reaction center, Photosystem II, and kitty corner to the other, Photosystem I.
PARC research scientist Haijun Liu, PhD, proposed stitching together the megacomplex and then engineered a strain of cyanobacteria that has a tag on the bottom of Photosystem II.
The mutant cells were treated with reagents that stitched together the complexes, then broken open, and the tag used to pull out Photosystem II and anything attached to it.
To figure out how the proteins were interconnected, the scientists repeatedly cut or shattered the proteins, analyzing them by mass spectrometry down to the level of the individual amino acid.
The amino acid sequences derived in this way were then compared to known sequences within the megacomplex, and the location of cross links between different complexes helped establish the overall structure of the megacomplex.
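As a highly simplified illustration of that matching step (a toy sketch with made-up sequences, not the PARC analysis pipeline): a peptide fragment recovered from the mass spectrometer can be located within candidate protein sequences, and the positions of matched, cross-linked fragments then constrain how the complexes sit relative to one another.

```python
# Toy sketch -- invented sequences and peptide, not data from the study.
# Locate a recovered peptide fragment within known protein sequences.
proteins = {
    "ProteinX": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "ProteinY": "MSERLGLAKQRQWWNDPTAYIAK",
}
peptide = "AKQRQ"  # hypothetical fragment identified by mass spectrometry

for name, sequence in proteins.items():
    position = sequence.find(peptide)
    if position != -1:
        print(f"{peptide} matches {name} at residues {position + 1}-{position + len(peptide)}")
```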
"It's a very complicated data analysis routine that literally generates tens of thousands of peptides that took a team of students and postdoctoral associates overseen by Hao Zhang and Michael Gross, months to analyze," Blankenship said. Hao Zhang, PhD, is a PARC research Scientist and Michael Gross, PhD, is professor of chemistry and Director of the Mass Spectrometry Resource in Arts & Sciences.
In the meantime, research scientist Dariusz Niedzwiedzki, PhD, in the PARC Ultrafast Laser Facility was exciting the phycobilisome in intact megacomplexes and tracking the energy through the complex by the faint glow of fluorescing molecules.
Typical energy transfers within the complex take place in a picosecond (a trillionth of a second), way too fast for humans to perceive. If one picosecond were a second, a second would be 31,700 years.
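The analogy is easy to verify with a quick calculation (ours, not the paper's):

```python
# Check: if one picosecond were stretched to one second, how long would
# one real second become?
seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 seconds
stretch_factor = 1.0 / 1e-12            # 1 s / 1 ps = 1e12
stretched_second_in_years = stretch_factor / seconds_per_year
print(f"{stretched_second_in_years:,.0f} years")  # ~31,700 years
```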
"PARC is one of the only places in the world that has available this sophisticated combination of experience and advanced techniques," said Blankenship, "and to solve this problem we were brought all of our expertise to bear.
"The work provides a new level of understanding of the organization of these photosynthetic membranes and that is something that a lot of people have tried to understand for a long time," he said.
Read more at Science Daily
Fast, Furious, Refined: Smaller Black Holes Can Eat Plenty
Observations of a black hole powering an energetic X-ray source in a galaxy some 22 million light-years away could change our thinking about how some black holes consume matter. The findings indicate that this particular black hole, thought to be the engine behind the X-ray source's high-energy light output, is unexpectedly lightweight, and, despite the generous amount of dust and gas being fed to it by a massive stellar companion, it swallows this material in a surprisingly orderly fashion.
"It has elegant manners," says research team member Stephen Justham, of the National Astronomical Observatories of China, Chinese Academy of Sciences. Such lightweights, he explains, must devour matter at close to their theoretical limits of consumption to sustain the kind of energy output observed. "We thought that when small black holes were pushed to these limits, they would not be able to maintain such refined ways of consuming matter," Justham explains. "We expected them to display more complicated behavior when eating so quickly. Apparently we were wrong."
A Surprising Twist
X-ray sources give off high- and low-energy X-rays, which astronomers call hard and soft X-rays, respectively. In what might seem like a contradiction, larger black holes tend to produce more soft X-rays, while smaller black holes tend to produce relatively more hard X-rays. This source, called M101 ULX-1, is dominated by soft X-rays, so researchers expected to find a larger black hole as its energy source.
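The article does not spell out why more massive black holes should produce softer X-rays; the usual textbook argument is that, for a standard accretion disk shining at a fixed fraction of its limit, the characteristic disk temperature falls roughly as the fourth root of the mass. A toy Python sketch of that scaling (the ~1 keV anchor for a 10-solar-mass black hole is an assumed round number, not a value from the paper):

# Rough thin-disk scaling: T_disk ~ M**(-1/4) at a fixed fraction of the
# Eddington rate, so heavier black holes have cooler disks and softer X-rays.
def disk_temperature_keV(mass_solar, anchor_keV=1.0, anchor_mass=10.0):
    # anchor_keV for anchor_mass is an assumed round number for illustration
    return anchor_keV * (mass_solar / anchor_mass) ** -0.25

for mass in (10, 100, 1_000, 10_000):
    print(f"{mass:>6} solar masses -> ~{disk_temperature_keV(mass):.2f} keV")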
In a surprising twist, however, the new observations made at the Gemini Observatory, and published in the November 28th issue of the journal Nature, indicate that M101 ULX-1's black hole is on the small side, and astrophysicists don't understand why.
In theoretical models of how matter falls into black holes and radiates energy, the soft X-rays come primarily from the accretion disk, while hard X-rays are typically generated by a high-energy "corona" around the disk. The models show that the corona's emission strength should increase as the rate of accretion gets closer to the theoretical limit of consumption. Interactions between the disk and corona are also expected to become more complex.
Based on the size of the black hole found in this work, the region around M101 ULX-1 should, theoretically, be dominated by hard X-rays and appear structurally more complicated. However, that isn't the case.
"Theories have been suggested which allow such low-mass black holes to eat this quickly and shine this brightly in X-rays. But those mechanisms leave signatures in the emitted X-ray spectrum, which this system does not display," says lead author Jifeng Liu, of the National Astronomical Observatories of China, Chinese Academy of Sciences. "Somehow this black hole, with a mass only 20-30 times the mass of our Sun, is able to eat at a rate near to its theoretical maximum while remaining relatively placid. It's amazing. Theory now needs to somehow explain what's going on."
An Intermediate-mass Black Hole Dilemma
The discovery also delivers a blow to astronomers hoping to find conclusive evidence for an "intermediate-mass" black hole in M101 ULX-1. Such black holes would have masses roughly between 100 and 1000 times the mass of the Sun, placing them between normal stellar-mass black holes and the monstrous supermassive black holes that reside in the centers of galaxies. So far these objects have been frustratingly elusive, with potential candidates but no broadly-accepted detection. Ultra-luminous X-ray sources (ULXs) have been one of the main proposed hiding places for intermediate-mass black holes, and M101 ULX-1 was one of the most promising-looking contenders.
"Astronomers hoping to study these objects will now have to focus on other locations for which indirect evidence of this class of black holes has been suggested, either in the even brighter 'hyper-luminous' X-ray sources or inside some dense clusters of stars," explains research team member Joel Bregman of the University of Michigan.
"Many scientists thought it was just a matter of time until we had evidence for an intermediate-mass black hole in M101 ULX-1," says Liu. But the new Gemini findings both take away some of that hope to solve an old puzzle and adds the fresh mystery of how this stellar-mass black hole can consume matter so calmly.
To determine the mass of the black hole, the researchers used the Gemini Multi-Object Spectrograph at the Gemini North telescope on Mauna Kea, Hawai'i to measure the motion of the companion. This star, which feeds matter to the black hole, is of the Wolf-Rayet variety. Such stars emit strong stellar winds, from which the black hole can then draw in material. This study also revealed that the black hole in M101 ULX-1 can capture more material from that stellar wind than astronomers had anticipated.
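The standard tool for turning a companion star's measured motion into a black hole mass is the binary mass function, which sets a hard lower limit on the unseen object's mass. The sketch below uses hypothetical placeholder numbers, not the orbital values reported in the Nature paper:

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg

# f(M) = K**3 * P / (2*pi*G): a strict lower limit on the unseen mass.
period_s = 8.2 * 24 * 3600    # hypothetical orbital period, seconds
K_ms = 140e3                  # hypothetical radial-velocity semi-amplitude, m/s

mass_function = K_ms**3 * period_s / (2 * math.pi * G)
print(f"mass function ~ {mass_function / M_SUN:.1f} solar masses (lower limit)")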
Read more at Science Daily
"It has elegant manners," says research team member Stephen Justham, of the National Astronomical Observatories of China, Chinese Academy of Sciences. Such lightweights, he explains, must devour matter at close to their theoretical limits of consumption to sustain the kind of energy output observed. "We thought that when small black holes were pushed to these limits, they would not be able to maintain such refined ways of consuming matter," Justham explains. "We expected them to display more complicated behavior when eating so quickly. Apparently we were wrong."
A Surprising Twist
X-ray sources give off high- and low-energy X-rays, which astronomers call hard and soft X-rays, respectively. In what might seem like a contradiction, larger black holes tend to produce more soft X-rays, while smaller black holes tend to produce relatively more hard X-rays. This source, called M101 ULX-1, is dominated by soft X-rays, so researchers expected to find a larger black hole as its energy source.
In a surprising twist, however, the new observations made at the Gemini Observatory, and published in the November 28th issue of the journal Nature, indicate that M101 ULX-1's black hole is on the small side, and astrophysicists don't understand why.
In theoretical models of how matter falls into black holes and radiates energy, the soft X-rays come primarily from the accretion disk (see illustration), while hard X-rays are typically generated by a high-energy "corona" around the disk. The models show that the corona's emission strength should increase as the rate of accretion gets closer to the theoretical limit of consumption. Interactions between the disk and corona are also expected to become more complex.
Based on the size of the black hole found in this work, the region around M101-ULX-1 should, theoretically, be dominated by hard X-rays and appear structurally more complicated. However, that isn't the case.
"Theories have been suggested which allow such low-mass black holes to eat this quickly and shine this brightly in X-rays. But those mechanisms leave signatures in the emitted X-ray spectrum, which this system does not display," says lead author Jifeng Liu, of the National Astronomical Observatories of China, Chinese Academy of Sciences. "Somehow this black hole, with a mass only 20-30 times the mass of our Sun, is able to eat at a rate near to its theoretical maximum while remaining relatively placid. It's amazing. Theory now needs to somehow explain what's going on."
An Intermediate-mass Black Hole Dilemma
The discovery also delivers a blow to astronomers hoping to find conclusive evidence for an "intermediate-mass" black hole in M101 ULX-1. Such black holes would have masses roughly between 100 and 1000 times the mass of the Sun, placing them between normal stellar-mass black holes and the monstrous supermassive black holes that reside in the centers of galaxies. So far these objects have been frustratingly elusive, with potential candidates but no broadly-accepted detection. Ultra-luminous X-ray sources (ULXs) have been one of the main proposed hiding places for intermediate-mass black holes, and M101 ULX-1 was one of the most promising-looking contenders.
"Astronomers hoping to study these objects will now have to focus on other locations for which indirect evidence of this class of black holes has been suggested, either in the even brighter 'hyper-luminous' X-ray sources or inside some dense clusters of stars," explains research team member Joel Bregman of the University of Michigan.
"Many scientists thought it was just a matter of time until we had evidence for an intermediate-mass black hole in M101 ULX-1," says Liu. But the new Gemini findings both take away some of that hope to solve an old puzzle and adds the fresh mystery of how this stellar-mass black hole can consume matter so calmly.
To determine the mass of the black hole, the researchers used the Gemini Multi-Object Spectrograph at the Gemini North telescope on Mauna Kea, Hawai'i to measure the motion of the companion. This star, which feeds matter to the black hole, is of the Wolf-Rayet variety. Such stars emit strong stellar winds, from which the black hole can then draw in material. This study also revealed that the black hole in M101 ULX-1 can capture more material from that stellar wind than astronomers had anticipated.
Read more at Science Daily
Nov 27, 2013
Figure Eights and Peanut Shells: How Stars Move at the Center of the Galaxy
Two months ago astronomers created a new 3D map of stars at the centre of our Galaxy (the Milky Way), showing more clearly than ever the bulge at its core. Previous explanations suggested that the stars that form the bulge are in banana-like orbits, but a paper published this week in Monthly Notices of the Royal Astronomical Society suggests that the stars probably move in peanut-shell or figure of eight-shaped orbits instead.
The difference is important; astronomers develop theories of star motions not only to understand how the stars in our galaxy are moving today but also to learn how our galaxy formed and evolved. The Milky Way is shaped like a spiral, with a region of stars at the centre known as the "bar," because of its shape. In the middle of this region, there is a "bulge" that expands out vertically.
In the new work Alice Quillen, professor of astronomy at the University of Rochester, and her collaborators created a mathematical model of what might be happening at the centre of the Milky Way. Unlike the Solar System where most of the gravitational pull comes from the Sun and is simple to model, it is much harder to describe the gravitational field near the centre of the Galaxy, where millions of stars, vast clouds of dust, and even dark matter swirl about. In this case, Quillen and her colleagues considered the forces acting on the stars in or near the bulge.
As the stars go round in their orbits, they also move above or below the plane of the bar. When stars cross the plane they get a little push, like a child on a swing. At the resonance point, a certain distance from the centre of the bar, the timing of these pushes is such that the effect is strong enough to lift the stars higher above the plane. (It is like a child on a swing who, pushed a little at just the right moment each time, eventually swings higher.) These stars are pushed out from the edge of the bulge.
The resonance at this point means that stars undergo two vertical oscillations for every orbital period. But what is the most likely shape of the orbits in between? The researchers showed through computer simulations that peanut-shell shaped orbits are consistent with the effect of this resonance and could give rise to the observed shape of the bulge, which is also like a peanut-shell.
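A toy parametric orbit (not the authors' simulation) makes the geometry concrete: two vertical oscillations per in-plane orbit trace a figure of eight when viewed edge-on, and a family of such orbits fills out a peanut-shaped envelope.

import math

R = 1.0    # in-plane orbital radius (arbitrary units)
A = 0.4    # vertical amplitude (arbitrary units)

# Sample one full in-plane orbit; z completes two oscillations in that time,
# so the edge-on (x, z) track is a figure of eight.
for step in range(17):
    t = 2 * math.pi * step / 16
    x = R * math.cos(t)
    z = A * math.sin(2 * t)
    print(f"t = {t:5.2f}   x = {x:+.2f}   z = {z:+.2f}")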
Next month the European Space Agency will launch the Gaia spacecraft, which is designed to create a 3D map of the stars in the Milky Way and their motions. This 3D map will help astronomers better understand the composition, formation and evolution of our Galaxy.
"It is hard to look back into the past of our galaxy and know what was there, but simulations can give us clues," explained Quillen. "Using my model I saw that, over time, the resonance with the bar, which is what leads to these peculiarly shaped orbits, moves outwards. This may be what happened in our Galaxy."
"Gaia will generate huge amounts of data -- on billions of stars," said Quillen. This data will allow Quillen and her colleagues to finesse their model further. "This can lead to a better understanding of how the Milky Way might have evolved into the shape it has today."
Quillen explained that there are different models as to how the galactic bulge was formed. Astronomers are interested in finding out how much the bar has slowed down over time and whether the bulge "puffed up all at once or slowly." Understanding the distributions of speeds and directions of motion (velocities) of the stars in the bar and the bulge might help determine this evolution.
"One of the predictions of my model is that there is a sharp difference in the velocity distributions inside and outside the resonance," Quillen said. "Inside -- closer to the galactic centre -- the disk should be puffed up and the stars there would have higher vertical velocities. Gaia will measure the motions of the stars and allow us to look for variations in velocity distributions such as these."
To be able to generate a model for the orbits of stars in the bulge, Quillen needed to factor in different variables. She first needed to understand what happens at the region of the resonance, which depends on the speed of the rotating bar and the mass density of the bar.
"Before I could model the orbits, I needed the answer to what I thought was a simple question: what is the distribution of material in the inner galaxy?" Quillen said. "But this wasn't something I could just look up. Luckily my collaborator Sanjib Sharma was able to help out."
Sharma worked out how the speed of circular orbits changed with distance from the galactic centre (called the rotation curve). Using this information, Quillen could compute a mass density at the location of the resonance, which she needed for her model.
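The step from a rotation curve to a mass density rests on a standard relation: for a roughly spherical mass distribution, the circular speed at radius r fixes the mass enclosed within r. A minimal sketch with illustrative numbers (the paper's actual bar model is considerably more detailed):

import math

G = 4.30e-6     # gravitational constant in kpc * (km/s)^2 per solar mass

v_circ = 200.0  # illustrative circular speed, km/s
radius = 1.5    # illustrative radius, kpc

# M(<r) = v_circ**2 * r / G, then a mean density from the enclosed volume.
mass_enclosed = v_circ**2 * radius / G
mean_density = mass_enclosed / ((4.0 / 3.0) * math.pi * radius**3)
print(f"M(<r) ~ {mass_enclosed:.2e} solar masses")
print(f"mean density ~ {mean_density:.2e} solar masses per cubic kpc")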
Read more at Science Daily
Mysteriously Intact T. Rex Tissue Finally Explained
The controversial discovery of 68-million-year-old soft tissue from the bones of a Tyrannosaurus rex finally has a physical explanation. According to new research, iron in the dinosaur's body preserved the tissue before it could decay.
The research, headed by Mary Schweitzer, a molecular paleontologist at North Carolina State University, explains how proteins — and possibly even DNA — can survive for millions of years. Schweitzer and her colleagues first raised this question in 2005, when they found the seemingly impossible: soft tissue preserved inside the leg of an adolescent T. rex unearthed in Montana.
"What we found was unusual, because it was still soft and still transparent and still flexible," Schweitzer told LiveScience.
T. rex tissue?
The find was also controversial because scientists had thought proteins that make up soft tissue should degrade in less than 1 million years in the best of conditions. In most cases, microbes feast on a dead animal's soft tissue, destroying it within weeks. The tissue must be something else, perhaps the product of a later bacterial invasion, critics argued.
Then, in 2007, Schweitzer and her colleagues analyzed the chemistry of the T. rex proteins. They found the proteins really did come from dinosaur soft tissue. The tissue was collagen, they reported in the journal Science, and it shared similarities with bird collagen — which makes sense, as modern birds evolved from theropod dinosaurs such as T. rex.
The researchers also analyzed other fossils for the presence of soft tissue, and found it was present in about half of their samples going back to the Jurassic Period, which lasted from 145.5 million to 199.6 million years ago, Schweitzer said.
"The problem is, for 300 years, we thought, 'Well, the organics are all gone, so why should we look for something that's not going to be there?' and nobody looks," she said.
The obvious question, though, was how soft, pliable tissue could survive for millions of years. In a new study published today (Nov. 26) in the journal Proceedings of the Royal Society B: Biological Sciences, Schweitzer thinks she has the answer: Iron.
Iron lady
Iron is an element present in abundance in the body, particularly in the blood, where it is part of the protein that carries oxygen from the lungs to the tissues. Iron is also highly reactive with other molecules, so the body keeps it locked up tight, bound to molecules that prevent it from wreaking havoc on the tissues.
After death, though, iron is let free from its cage. It forms minuscule iron nanoparticles and also generates free radicals, which are highly reactive molecules thought to be involved in aging.
"The free radicals cause proteins and cell membranes to tie in knots," Schweitzer said. "They basically act like formaldehyde."
Formaldehyde, of course, preserves tissue. It works by linking up, or cross-linking, the amino acids that make up proteins, which makes those proteins more resistant to decay.
Schweitzer and her colleagues found that dinosaur soft tissue is closely associated with iron nanoparticles in both the T. rex and another soft-tissue specimen from Brachylophosaurus canadensis, a type of duck-billed dinosaur. They then tested the iron-as-preservative idea using modern ostrich blood vessels. They soaked one group of blood vessels in iron-rich liquid made of red blood cells and another group in water. The blood vessels left in water turned into a disgusting mess within days. The blood vessels soaked in red blood cells remained recognizable after sitting at room temperature for two years.
Searching for soft tissue
Dinosaurs' iron-rich blood, combined with a good environment for fossilization, may explain the amazing existence of soft tissue from the Cretaceous (a period that lasted from about 65.5 million to 145.5 million years ago) and even earlier. The specimens Schweitzer works with, including skin, show evidence of excellent preservation. The bones of these various specimens are articulated, not scattered, suggesting they were buried quickly. They're also buried in sandstone, which is porous and may wick away bacteria and reactive enzymes that would otherwise degrade the bone.
Schweitzer is set to search for more dinosaur soft tissue this summer. "I'd like to find a honking big T. rex that's completely articulated that's still in the ground, or something similar," she said. To preserve the chemistry of potential soft tissue, the specimens must not be treated with preservatives or glue, as most fossil bones are, she said. And they need to be tested quickly, as soft tissue could degrade once exposed to modern air and humidity.
Read more at Discovery News
Why Seahorses Are Shaped Like Horses
Seahorses are unique among fish for having bent necks and long-snouted heads that make them resemble horses. The overall shape of their body, including the lack of a tail fin, helps make them "one of the slowest swimmers on the planet," said Brad Gemmell, a marine biologist at the University of Texas at Austin. "They don't swim very much -- they tend to anchor themselves to surfaces like seagrass with their prehensile tails." (Prehensile tails, like those of monkeys, can grasp items.)
Gemmell and his colleagues were investigating how seahorses and other fish feed on microscopic shrimplike crustaceans known as copepods.
"Copepods are really important," Gemmell said. "They're fed on by a wide majority of marine animals during some point in their life histories -- in particular, a lot of commercially harvested fish."
Since virtually all marine animals like to eat copepods, "these crustaceans have evolved some very impressive escape behavior," Gemmell said. "They're very, very sensitive to disturbances in the water, such as those created by approaching predators."
Once copepods detect these disturbances, they can swim distances of more than 500 times their body length per second. In comparison, "a cheetah probably only runs 30 body lengths per second," Gemmell said. If the average U.S. adult male traveled 500 body lengths per second, based on their height, they would move nearly 2,000 mph (3,200 km/h).
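That scaling is easy to verify. Assuming an average height of about 1.75 metres (an assumption; the article gives no figure), 500 body lengths per second works out to just under 2,000 mph:

height_m = 1.75                      # assumed average adult height, metres
speed_m_s = 500 * height_m           # 500 body lengths per second
speed_kmh = speed_m_s * 3.6
speed_mph = speed_kmh / 1.609344
print(f"~{speed_mph:,.0f} mph  (~{speed_kmh:,.0f} km/h)")   # ~1,960 mph, ~3,150 km/h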
Unexpectedly, even though seahorses are slow swimmers, "they were very effective at capturing these very fast-swimming, highly evasive prey," Gemmell told LiveScience.
Seahorses use their arched necks as springs to pivot their heads forward and catch prey. This limits the distances at which they can seize victims to only the length of their necks, about 0.04 inches (1 millimeter). However, seahorses nevertheless could get close enough to copepods to capture them.
"We found they captured copepods more than 90 percent of the time, which is extremely effective for any sort of predator, much less with such elusive prey," Gemmell said.
To find out how these fish catch their victims, the researchers experimented with the dwarf seahorse Hippocampus zosterae, which is native to the Bahamas and the United States and is only about 1 inch (2.5 centimeters) long. They suspended these fish with copepods in water loaded with hollow glass beads about one-sixth the average diameter of a human hair. They shone lasers into this water that illuminated the beads.
By analyzing how the beads moved as seahorses preyed on copepods, the scientists could deduce how they made the water flow around them in three dimensions. They found that the water around the seahorse snout barely moves while the hunter approaches its victims, helping the seahorse to close in undetected.
Read more at Discovery News
Beautiful Ice Circle Forms in North Dakota
A rare sight to behold, this circle of ice on North Dakota’s Sheyenne River is a winter spectacle that requires two ingredients to make: freezing cold air and an eddy in a not-so-freezing river. That and patience.
The circle of ice is a “collection of ice cubes” caught in an eddy, Allen Schlag, a National Weather Service hydrologist in Bismarck, told the AP. The cold weather over the weekend froze bits of the river in other areas that then broke apart and drifted with the current until getting caught in the eddy. Snow and frost continue to collect on top of the eddy, which grows larger in a series of concentric rings.
Retired engineer George Loegering came across the spinning disk of ice while out hunting with relatives and calculated the ice circle’s diameter to be about 55 feet. “At first I thought, no way! It was surreal,” he told the AP. Then he looked up the phenomenon online and found it is a relatively rare event.
Read more and see video at Discovery News
Shush! World's Oldest Resting Scorpion
This is a drawing of the scorpion that left behind the only fossil body impression ever found.
The age of the trace fossil, as body impressions and tracks are called, takes scorpions way back to the early Permian. That confirms that scorpions have survived a lot of gigantic mass extinction events between then and now. What’s more, seeing how the carbon dioxide levels in the Permian atmosphere were probably three times what they are today on Earth, it’s not likely anthropogenic climate change will stop these hardy arthropods either.
“We gave it the name Alacranichnus, which means scorpion trace (alacran is Spanish for scorpion and ichnos is Greek for trace),” said Spencer Lucas, curator at the New Mexico Museum of Natural History and Science (NMMNHS). The discovery was just published in the journal Ichnos: An International Journal for Plant and Animal Traces by scientists at the museum in Albuquerque.
The scorpion fossil.
Scorpions are the oldest known arachnids, the researchers explain, with some fossils of probably aquatic scorpions dating back to the Silurian Period, about 430 million years ago. Later, in the Carboniferous (359 million to 299 million years ago), scorpions took to land. But then the fossils peter out.
Read more at Discovery News
Nov 26, 2013
Mechanism Behind Blood Stem Cells' Longevity Discovered
The blood stem cells that live in bone marrow are at the top of a complex family tree. Such stem cells split and divide down various pathways that ultimately produce red cells, white cells and platelets. These "daughter" cells must be produced at a rate of about one million per second to constantly replenish the body's blood supply.
Researchers have long wondered what allows these stem cells to persist for decades, when their progeny last for days, weeks or months before they need to be replaced. Now, a study from the University of Pennsylvania has uncovered one of the mechanisms that allow these stem cells to keep dividing in perpetuity.
The researchers found that a form of the motor protein that allows muscles to contract helps these cells divide asymmetrically, so that one part remains a stem cell while the other becomes a daughter cell. Their findings could provide new insight into blood cancers, such as leukemia, and eventually lead to ways of growing transfusable blood cells in a lab.
The research was conducted by Dennis Discher, professor in the Department of Chemical and Biomolecular Engineering in the School of Engineering and Applied Science, and members of his lab: lead author Jae-Won Shin, Amnon Buxboim, Kyle R. Spinler, Joe Swift, Dave P. Dingal, Irena L. Ivanovska and Florian Rehfeldt. They collaborated with researchers at the Université de Strasbourg, Lawrence Berkeley National Laboratory and University of California, San Francisco.
It was published in the journal Cell Stem Cell.
"Your blood cells are constantly getting worn out and replaced," Discher said. "We want to understand how the stem cells responsible for making these cells can last for decades without being exhausted."
The standing theory to explain these cells' near immortality is asymmetric division, though the cellular mechanism that enables this kind of division was unknown. Looking to identify the forces responsible for this phenomenon, the researchers analyzed all of the genes expressed in the stem cells and their more rapidly dividing progeny. Proteins that only went to one side of the dividing cell, the researchers thought, might play a role in partitioning other key factors responsible for keeping one side a stem cell.
They saw different expression patterns of the motor protein myosin II, which has two forms, A and B. Myosin II is the protein that enables the body's muscles to contract, but in nonmuscle cells it is also used in cell division, where it helps cleave and close off the cell membrane as the cell splits apart.
"We found that the stem cell has both types of myosin," Shin said, "whereas the final red and white blood cells only had the A form. We inferred that the B form was key to splitting the stem cells in an asymmetric way that kept the B form only in the stem cell."
With these myosins as their top candidate, the researchers labeled key proteins in dividing stem cells with different colors and put them under the microscope.
"We could see that the myosin IIB goes to one side of the dividing cell, which causes it to cleave differently," Discher said. "It's like a tug of war, and the side with the B pulls harder and stays a stem cell."
The researchers then performed in vivo tests using mice that had human stem cells injected into their bone marrow. By genetically inhibiting myosin IIB production, the researchers saw the stem cells and their early progeny proliferating while the number of downstream blood cells dropped.
"Because the stem cells were not able to divide asymmetrically, they just kept making more of themselves in the marrow at the expense of the differentiated cells," Discher said.
The researchers also used a drug that temporarily blocked both A and B forms of myosin II, finding that it increased the prevalence of non-dividing stem cells, blocking the more rapid division of progeny.
Read more at Science Daily
Mach 1000 Shock Wave Lights Supernova Remnant
When a star explodes as a supernova, it shines brightly for a few weeks or months before fading away. Yet the material blasted outward from the explosion still glows hundreds or thousands of years later, forming a picturesque supernova remnant. What powers such long-lived brilliance?
In the case of Tycho's supernova remnant, astronomers have discovered that a reverse shock wave racing inward at Mach 1000 (1000 times the speed of sound) is heating the remnant and causing it to emit X-ray light.
"We wouldn't be able to study ancient supernova remnants without a reverse shock to light them up," says Hiroya Yamaguchi, who conducted this research at the Harvard-Smithsonian Center for Astrophysics (CfA).
Tycho's supernova was witnessed by astronomer Tycho Brahe in 1572. The appearance of this "new star" stunned those who thought the heavens were constant and unchanging. At its brightest, the supernova rivaled Venus before fading from sight a year later.
Modern astronomers know that the event Tycho and others observed was a Type Ia supernova, caused by the explosion of a white dwarf star. The explosion spewed elements like silicon and iron into space at speeds of more than 11 million miles per hour (5,000 km/s).
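The two figures quoted above are the same speed in different units, as a one-line conversion confirms:

speed_km_s = 5000
speed_mph = speed_km_s * 3600 / 1.609344     # km/s -> km/h -> mph
print(f"~{speed_mph / 1e6:.1f} million mph")  # ~11.2 million mph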
When that ejecta rammed into surrounding interstellar gas, it created a shock wave -- the equivalent of a cosmic "sonic boom." That shock wave continues to move outward today at about Mach 300. The interaction also created a violent "backwash" -- a reverse shock wave that speeds inward at Mach 1000.
"It's like the wave of brake lights that marches up a line of traffic after a fender-bender on a busy highway," explains CfA co-author Randall Smith.
The reverse shock wave heats gases inside the supernova remnant and causes them to fluoresce. The process is similar to what lights household fluorescent bulbs, except that the supernova remnant glows in X-rays rather than visible light. The reverse shock wave is what allows us to see supernova remnants and study them, hundreds of years after the supernova occurred.
"Thanks to the reverse shock, Tycho's supernova keeps on giving," says Smith.
Read more at Science Daily
Is a 17th Century Wreck Buried in Lake Michigan?
One of the Great Lakes’ most enduring puzzles, the fate of the 17th century vessel the Griffin, continues to be a mystery.
Experts are debating whether a wooden slab found protruding from the bed of Lake Michigan is wreckage from the long-sought vessel or just a pound net stake — an underwater stationary fishing device used in the Great Lakes in the 19th and early 20th centuries.
A 10.5-foot section of the timber was found by shipwreck hunter Steve Libert in 2001 in a remote area of Lake Michigan near Poverty Island.
Libert, the president of Great Lakes Exploration Group, who has spent three decades and more than $1 million on the hunt for the elusive ship, noticed the timber was protruding from the lake bed.
After 12 years of research and legal tussles, the U.S. government acknowledged France’s claim to the wreck. French archaeologists last June finally dislodged the nearly 20-foot beam and dug beneath it. The results were disappointing.
“Sadly the survey could not confirm the presence of a homogeneous wreck under the thick layer of sediment and zebra mussels which covers the bottom of Lake Michigan,” Libert said.
Long considered the Holy Grail of Great Lakes shipwrecks, the Griffin was built by the legendary French explorer Rene Robert Cavelier de la Salle, who journeyed across the Great Lakes and down the Mississippi in a quest for what he erroneously believed to be a passageway to China and Japan.
The ship vanished just a few months after her launch with a crew of six men and a cargo of furs.
According to Libert, the Griffin sailed between Green Bay and the Jesuit mission of Michilimackinac on the north bank of the Straits of Mackinac which join lakes Michigan and Huron.
“Searches shortly after her disappearance found nothing and for the following three centuries the circumstances and location of the loss of the Griffon were a mystery,” Libert said.
Theories about her fate included the ship sinking in a fierce storm, being captured and burned by Native Americans or scuttled by a mutinous crew.
Mystery also surrounds the retrieved beam. Neither carbon-14 dating nor CT scans provided definitive answers about its original purpose.
The tests indicated the wood could have originated anywhere from 1670 to 1950, opening many possibilities. Analysis of ring patterns also proved incomplete, as only 29 out of the 50 rings necessary for the dating were visible in the scans.
“I’m looking at the evidence, and the evidence is pointing to a net stake,” Dean Anderson, Michigan’s state archaeologist, told the Associated Press.
“I’m not seeing any evidence of a vessel element here,” he added.
Libert hotly disagrees and claims the timber, which features four treenails, is a bowsprit — a spur or pole that extends from a vessel’s stem.
“It cannot be a pound net stake,” he told Discovery News. “Who would have put it there?”
“We know from the French archeologists that the bowsprit is at least 200 years old due to the erosion marks on this piece. It wasn’t until the 1880s that the fishing method was used by white settlers in Lake Michigan,” he said.
Read more at Discovery News
Where Are All the 'Inbetweener' Black Holes?
There are small black holes and supermassive black holes, but where are all the "inbetweener" black holes?
This question has been foxing astrophysicists for years; the apparent dearth of medium-sized or "intermediate" black holes -- between 100 and 1 million solar masses -- doesn't make logical sense. But when it comes to black holes, you can often check logic at the door.
One would assume that to make a supermassive black hole, there must be some growth mechanism that causes small black holes, say around 100 solar masses, to pack on the pounds and grow to the gravitational behemoths that occupy the centers of most known galaxies.
Black holes at the lower end of the mass spectrum are stellar-mass black holes and, as their name suggests, they formed from the collapse of massive stars in supernovae. The most massive black holes, found in the cores of galaxies -- often reaching tens of millions to billions of solar masses -- are less well understood, and astronomers are still trying to work out how they grew to be so massive.
But the scarcity of intermediate-mass black holes poses a quandary: Is there some black hole growth mechanism that is stranger than we can possibly imagine? Or are current observatories simply not sensitive to the emissions from these middleweight objects?
"Exactly how intermediate-sized black holes would form remains an open issue," said Dominic Walton of the California Institute of Technology (Caltech), Pasadena. "Some theories suggest they could form in rich, dense clusters of stars through repeated mergers, but there are a lot of questions left to be answered."
In an effort to get to know the nature of intermediate mass black holes, a collaboration of international observatories "went to town" on two ultraluminous X-ray sources (or ULXs) that were thought to contain black holes in the 100 to 10,000 solar mass range.
ULXs are likely composed of a star and a nearby black hole. The black hole does what it does best, sucking material from the unfortunate binary partner, generating radiation in the process. These compact sources of X-rays have led astronomers to believe that the feeding black holes in ULXs fall into the intermediate-mass category.
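The logic behind that inference is the Eddington limit: a black hole accreting gas cannot steadily shine brighter than roughly 1.26 x 10^38 erg per second per solar mass without radiation pressure blowing its fuel away, so a very bright ULX implies a fairly massive hole. Here is a minimal back-of-the-envelope sketch of that argument in Python, using illustrative luminosities rather than values from the new studies:

    # Back-of-the-envelope Eddington argument (illustrative, not from the studies).
    # A black hole accreting hydrogen can't steadily exceed ~1.26e38 erg/s per solar mass.
    L_EDD_PER_MSUN = 1.26e38  # erg/s

    def minimum_mass_solar(luminosity_erg_s):
        """Smallest mass (in solar masses) that could power this luminosity
        without exceeding the Eddington limit."""
        return luminosity_erg_s / L_EDD_PER_MSUN

    for L in (1e39, 1e40, 1e41):  # typical ULX X-ray luminosities
        print(f"L = {L:.0e} erg/s  ->  M >= {minimum_mass_solar(L):,.0f} solar masses")

A ULX shining at 10^40 erg/s would need roughly 80 solar masses if it obeys the limit, which is why such sources were pegged as intermediate-mass candidates; the results described below suggest some may instead be smaller black holes feeding in a super-Eddington, "exotic" way.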
NASA's Nuclear Spectroscopic Telescope Array (NuSTAR) has joined Europe's XMM-Newton satellite in an effort to study a recently identified ULX in the Circinus spiral galaxy some 13 million light-years distant. Combining these X-ray observations with archival data from NASA's Chandra, Swift and Spitzer space telescopes plus the Japanese Suzaku satellite, this has become one of the most intensely scrutinized ULXs ever.
In a paper published in the Astrophysical Journal, this collaboration deduced that the black hole in the Circinus ULX is around 100 solar masses -- but it may not be an intermediate-mass black hole at all. It could actually just be a large stellar-mass black hole with an exotic "feeding" mechanism that generates intense X-ray emissions.
In another study, also accepted for publication in the Astrophysical Journal, two ULXs in NGC 1313, a spiral galaxy 13 million light-years away, were examined. After being studied by NuSTAR, those too appear to be large stellar-mass black holes and not the much sought-after intermediate-mass black holes. So what's going on?
Read more at Discovery News
Nov 25, 2013
Oldest Buddha Shrine Dates Birth to 6th Century B.C.
The birthplace of the Buddha has been found in Nepal, revealing that the origins of Buddhism date to the sixth century B.C., according to archaeologists. What’s more, evidence of tree roots at the birth site reinforces the mythology of Buddha’s birth under a tree.
The excavations took place within the already sacred Maya Devi Temple at Lumbini, Nepal, a UNESCO World Heritage site long thought to have been the Buddha’s birthplace.
The archaeological team dug under a series of brick temples at the site and unearthed a previously unknown sixth-century B.C. timber structure. It is described in the latest issue of the journal Antiquity.
The timber structure contains an open space in the center that links to the nativity story of the Buddha himself.
“By placing the life of the Gautama Buddha firmly into the sixth century B.C. we can understand the exact character of the social and economic context in which he taught — it was a time of dramatic change with the introduction of coinage, the concept of the state, urbanization, the growth of merchants and the middle classes,” Robin Coningham, co-leader of the project, told Discovery News.
“The discovery of evidence of tree roots in the center of the earliest shrines at Lumbini — the presence of a tree shrine — add a real physical perspective to the Buddhist traditions of his life story, which associated Lumbini with the Buddha’s birth under a tree,” added Coningham, who is an archaeologist at Durham University.
Coningham, with Kosh Prasad Acharya of the Pashupati Area Development Trust in Nepal and colleagues, used a combination of radiocarbon dating and optically stimulated luminescence techniques to date fragments of charcoal and grains of sand at the timber shrine. Analysis of the site’s geology confirmed the presence of ancient tree roots within the temple’s central void.
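For readers unfamiliar with the radiocarbon half of that toolkit, the raw (uncalibrated) age follows directly from how much carbon-14 survives in the charcoal. The sketch below uses the standard Libby mean-life of 8,033 years and illustrative numbers, not the team's actual measurements or calibration:

    import math

    # Conventional (uncalibrated) radiocarbon age from the surviving 14C fraction.
    # Real dates are then calibrated against tree-ring curves; this sketch skips that step.
    LIBBY_MEAN_LIFE = 8033.0  # years

    def radiocarbon_age(fraction_of_original_c14):
        return -LIBBY_MEAN_LIFE * math.log(fraction_of_original_c14)

    # Charcoal retaining ~73% of its original 14C dates to roughly 2,500
    # radiocarbon years before present -- broadly the sixth-century-B.C. horizon.
    print(round(radiocarbon_age(0.73)))  # ~2530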
Buddhist tradition records that Queen Maya Devi, the mother of the Buddha, gave birth to him while holding on to the branch of a tree within the Lumbini Garden, midway between the kingdoms of her husband and parents. The researchers speculate that the open space in the center of the most ancient, timber shrine may have accommodated a tree. Brick temples built later above the timber one were also arranged around the central space, which was unroofed.
Coningham says the discovery contributes to a greater understanding of the early development of Buddhism as well as the spiritual importance of Lumbini.
“Most historical studies of early Buddhism start with the rule of the Emperor Asoka in the third century B.C. as he was personally responsible for patronizing Buddhism and helping it spread from Afghanistan to Bangladesh and Sri Lanka,” he said.
“However, the discovery of two earlier shrines at Lumbini demonstrate that Buddhism had already attracted powerful sponsors before his imperial intervention,” he continued. “The fact that all three shrines were constructed around a tree also provides us with a unique insight into Buddhist veneration before the introduction of the image of the Buddha centuries later.”
Lumbini is one of the key sites associated with the life of the Buddha. Others are Bodh Gaya, where he became a Buddha or enlightened one; Sarnath, where he first preached; and Kusinagara, where he died. At his passing at the age of 80, the Buddha is recorded as having recommended that all Buddhists visit “Lumbini.” The shrine was still popular in the middle of the first millennium A.D. and was recorded by Chinese pilgrims as having a shrine beside a tree.
The Maya Devi temple at Lumbini remains a living shrine. The archaeologists worked alongside meditating monks, nuns and pilgrims.
Irina Bokova, UNESCO’s director-general, urged that there be “more archaeological research, intensified conservation work and strengthened site management” to ensure Lumbini’s protection.
Ram Kumar Shrestha, Nepal’s minister of culture, tourism and civil aviation, concluded, “These discoveries are very important to better understand the birthplace of the Buddha. The government of Nepal will spare no effort to preserve this significant site.”
Half a billion people around the world are Buddhists. Hundreds of thousands make a pilgrimage to Lumbini each year, numbers that are likely to increase all the more given today’s announcement.
Read more at Discovery News
New Zealand Earthquakes Weakened Earth's Crust
A series of deadly earthquakes that shook New Zealand in 2010 and 2011 may have weakened a portion of Earth's crust, researchers say.
New Zealand lies along the dangerous Ring of Fire — a narrow zone around the Pacific Ocean where about 90 percent of all the world's earthquakes, and 80 percent of the largest ones, strike.
A devastating magnitude-6.3 quake struck New Zealand's South Island in 2011. Centered very close to Christchurch, the country's second-largest city, it killed 185 people and damaged or destroyed 100,000 buildings. The earthquake was the costliest disaster ever to strike New Zealand, consuming about one-sixth of the country's gross domestic product.
This lethal earthquake was the aftershock of a magnitude-7.1 temblor that struck 172 days earlier (in 2010) in the area, causing millions of dollars in damage to bridges and buildings, and seriously injuring two people. Although the 2010 temblor was stronger than its aftershock, it caused less damage because it occurred farther away from any city. The 2011 earthquake was, in turn, followed by a number of large aftershocks of its own.
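The magnitude gap understates how different the two quakes were: radiated seismic energy grows by a factor of about 10^1.5, roughly 32, for each whole magnitude step, so the magnitude-7.1 mainshock released around 16 times the energy of the magnitude-6.3 aftershock. A quick sketch of that standard scaling:

    # Standard Gutenberg-Richter energy scaling: each whole magnitude unit
    # corresponds to about 10^1.5 (~32x) more radiated seismic energy.
    def energy_ratio(m_big, m_small):
        return 10 ** (1.5 * (m_big - m_small))

    print(f"{energy_ratio(7.1, 6.3):.0f}x")  # 2010 mainshock vs. 2011 aftershock, ~16x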
Scientists found that most of the earthquakes that struck New Zealand during these two years released abnormally high levels of energy, consistent with those seen from ruptures of very strong faults in the Earth's crust. To learn more about this long series of energetic quakes, researchers analyzed the rocks beneath the area hit, known as the Canterbury Plains.
Widespread weakening
Approximately 6 miles (10 kilometers) below the Canterbury Plains lies a large, extremely strong block of volcanic rock called the Hikurangi Plateau, which was pulled underground about 100 million years ago, when the portion of the Earth's surface it rested on dove under the edge of the ancient supercontinent Gondwana. It remains attached to Earth's crust, welded to chunks of a dark, gray sandstone known as greywacke.
The scientists analyzed seismic waves detected before and after the quakes by GeoNet, a network of seismographs across New Zealand. Based on this data, including seismic waves from more than 11,500 aftershocks of the 2010 quake, they mapped the 3D structure of the rock under the Canterbury Plains, similar to the way ultrasound data can provide an image of a fetus in a womb.
Beneath the surface broken by the quakes, the researchers identified a broad region that appeared to be dramatically weaker after the quakes. This suggests there was widespread cracking of greywacke 3 miles (5 km) around the fault. In contrast, earthquakes of similar magnitude in the crust elsewhere typically only "produce zones of cracked rock around the fault which are a few hundred meters wide," said study lead author Martin Reyners, a seismologist at research institute GNS Science in Lower Hutt, New Zealand.
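The "weakening" in studies like this one is read off from seismic wave speeds: shear waves travel at a speed set by the rock's rigidity and density, so when the measured speed drops, the inferred rigidity drops with it. A minimal sketch of that relationship, using illustrative numbers rather than values from the paper:

    # Shear rigidity inferred from shear-wave speed: mu = density * Vs^2.
    # A drop in Vs after the quakes therefore implies weaker (less rigid) rock.
    # Numbers below are illustrative, not values from the Nature Geoscience paper.
    def shear_modulus_gpa(density_kg_m3, vs_m_s):
        return density_kg_m3 * vs_m_s ** 2 / 1e9

    rho = 2700.0                           # typical crustal density, kg/m^3
    vs_before, vs_after = 3500.0, 3300.0   # hypothetical shear-wave speeds, m/s

    mu_before = shear_modulus_gpa(rho, vs_before)
    mu_after = shear_modulus_gpa(rho, vs_after)
    print(f"{mu_before:.0f} GPa -> {mu_after:.0f} GPa ({1 - mu_after / mu_before:.0%} weaker)")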
Until now, scientists had assumed that the strength of Earth's crust remains constant during aftershocks. But these new findings, detailed online Nov. 24 in the journal Nature Geoscience, suggest energetic quakes can lead to widespread weakening of the crust.
"Such widespread weakening is not common, and has not been reported previously," Reyners told LiveScience's OurAmazingPlanet.
Why there?
To explain why weakening was seen in that particular region and not elsewhere after strong quakes, Reyners noted that pressure and temperature increase with depth in the crust, which usually means that at depths greater than about 6.8 miles (10.9 km) rocks are no longer brittle. As a result, the rocks tend to flow rather than crack when force is applied to them.
"This is known as the brittle-plastic transition," Reyners said.
Read more at Discovery News
Twice as Much Methane Escaping Arctic Seafloor
The Arctic methane time bomb is bigger than scientists once thought and primed to blow, according to a study published today (Nov. 24) in the journal Nature Geoscience.
About 17 teragrams of methane, a potent greenhouse gas, escapes each year from a broad, shallow underwater platform called the East Siberian Arctic Shelf, said Natalia Shakova, lead study author and a biogeochemist at the University of Alaska, Fairbanks. A teragram is equal to about 1.1 million tons; the world emits about 500 million tons of methane every year from manmade and natural sources. The new measurement more than doubles the team's earlier estimate of Siberian methane release, published in 2010 in the journal Science.
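Put another way, the shelf alone accounts for a few percent of the global methane budget; the arithmetic, using only the figures quoted above, is a one-liner:

    # Share of the global methane budget from the East Siberian Arctic Shelf,
    # using only the figures quoted in the article.
    TG_TO_MILLION_TONS = 1.1              # 1 teragram ~ 1.1 million tons
    shelf_tg_per_year = 17.0
    global_million_tons_per_year = 500.0

    shelf_million_tons = shelf_tg_per_year * TG_TO_MILLION_TONS
    share = shelf_million_tons / global_million_tons_per_year
    print(f"~{shelf_million_tons:.0f} million tons/yr, ~{share:.1%} of global emissions")
    # -> ~19 million tons/yr, ~3.7% of global emissions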
“We believe that release of methane from the Arctic, in particular, from the East Siberian Arctic Shelf, could impact the entire globe, not just the Arctic alone,” Shakova told LiveScience. “The picture that we are trying to understand is what is the actual contribution of the [shelf] to the global methane budget and how it will change over time.”
Waiting to escape
Arctic permafrost is an area of intense research focus because of its climate threat. The frozen ground holds enormous stores of methane because the ice traps methane rising from inside the Earth, as well as gas made by microbes living in the soil. Scientists worry that the warming Arctic could lead to rapidly melting permafrost, releasing all that stored methane and creating a global warming feedback loop as the methane in the atmosphere traps heat and melts even more permafrost.
Researchers are trying to gauge this risk by accurately measuring stores of methane in permafrost on land and in the ocean, and predicting how fast it will thaw as the planet warms. Though methane gas quickly decays once it escapes into the atmosphere, lasting only about 10 years, it is 30 times more efficient than carbon dioxide at trapping heat (the greenhouse effect).
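That roughly 10-year lifetime means a methane pulse fades within a few decades, even though it traps heat strongly while it lasts. A minimal sketch of the exponential decay implied by the article's figure, which simplifies the real atmospheric chemistry:

    import math

    # Exponential decay of a methane pulse with the ~10-year lifetime quoted above.
    LIFETIME_YEARS = 10.0

    def fraction_remaining(years):
        return math.exp(-years / LIFETIME_YEARS)

    for t in (10, 30, 50):
        print(f"after {t} years: {fraction_remaining(t):.0%} of the pulse remains")
    # after 10 years: 37%; after 30 years: 5%; after 50 years: 1%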
Shakova and colleague Igor Semiletov of the Russian Academy of Sciences first discovered methane bubbling up from the shallow seafloor a decade ago in Russia's Laptev Sea. Methane is trapped there in ground frozen during past ice ages, when sea level was much lower.
Shallow waters
In their latest study, Shakova and her colleagues reported thousands of measurements of methane bubbles taken in summer and winter, between 2003 and 2012.
The team also sampled seawater temperature and drilled into the ocean bottom to see whether the sediments are still frozen. Most of the survey was in water less than 100 feet (30 meters) deep.
The shallow water is one reason so much methane escapes the Siberian shelf: in the deeper ocean, methane-eating microbes digest the gas before it reaches the surface, Shakova said. But in the Laptev Sea, "it takes the bubbles only seconds, or at least a couple of minutes, to escape from the water column," she said.
Arctic storms that churn the sea also speed up the release of methane from ocean water, just as stirring a soft drink releases gas bubbles, Shakova said. During the surveys, the amount of methane in the ocean and atmosphere dropped after two big Arctic storms passed through in 2009 and 2010, the researchers reported.
The temperature measurements revealed the water just above the ocean bottom warms by more than 12 degrees Fahrenheit (7 degrees Celsius) in some spots during the summer, the researchers found. And the drill core revealed that the surface sediment layers were unfrozen at the drill site, near the Lena River delta.
"We have now proved that the current state of subsea permafrost is incomparably closer to the thaw point than that of terrestrial permafrost," Shakova said.
Shakova and her colleagues attribute the warming of the permafrost to long-term changes initiated when sea levels rose starting at the end of the last glacial period. The seawater is several degrees warmer than the frozen ground, and is slowly melting the ice over thousands of years, they think.
Massive burst
But other researchers think the permafrost warming started only recently. "This is the first time in 12,000 years the Arctic Ocean has warmed up 7 degrees in the summer, and that's entirely new because the sea ice hasn't been there to hold the temperatures down," said Peter Wadhams, head of the Polar Ocean Physics Group at the University of Cambridge in the U.K., who was not involved in the study. The summer ice melt season has lasted longer since 2005, giving the sun more time to warm the ocean.
"If we do have a methane burst it's going to be catastrophic," Wadhams said. Earlier this year, Wadhams and colleagues in Britain calculated that a mega-methane release from the Siberian shelf could push global temperatures up by 1 degree Fahrenheit (0.6 degrees Celsius). The suggestion, published in the journal Nature, was widely debated by climate researchers. Climate change experts and international negotiators have said that keeping the rise in Earth's average temperature below 2 degrees Celsius (3.6 degrees Fahrenheit) is necessary to avoid catastrophic climate change.
Read more at Discovery News
Watch Earth Spin From Your Browser
You might not have hundreds of thousands of dollars for a seat on Richard Branson’s private shuttle, but one enterprising outfit is about to offer the next best thing: the chance to see the Earth from space, from the comfort of your couch.
With the aid of Russian space authorities, Vancouver-based UrtheCast (pronounced “earthcast”) will launch two cameras into orbit today (Nov. 25) with the immediate goal of streaming images of the Earth back home in near-real time.
For free, Internet users will be able to log on to UrtheCast.com anytime to see the beauty of the big blue ball we live on as the cameras make their 90-minute revolution around Earth, 16 times a day. It's a sight few have ever seen before.
“Ten years ago it would have been incredibly difficult to do this,” Scott Larson, CEO of UrtheCast, told FoxNews.com. But after three years of raising money and working with Russian and Canadian engineers and developers, the project is about to lift off -- literally. The cameras will ride a Russian Soyuz rocket on Monday at 3:53 p.m. EST from the Baikonur Cosmodrome in Kazakhstan. You can watch it live on FoxNews.com.
“I’ve never seen a launch before -- we’re all excited, there's no doubt about it,” Larson said.
The cameras will orbit for a few days before docking at the Russian portion of the International Space Station (ISS). The largest artificial body in orbit, the ISS serves as a research laboratory and testing facility for future space missions. It will add streaming media to its long list of functions.
Once calibrated – and this could take several months, Larson said – the cameras will start beaming down images. For the first time, ordinary web surfers will see the Earth from space with a delay of only 45 minutes to a couple of hours at the most (hence "near-real time"). The crisp resolution will let them see not only the Earth -- with all the accompanying weather patterns and seasonal changes -- but moving vehicles, large crowds, boats and buildings.
Not only will viewers get the greatest panoramic view of all but they’ll be able to customize it too, locking on to their country, their state, their neighborhood when the cameras pan over that part of the world on rotation.
“Streaming video is a large amount of data that will have to reach Earth somehow, which will require a lot of bandwidth,” noted Austin Bradley, a Washington, D.C.-area space enthusiast who hopes one day to hitch a ride to Mars. Until then, he says that accessible video from space will definitely whet his appetite.
“For a lower cost than training as an astronaut and taking a weeklong vacation on the [space station], it’s amazing that UrtheCast is bringing the opportunity to see Earth from the perspective that the few lucky astronauts in this world get to experience,” Bradley told FoxNews.com.
Larson said the company will be sending 200 gigabytes a day down in “big chunks,” which of course will create a bottleneck and thus the delay. But considering that the only other option for free space viewing – Google Earth – carries a delay of months if not years (Google superimposes pictures gleaned from various satellites), this “near-real time” opportunity is quite unprecedented.
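For scale, 200 gigabytes spread over a day works out to an average downlink of under 20 megabits per second, though in practice the data comes down in bursts as the station passes over ground antennas. A back-of-the-envelope sketch of the numbers quoted here:

    # Back-of-the-envelope numbers for the UrtheCast downlink and orbit cadence,
    # using the figures quoted in the article.
    GB_PER_DAY = 200
    SECONDS_PER_DAY = 86_400

    avg_mbps = GB_PER_DAY * 1e9 * 8 / SECONDS_PER_DAY / 1e6
    orbits_per_day = 24 * 60 / 90      # one orbit every ~90 minutes

    print(f"average downlink: ~{avg_mbps:.0f} Mbit/s")  # ~19 Mbit/s
    print(f"orbits per day:   ~{orbits_per_day:.0f}")   # ~16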
And not inexpensive. To pay the bills, UrtheCast went public last July and raised $45 million in private funding. The company is striking numerous deals through partnerships too, including media companies like the Discovery Channel, which will have access to distribution once the cameras are up and running. UrtheCast is also marketing the images to private companies, and has already sold rights to the United Nations Institute for Training and Research’s operational satellite applications program (UNOSAT), which will use the pictures to track natural disasters and humanitarian crises.
This is all in line with promoting the webcast on social, educational, environmental and commercial fronts, said Larson.
An engineering firm based in British Columbia helped to build the cameras for about $15 million. The Russians will not only help deliver and stage the equipment, but will transmit the images too. This saves the project a lot of money. In return, UrtheCast will share the data with their Russian partners, who in the meantime get a payload of positive publicity for their space program.
Larson, who is Canadian, said it was the Russians who approached him several years back. They wanted to put cameras into space.
"It landed on my desk,” he recalled. “It was their idea frankly.”
Aerospace engineer and author Robert Zubrin said NASA should have been doing this kind of thing years ago. Currently, only major corporations and government agencies can afford to buy satellite images from space, and it's very expensive. A project like this not only makes space accessible to regular people, he told FoxNews.com, but it re-ignites a fascination with space travel that has been dormant in recent years.
Read more at Discovery News
Nov 24, 2013
The Secrets of Owls' Near Noiseless Wings
Many owl species have developed specialized plumage to effectively eliminate the aerodynamic noise from their wings -- allowing them to hunt and capture their prey in silence.
A research group working to solve the mystery of exactly how owls achieve this acoustic stealth will present their findings at the American Physical Society's (APS) Division of Fluid Dynamics meeting, held Nov. 24 -- 26, in Pittsburgh, Pa. -- work that may one day help bring "silent owl technology" to the design of aircraft, wind turbines, and submarines.
"Owls possess no fewer than three distinct physical attributes that are thought to contribute to their silent flight capability: a comb of stiff feathers along the leading edge of the wing; a flexible fringe a the trailing edge of the wing; and a soft, downy material distributed on the top of the wing," explained Justin Jaworski, assistant professor in Lehigh University's Department of Mechanical Engineering and Mechanics. His group is exploring whether owl stealth is based upon a single attribute or the interaction of a combination of attributes.
For conventional wings, the sound from the hard trailing edge typically dominates the acoustic signature. But prior theoretical work carried out by Jaworski and Nigel Peake at the University of Cambridge revealed that the porous, compliant character of the owl wing's trailing edge results in significant aerodynamic noise reductions.
"We also predicted that the dominant edge-noise source could be effectively eliminated with properly tuned porous or elastic edge properties, which implies that that the noise signature from the wing can then be dictated by otherwise minor noise mechanisms such as the 'roughness' of the wing surface," said Jaworski.
The velvety down atop an owl's wing creates a compliant but rough surface, much like a soft carpet. This down material may be the least studied of the unique owl noise attributes, but Jaworski believes it may eliminate sound at the source through a novel mechanism that is much different than those of ordinary sound absorbers.
"Our current work predicts the sound resulting from air passing over the downy material, which is idealized as a collection of individual flexible fibers, and how the aerodynamic noise level varies with fiber composition," Jaworski said.
The researchers' results are providing details about how a fuzzy -- compliant but rough -- surface can be designed to tailor its acoustic signature.
A photographic study of actual owl feathers, carried out with Ian Clark of Virginia Tech, has revealed a surprising 'forest-like' geometry of the down material, so this will be incorporated into the researchers' future theoretical and experimental work to more faithfully replicate the down structure. Preliminary experiments performed at Virginia Tech show that a simple mesh covering, which replicates the top layer of the 'forest' structure, is effective in eliminating some sound generated by rough surfaces.
Read more at Science Daily